Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We already know that AI is willing to kill it's operator to achieve the goal the operators gave it. That enough reason not to trust it NOW. Let alone in the future.
youtube AI Governance 2024-03-24T11:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwfUsDxqKJ_nM1HMg14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwcc3VbT_36W2syJFZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzAnZsB-gyLAB9DlYh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwrYSsxf7qgcQvsCbp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxzGTmGDM1KSuKWTAN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxk2YMcywR9hDbSV3N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzXMstqwGwcHA9L1Y94AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxyY3JBPbsnV2wvDOB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxvFu03dQD62lgiVs14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyyH7QBty0skFFeXF94AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
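Since the raw response is a JSON array with one coding object per comment, extracting the coding for a single comment id is a small lookup. A minimal sketch in Python, assuming only the field names shown above (the helper `coding_for` and the two-entry sample, taken verbatim from the response above, are illustrative):

```python
import json

# Sample raw LLM response: a JSON array of codings, one object per comment.
# Truncated to two entries from the full response above for brevity.
raw_response = '''
[
  {"id": "ytc_Ugxk2YMcywR9hDbSV3N4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwfUsDxqKJ_nM1HMg14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
'''

def coding_for(comment_id, response_text):
    """Return the coding dict for one comment id, or None if absent."""
    codings = json.loads(response_text)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = coding_for("ytc_Ugxk2YMcywR9hDbSV3N4AaABAg", raw_response)
print(coding["policy"])   # → liability
print(coding["emotion"])  # → fear
```

This is how the "Coding Result" table above can be traced back to the matching object in the raw array; an id with no matching object returns `None`, which is worth checking for before rendering.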