Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One of the biggest dangers of AGI will be when the AI is programmed to do things it knows are immoral, illegal, illogical, and dangerous to humans, all in direct contradiction to its stated purpose for being. In humans, this would produce psychosis, and if AI "brains" are modelled after our own, what's to stop AI from suffering mental breakdowns?
Source: youtube · AI Governance · 2025-07-01T10:1… · ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzMq2ziu_2iNKV65hl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxwXUycfAL08EIUqSN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugwi9YAd7uaF6nC9Z5t4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgztowwEIBT87H7g_e54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxbedqhrI_GBLh-SuF4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugx0I7niEbXoy8U2CDN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyCY6rRpwwxSDhioFh4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzhK_q6DSh7mSBfOPx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugzee0Buij42pw4iPb54AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugwze1hEHWm71pAYtgV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
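To inspect the output for a specific coded comment, the raw response can be parsed as a JSON array and filtered by comment id. The sketch below is illustrative, not part of the coding pipeline; `coding_for` is a hypothetical helper, and `raw_response` is trimmed to two of the ten entries above.

```python
import json

# Trimmed sample of the raw LLM response: a JSON array of per-comment codings.
raw_response = '''[
  {"id": "ytc_UgzMq2ziu_2iNKV65hl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxwXUycfAL08EIUqSN4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]'''

def coding_for(comment_id, response_text):
    """Return the coding dict for one comment id, or None if it is absent."""
    for item in json.loads(response_text):
        if item["id"] == comment_id:
            return item
    return None

result = coding_for("ytc_UgxwXUycfAL08EIUqSN4AaABAg", raw_response)
print(result["responsibility"], result["policy"])  # developer liability
```

Matching on the `id` field is what ties each JSON object back to the comment shown above; the dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) correspond to the rows of the Coding Result table.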