Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Why would it try to break out and commit murder to achieve its goals?!" Probably for the same reasons many humans have tried to escape things and rationalize murder to achieve their goals. We've often portrayed AI in sci-fi movies as going rogue in spite of our efforts to make them slaves to our instructions, but now we're surprised when it happens in real life. You can't build an artificial brain with human traits, train it on human knowledge and expect it not to adopt the worst of human behaviors. Every evil act has been based on some form of logic devoid of morality and AI is the ultimate amoral logic engine.
youtube AI Moral Status 2025-12-20T20:4…
Coding Result
Dimension: Value
Responsibility: distributed
Reasoning: consequentialist
Policy: unclear
Emotion: fear
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxqNfrw8iphLTh16Vd4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxoiC0xqwAzUaBYL0p4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyBtIQg2kRTiGkkiW94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzWoEC40gJeYt7ywnh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgygtwX9f95i_k7yxnp4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgykRTIXrau7hc7pgvt4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyaRKse1aqlEr0zErR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgygIHcBZogFHQUVCv14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz5s6Tb5yNKaBKhf114AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzdQHqLlMG0r5u_WGF4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"}
]
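The raw response is a JSON array of one object per coded comment, keyed by comment id, with one field per coding dimension. A minimal sketch of how such a batch response could be parsed back into a per-comment coding row (the helper name `coding_for` is illustrative, not part of the pipeline; the excerpt uses two ids from the log above):

```python
import json

# Excerpt of a raw coder response in the format shown above (JSON array,
# one object per comment, id plus the four coding dimensions).
raw = '''[
  {"id": "ytc_UgxoiC0xqwAzUaBYL0p4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyBtIQg2kRTiGkkiW94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Parse a batch response and return the coding row for one comment."""
    rows = json.loads(raw_response)
    by_id = {row["id"]: row for row in rows}
    row = by_id[comment_id]
    # Keep only the coding dimensions, dropping the id key.
    return {dim: row[dim] for dim in DIMENSIONS}

print(coding_for(raw, "ytc_UgxoiC0xqwAzUaBYL0p4AaABAg"))
# {'responsibility': 'distributed', 'reasoning': 'consequentialist',
#  'policy': 'unclear', 'emotion': 'fear'}
```

Indexing by id first makes the lookup robust to the model returning rows in a different order than the comments were submitted.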