Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No matter how you look at it, I think we're already royally f**ked. When a sentient AI thinks logically, it will kill us, because humans are a huge threat. Now, if you train AI to have empathy, then it could spare us. But we all know how human emotions work, so when it feels empathy, it also feels anger, so when it's angry, it could wipe us out either way, so.. yeah.. GG.
youtube AI Governance 2025-06-25T12:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzYM8NwyoCis42zvLl4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgxQbsfODWU5XpzZkgV4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgznGm6NQDj9xm53-Gl4AaABAg", "responsibility": "developer", "reasoning": "mixed",            "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzlpBzZuWiIY6AbCLR4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgwgSJMILOCvFfFZpF14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgyN0hefIFYjv2lO7TJ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgzbEYlhdXIQIXsE2kx4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "unclear",       "emotion": "outrage"},
  {"id": "ytc_UgxH4fAj6jUO6DETUNp4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgwXFVLaymu09bSVgld4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_Ugx_c3fE7uLha_PxKB94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",       "emotion": "outrage"}
]
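The raw response is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of how such a batch response could be parsed and a single comment's coding looked up (the variable names are illustrative, not from the tool itself; the two entries are taken from the response above):

```python
import json

# Raw LLM response: a JSON array of coded comments, one object per comment.
raw = '''[
  {"id": "ytc_UgwgSJMILOCvFfFZpF14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzYM8NwyoCis42zvLl4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Index the batch by comment id for O(1) lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# Retrieve the coding that corresponds to the comment displayed above.
coded = codes["ytc_UgwgSJMILOCvFfFZpF14AaABAg"]
print(coded["responsibility"], coded["emotion"])  # ai_itself resignation
```

This matches the Coding Result shown above: the entry with that id carries responsibility `ai_itself` and emotion `resignation`.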