Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The question about how devastating AI could be ultimately lies with what we entrust to the internet. As long as AI has insufficient ways to interface with the "real world" outside of the internet, it could create devastating problems that might cause billions to die, but it most likely couldn't exterminate us, only try to manipulate us into destroying ourselves. If it got access to weapons, manufacturing and such, it would have a decent shot at it. The second biggest problem, however, is the population's reliance on AI. If people keep blindly following AI because of how convenient it is, it won't even have to fight us, because many of us will join it willingly and just submit, having given up our ability to think independently.
youtube AI Moral Status 2026-01-31T23:3…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzooU7og5yiZubruLx4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_Ugyt9K0L7cxKO-LtNQx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgwFZpniVLIAvvKnvXV4AaABAg", "responsibility": "developer",   "reasoning": "mixed",            "policy": "liability",     "emotion": "mixed"},
  {"id": "ytc_UgxoQDkGmIIz7fvrIaZ4AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugwdm7BRmOHSE3wfMOp4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgyzzrOF9ifuU6xXw9B4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgwaLTEVBUIwltc2tlh4AaABAg", "responsibility": "government",  "reasoning": "contractualist",   "policy": "regulate",      "emotion": "approval"},
  {"id": "ytc_UgwXhiZUv86HiX1A0lB4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgwfFFcWNtrZ53U0aER4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "outrage"},
  {"id": "ytc_UgzcCxbF9AtqAN0naUt4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",          "emotion": "approval"}
]
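The raw response is a JSON array with one code object per comment id. A minimal sketch of how such a response could be parsed and validated, assuming Python and a dimension vocabulary inferred from the values that appear in this export (the full codebook may include additional categories):

```python
import json

# Allowed values per coding dimension, inferred from the codes visible
# in this export; extend these sets if the codebook defines more.
ALLOWED = {
    "responsibility": {"developer", "user", "government", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of per-comment codes) into a
    dict keyed by comment id, rejecting any out-of-vocabulary value."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

With the response above, `parse_codes(raw)["ytc_Ugyt9K0L7cxKO-LtNQx4AaABAg"]` would yield the distributed/consequentialist/regulate/fear record shown in the Coding Result table.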