Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't like the idea of AI being anywhere near the US "Ministry of War" or the Pentagon. AI makes mistakes far more often than we are being told and it enacts them instantaneously, far outpacing our ability to intervene. What if it launched a nuclear weapon by mistake?
youtube 2026-03-02T20:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  government
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzOFC8iVRt47fDElV94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "approval"},
  {"id": "ytc_UgzoEmwn6Iyl6KyyAr14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwPrlDfdj4D7wZf33l4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwar9lbhQD_z-1qksV4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgxXt3x7WhhJHe5cdLJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxGJ7aOL-sCq5doCjJ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzKTlVzhI0rj9cXfhZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxwTm3T-pfsVLa5tNV4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugwa1oENs33eCP9A2aN4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz6BIUf5AOR_vWrTdN4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
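The raw response is a JSON array covering a whole batch of comments, so recovering one comment's coding means parsing the array and matching on `id`. A minimal sketch of that lookup, assuming only the field names visible in the response above; the `lookup_coding` helper and the truncated example array are illustrative, not part of the pipeline:

```python
import json

# Two records copied from the raw response above (batch truncated for brevity).
RAW_RESPONSE = """[
  {"id": "ytc_UgxwTm3T-pfsVLa5tNV4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzOFC8iVRt47fDElV94AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "approval"}
]"""

# The four coding dimensions shown in the result table above.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def lookup_coding(raw_json: str, comment_id: str) -> dict:
    """Parse a batched LLM response and return the coding for one comment.

    Raises if the comment is absent or a dimension is missing, since a
    partially coded record should not be written to the results table.
    """
    records = json.loads(raw_json)
    by_id = {r["id"]: r for r in records}
    if comment_id not in by_id:
        raise KeyError(f"no coding returned for {comment_id}")
    record = by_id[comment_id]
    missing = DIMENSIONS - record.keys()
    if missing:
        raise ValueError(f"{comment_id} missing dimensions: {sorted(missing)}")
    return record

coding = lookup_coding(RAW_RESPONSE, "ytc_UgxwTm3T-pfsVLa5tNV4AaABAg")
print(coding["emotion"])  # fear
```

Matching the record above against the result table (government / consequentialist / ban / fear) is a quick sanity check that the stored coding really came from this raw output.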