Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The truth is, we’re building AI simply because if we don’t they will. And that is the terrifying thing. We have already lost control. Because how do we stop if we don’t they will? Say a catastrophe happens, now we have to stop building AI. But if we don’t, they will and then they will win. So we have to continue, catastrophe or no. Where is the end of if I don’t they will?
Source: YouTube · AI Governance · 2025-12-04T16:3… · ♥ 2
Coding Result
Dimension: Value
Responsibility: distributed
Reasoning: consequentialist
Policy: none
Emotion: fear
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyWSOb65xLaLFvSQSN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyzNpjhizW_lNZuBMp4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwkjc4M1L79sxwQRNd4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwt44sB9-fBFi4-kSB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzHovDIDzDfQ-zgFp54AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyAvYlFpB6OmsRNvhJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwucTteIj2AJ0BTPQJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxUGl9uMX0fpc7MU4Z4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwn-jdIvoLBiautlyV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgweSvlBT2Er0xdFKxh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
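A raw response like the one above still has to be parsed and checked before the codes reach the dashboard. A minimal validation sketch follows; the field names come from the JSON above, but the allowed value sets are inferred only from the labels observed in this batch (the real code frame may include more), and `validate_records` is a hypothetical helper name, not part of any actual pipeline.

```python
import json

# Controlled vocabularies inferred from the values seen in this batch.
# Assumption: the real code frame may define additional labels.
ALLOWED = {
    "responsibility": {"ai_itself", "government", "company",
                       "developer", "none", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"fear", "outrage", "mixed", "resignation"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record passes when it is a dict with an "id" key and every
    coded dimension holds a value from its controlled vocabulary.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with one valid and one out-of-vocabulary record.
raw = (
    '[{"id": "ytc_a", "responsibility": "distributed",'
    ' "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},'
    ' {"id": "ytc_b", "responsibility": "alien",'
    ' "reasoning": "virtue", "policy": "ban", "emotion": "fear"}]'
)
print([rec["id"] for rec in validate_records(raw)])  # ['ytc_a']
```

Rejecting rather than repairing out-of-vocabulary labels keeps the coded table clean; rejected records can be re-queued for another LLM pass.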