Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have a question — I’m just a teenager, so forgive me if it sounds dumb. So, if AI becomes powerful enough to threaten human intelligence, then humans might not be in a position to stop it. There are these things called "AI agents" — they’re different from regular AI, and I think a personalised AI agent that understands these threats could detect them before they happen. Such an agent could potentially hold complete control over the particular threat, because AI might be better at managing and understanding other AI systems than humans. Since humans can't always comprehend what happens when "data" interacts with artificial intelligence, maybe AI agents can do that better than we can. Anyway, that’s enough rambling from me.
youtube AI Governance 2025-06-18T18:5…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          industry_self
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyoxxDeAlD3vB1-3u54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugx3f--JUh3x247_4xp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz2J89HWMnCKyl0lRV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyTrNeqek0Bkedvhnp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw4Z147BvY8In4ZEE14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxg6bWqtc3kTg5qsqB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxO_ee5wRAekfh1B5F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxlnZlqMxb14lwEK-t4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugz1kjdMplqDbH2Kc5h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugze9Lib7ntCLuvWjZN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"}
]
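To inspect the raw response for a particular coded comment, the JSON array can be parsed and indexed by comment id. The sketch below is a minimal example, assuming the raw response is available as a string; the helper name `index_codings` is hypothetical, not part of any pipeline API, and only two of the ten records are excerpted here for brevity.

```python
import json

def index_codings(raw_response: str) -> dict:
    """Hypothetical helper: parse a raw LLM batch response (a JSON array
    of per-comment coding records) and index the records by comment id."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# Two records excerpted from the raw response above.
raw = '''[
  {"id": "ytc_Ugze9Lib7ntCLuvWjZN4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgyoxxDeAlD3vB1-3u54AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

codings = index_codings(raw)
# Look up the coding for the comment shown above by its id.
print(codings["ytc_Ugze9Lib7ntCLuvWjZN4AaABAg"]["policy"])  # industry_self
```

A lookup like this makes it easy to verify that the dimension values shown in the Coding Result table match the corresponding record in the raw model output.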