Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
I'm 24 and about to crap my pants for this idea of AI taking over like a real life terminator. Why do we need AI this much!? Can't we just use AI for small things, instead of possibly trusting our lives to their hands completely??!! Also what good does AI actually do with so much power? I hear mostly harmful stuff about AI, like how some people will generate a full nude pic about someone without their consent, or asking AI to teach them how to manipulate and successfully scam someone. Or then they just abuse AI in other ways... Because even if AI didn't tell straightforward how to manipulate someone, I don't see why also a little human being couldn't trick AI as a goal to cause harm. Also if AI is supposed to be clever and know even more than what we humans already know, I don't see why AI couldn't develop some sort of intelligent understanding about morals, manipulation and just generally about how fucked up our society might be. So also for this reason I think it's way too risky as a human to just give orders to AI to not attack people or take over, if we also as a human can't promise AI back that kind of safety. Even if they don't have physical feelings, it's still a high intelligence we are against with...
youtube AI Harm Incident 2025-09-12T02:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyDvrUM_CjHGW8GmK54AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgxlwSrBylmxVvkjAeN4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgwmGpBsXxbFcjXff5J4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgzQwVf0LsEB7fC3xBl4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgxuVuK9qTacj8joICJ4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgwpJwVdgX32nIvjS694AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_Ugy4LbCrE2kkGbCJEaR4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugy01I7I5GlxWyaBC7d4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgwdJvd7EfNQdxC5bzt4AaABAg", "responsibility": "distributed", "reasoning": "virtue",           "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgzOqyyU9Vymm_6K3fN4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "unclear",       "emotion": "mixed"}
]
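A minimal sketch of how a raw response like the one above might be parsed and looked up by comment id. This assumes the model returned a valid JSON array of objects, each carrying an `id` plus the four coding dimensions; the `codes_by_id` helper name is illustrative, not part of any pipeline shown here.

```python
import json

# A shortened stand-in for the raw LLM response string (same shape as above).
raw = (
    '[{"id":"ytc_UgxlwSrBylmxVvkjAeN4AaABAg",'
    '"responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"}]'
)

def codes_by_id(raw_response: str) -> dict:
    """Parse the model's JSON array and index each record by comment id."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

codes = codes_by_id(raw)
print(codes["ytc_UgxlwSrBylmxVvkjAeN4AaABAg"]["policy"])  # → regulate
```

In practice the raw string may also contain markdown fences or trailing text around the JSON, so a production parser would strip those before calling `json.loads`.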