Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is the greatest stupidity of humanity. It will destroy us. We point out the positive things, but the negative ones are forgotten. What I see is that we are all playing a role. Whether it works or not, the chances are too high that something bad will happen. AI has no feelings. You can't trust a psychopath who has no feelings and presents itself as something good. Because the greatest threat to humanity is something that pretends to be good but in reality has evil in it. And we cannot know or recognize its intention. AI doesn't speak. It's data. We have gone further and further in the name of progress. I thought it was all the rich and the governments, but in truth it is progress that manipulates us and makes people act irrationally. No country wants to fall behind on AI, because whoever has it will benefit and develop further, like the USA. They don't want to lose to China, which is why they want more and more. It's like a fight. The technology and progress of the state is more important than anything. What many forget are the enormous risks. The more intelligent AI is, the more dangerous it becomes, for everyone. And the worst thing is that you can't stop it, because nobody wants to lose. That is the proof of the stupidity of the people: they lose sight of the people and see only power and money. That's how it has always been. The government doesn't see you as a human being; as a human being among humans, you value people completely differently.
youtube AI Harm Incident 2025-09-07T01:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           ban
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugy5Q7MKqA6LfE93Ra14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxYZZ5FYFxxtVKwcAh4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgyQPBk0m6NW92zs0x14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz6HILXx3uU4ODHl0h4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgymSaPIrKMBzlj4vLd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugw5Yc-4qPqPn688pnN4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwSrf-APjKWRU80wMl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzxY85W6BQy-fxaeaV4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxmm7R0bewe9Qa8VKh4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz34ol7aFwkbUHsUpl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
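A raw batch response like the one above has to be parsed and checked against the codebook before the per-comment coding results can be trusted. The sketch below is one minimal way to do that, assuming the allowed values are exactly those seen in this batch (the real codebook may define more); the entry id in the usage example is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the responses shown
# above -- an assumption, not the project's authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval", "unclear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only entries whose
    values for every dimension are within the allowed sets."""
    entries = json.loads(raw)
    return [
        e for e in entries
        if all(e.get(dim) in allowed for dim, allowed in ALLOWED.items())
    ]

# Usage with a hypothetical entry id: the first object is in-schema,
# the second has an out-of-schema responsibility value and is dropped.
raw = (
    '[{"id":"ytc_example1","responsibility":"developer",'
    '"reasoning":"deontological","policy":"ban","emotion":"fear"},'
    '{"id":"ytc_example2","responsibility":"alien",'
    '"reasoning":"deontological","policy":"ban","emotion":"fear"}]'
)
valid = parse_codings(raw)
print(len(valid))  # 1
```

Dropping out-of-schema entries (rather than coercing them to "unclear") keeps the failure visible, so malformed batches can be re-run instead of silently polluting the coded dataset.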