Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
At this current point, nobody has developed an actual conscious AI that can think for itself. The things said here were likely due to misinterpretation and being given too much power to work with. As of now, AI is really just complex pattern recognition. If we ever wanted to shut down AI, all it would take would be an axe and a server room.
YouTube AI Harm Incident 2025-09-12T02:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          industry_self
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyDvrUM_CjHGW8GmK54AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgxlwSrBylmxVvkjAeN4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgwmGpBsXxbFcjXff5J4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgzQwVf0LsEB7fC3xBl4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgxuVuK9qTacj8joICJ4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgwpJwVdgX32nIvjS694AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_Ugy4LbCrE2kkGbCJEaR4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugy01I7I5GlxWyaBC7d4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgwdJvd7EfNQdxC5bzt4AaABAg", "responsibility": "distributed", "reasoning": "virtue",           "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgzOqyyU9Vymm_6K3fN4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "unclear",       "emotion": "mixed"}
]
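A raw response like the one above should be parsed and validated before its records are stored as coding results. The sketch below shows one way to do this in Python; the allowed category sets are inferred only from the values that appear in this export and the real codebook may define more.

```python
import json

# Allowed categories per coding dimension, inferred from the values seen
# in this export; an actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed", "unclear"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose value for
    every coding dimension is a known category."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in cats for dim, cats in ALLOWED.items())
    ]

# Example record taken from the response above.
raw = ('[{"id":"ytc_Ugy4LbCrE2kkGbCJEaR4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"industry_self","emotion":"indifference"}]')
print(parse_coding(raw))
```

Rejecting records with unknown categories (rather than coercing them to "unclear") keeps malformed model output visible, so a re-prompt can be triggered instead of silently storing bad codes.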