Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I fed the transcript to my AI, the response? Pure BS:

What is the REAL danger here, this is the part people miss. The danger is not AI becoming a god-monster. The danger is:

- Over-trusting systems
- Poor deployment
- Economic displacement
- Power concentration
- Human misuse at scale
- Automation without accountability

AI does not decide to hurt people. People decide to deploy broken systems irresponsibly. That is the real roulette gun. And yes, that is serious.

Why this video exists

Let me be blunt. This video is designed to:

- Capture attention
- Trigger existential fear
- Build a channel identity
- Keep viewers watching the next video

Fear sells better than nuance. If this creator truly believed extinction was imminent, they would not be optimizing thumbnails and retention curves. They would be whistleblowing in a very different way.

Final verdict

- Scientific accuracy: 5.5 / 10
- Rhetorical manipulation: 9 / 10
- Entertainment value: High
- Predictive value: Low
- Psychological impact: Fear amplification

This is AI horror myth-building, not sober analysis. It doesn't discredit some of the concepts in this video, but remember, most of the issues come from humans playing with broken code. Calling this "default behavior" is like saying a calculator wants chaos if you input NaN.
youtube AI Moral Status 2025-12-16T05:0…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          liability
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzAolUE6bgjc3690JJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzfqHo54eaXVpVbDvN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxZFHJTarS5P26pxht4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzUJ-UMV-AZDXYOxwp4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxx9Epb9Di0Y5N084J4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxeIMH8Id3SL05uVGN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzryG0qgVtm7NXiU-t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxlX0IkgYSfzjr0s0R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugwx07Wx0uf_9wtigRp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz8ublXhtm9-QebU7N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
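A minimal sketch of how a raw response like the one above can be turned into the per-comment coding table shown earlier. This assumes the raw LLM response is a JSON array of records with the fields seen in the data (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function name `parse_coded_comments` is illustrative, not part of any pipeline described here.

```python
import json

# One record copied from the raw response above; the full array parses the same way.
RAW_RESPONSE = """[
  {"id":"ytc_UgxlX0IkgYSfzjr0s0R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]"""

def parse_coded_comments(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment id, mapping to the coded dimensions."""
    records = json.loads(raw)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

coded = parse_coded_comments(RAW_RESPONSE)
print(coded["ytc_UgxlX0IkgYSfzjr0s0R4AaABAg"]["responsibility"])  # user
```

Keying by `id` makes it easy to look up the dimensions for any single comment, which is exactly the lookup the "Coding Result" table above performs for one comment.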