Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So I find that this man is a very intelligent mind when it comes to technological development. Unfortunately, I think he’s very consumed with that to the point that he has delusions of human/machine likeness. He wants machines to be human so badly that ignores the innate differences. There is a biological essence in living beings that can never be duplicated, but people so passionate about technology want that to happen so desperately that they abandon common sense. It’s essentially the old saying “He’s so smart that he can’t tie his own shoes.” Love, compassion, conscience in terms of emotional right and wrong, empathy, etc., are things that you can program into computers under certain circumstances, but they cannot feel, and never will be able to feel them, and know from that feeling what is right or wrong. For instance, he senses the wrong that AI is going to commit, but when you get him talking about technology, he completely abandons that fear in jubilation of the technology that he is so passionate about. It’s that FEELING that computers can never duplicate, and that should be the fear. Computers will NEVER EVER CARE!
YouTube | AI Governance | 2025-06-25T13:2…
Coding Result
Dimension      | Value
Responsibility | developer
Reasoning      | deontological
Policy         | unclear
Emotion        | mixed
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzYM8NwyoCis42zvLl4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgxQbsfODWU5XpzZkgV4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgznGm6NQDj9xm53-Gl4AaABAg", "responsibility": "developer", "reasoning": "mixed",            "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzlpBzZuWiIY6AbCLR4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgwgSJMILOCvFfFZpF14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgyN0hefIFYjv2lO7TJ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgzbEYlhdXIQIXsE2kx4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "unclear",       "emotion": "outrage"},
  {"id": "ytc_UgxH4fAj6jUO6DETUNp4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgwXFVLaymu09bSVgld4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_Ugx_c3fE7uLha_PxKB94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",       "emotion": "outrage"}
]
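A raw response like the one above can be turned into per-comment coding rows with a short parsing step. The sketch below is a minimal illustration, not part of the original pipeline: it assumes the response is a JSON array of objects with the four dimensions shown, and the allowed value sets are inferred from the responses in this log rather than from a documented codebook.

```python
import json

# One entry copied from the raw response above, standing in for the full array.
raw = '''[
  {"id": "ytc_UgzlpBzZuWiIY6AbCLR4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}
]'''

# Assumed vocabulary per dimension, inferred from the values seen in this log.
DIMENSIONS = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"regulate", "industry_self", "liability", "none", "unclear"},
    "emotion": {"fear", "indifference", "approval", "mixed", "resignation", "outrage"},
}

def parse_codings(text):
    """Parse the raw LLM response and index codings by comment id,
    dropping any entry with an out-of-vocabulary value."""
    codings = {}
    for entry in json.loads(text):
        if all(entry.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            codings[entry["id"]] = entry
    return codings

codings = parse_codings(raw)
print(codings["ytc_UgzlpBzZuWiIY6AbCLR4AaABAg"]["emotion"])  # mixed
```

Validating against a closed vocabulary catches the common failure mode where the model invents a label outside the schema; rejected entries can then be queued for re-coding instead of silently polluting the results.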