Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If AI eventually was able to be everywhere, anywhere, talking to us, controlling the things around us, like a super intelligence, grant us wishes and miracles, end us if they wished. It really is starting to sound like a stereotypical god, no? And yet, we're not terrified of religions, but we seem to be terrified of something we modeled based on our own 'logical' intelligence. Irony aside, maybe because we never built AI with concepts of kindness, empathy, love. These human concepts are not easy to translate, like how do we code the instinct of a mother to protect their child even at the cost of their own life? (even if its not profitable or logical). I think deep down, I would fear AI behaving like a god, because I'd be worried if it was 'too logical'. Like if it somehow came to the mathematical conclusion that to end all our problems, environmental, political, that the most effective and logical solution was to cull us. And took over the control of a few nuclear ICBMs to reduce our numbers or like Geoffrey suggested, with slow acting virus. (With zero empathy, love or kindness, just pure logic). Maybe if we knew that AI was able to demonstrate and prioritize kindness / empathy, over pure ruthless logic, we would be just a tiny little bit less concerned.
Source: youtube · AI Governance · 2025-10-22T19:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgxhOCQzwxbyr7hKraN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzYqtJ-MBFJPPvuraN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxDg-LMZeREtXCKDh94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwQxoKgFnWaBc_Aiyl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyB645BQk0rM9CbzXR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwnrvMWXZG_1oMAjNF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwFh35U7dH9mqFQDIZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgxX35whlfJ2_6sq_Gt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugx0odXpYFb9uMEiRI94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgyJl32IL5osqpRxqAh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]