Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The Godfather of AI says: I'm a materialist through and through. Companies pursuing profit over the common good is a danger.

From what does the notion of what is and isn't "good" come from? How does one decide what is an ideal or even an acceptable sacrifice of your personal good for the good of another individual (a relationship), much less the "common good", a society. Who or what is a part of the "common", to what degree, and what is good and what isnt?

Humans have and are driven by desires and motives. What will be the desire of AI's? What should it be in order to serve humanity for the common good? These are religious questions, by definition. What is highest and how to get there? All of this is obscured in discussion about technology and talked about by a materialist.

In short this is: when I become God, how should i remake the world? Human reality doesn't work like that. You cant change or control humans in general. The question being avoided is what is good? What is highest. When the answer is "I just know and we gotta outcompete China or Russia or whoever, the Tower of Babel is constructed and Hell comes into being. The question that will kill or save humanity is "what is good and how do you know?" A materialist answer will be the end of people and the triumph of material.
YouTube · AI Governance · 2025-07-24T23:5…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgxemUCQpi9O20lP73Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzDun_kgs6z88a8WIN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy-vYWqfPOQhzNRKZR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzA0TAOSQef7ADG70h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgyJIt-g32VOdLAHYhl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzB9cQuu6-1QumO06V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgypiClSdCkp3d5iW454AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyfXFsD0NQRX5lsRtZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugyr-IixWNqMwNl24DR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz5GyYRpYWbLmzDLG14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"})