Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with all this bs is really abstract. And because of that a lot of people only know half of the story. Even Elon Musk is talking bs, because he does not grasp the concept of intellect. You see, robots right now are tools. And they are MADE TO BE TOOLS. They have an objective, and if they need a sort of intellect they are only made to use it to achive that goal AND ONLY THAT GOAL. They dont have feeling wants or ANYTHING at all. The problem is people think intellect comes with being LIKE A HUMAN. Intellect is NOT JUST Human like. Robots if they had any they would not think the same way as we think they would think !!! Yes look at the AI they made for Starcraft...it does learn and does try shit out and in the end it wins against the humans...BUT its still a tool made to learn to play stacraft and win. IT ONLY DOES what it is designet to do, because THAT IS WHAT WE NEED. We dont make skynet that learn HOW TO BE HUMAN, or how to achive a random goal HE thinks he has to achive, because we dont need bs like that !!! We make intelligent robots because they are more effective tools that let us be lazy....not because we need humans made from another material...why would we make something pointless like that?
Source: youtube · AI Moral Status · 2019-08-23T22:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgxIRHSQEa0S_LSgwL94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"indifference"}, {"id":"ytc_UgxI2qf9PuXQg05cX8p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzPlBfYWE2UWtHwqNl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugy4JPgBw-FkT83RQRp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxBieFYpULLhEq7yLd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwCZM29Y3b2PJU7cXF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugws5lAVOCK_25o-YwF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyZTcHcX_9EVDIVBTF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzgD1J1XJI8RsPRgst4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy6t5ffvnuaFM-WtJN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"} ]