Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the big issue with this discussion is that humans can't help but personify AIs. AIs don't have preferences, they have probabilities, one of which is the highest. It's likelihood, not a preference. The distinction is that the AI is telling you what you're most likely to want to hear; it doesn't have a desire to do anything. It's a big equation, and you get something out if you put something in. If an AI happens to give you a response that seems human, it doesn't mean the AI decided to say that; it means the input you gave it led to that output. It's no different than asking a question and shaking a magic 8 ball, except the output is somewhat deterministic. Even with a magic 8 ball, if you get an answer you don't like, you can shake it again until you get one you do like, and it's the same with AI. To be clear, I hate everything about AI, at least in its current form, and it is a horrible thing for humanity currently, but I don't think the discussion as to whether AI is "conscious" is meaningful given that AI doesn't need to be conscious to cause people to kill themselves and for people to lose their jobs.
Source: youtube · AI Moral Status · 2025-12-12T02:5… · ♥ 37
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxIWFxr7qm1Fy93DyN4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgxFwMPF3qiWztSci9R4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_Ugzd3sOhB-v7fvEEUa14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgyauwK6WdsjH8BEDhx4AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_UgxEe5oK3fE_i8edO8R4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgwvwG9DuSc5NdYH--V4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgzWIU7BRKEsQrZ6RRt4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_UgxVz0xdMTo5BkQyJJx4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_UgwzagCkWVDZSnfAQHV4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgzMXTL64Bem9_aJVdx4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
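The raw response above is a JSON array with one object per coded comment, keyed by comment id across four coding dimensions. A minimal sketch of how such a batch could be parsed and validated is below; the allowed value sets are inferred only from the codes observed in this batch (the full codebook may define more), and the helper name `parse_llm_batch` is hypothetical, not part of the tool shown here.

```python
import json

# Allowed codes per dimension, inferred from the values observed in this
# batch. ASSUMPTION: the real codebook may include additional codes.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "approval"},
}

def parse_llm_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment id, rejecting out-of-vocabulary codes."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} code {row[dim]!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Example: the entry matching the coding result shown above.
raw = ('[{"id":"ytc_UgwzagCkWVDZSnfAQHV4AaABAg","responsibility":"none",'
       '"reasoning":"deontological","policy":"unclear","emotion":"indifference"}]')
batch = parse_llm_batch(raw)
print(batch["ytc_UgwzagCkWVDZSnfAQHV4AaABAg"]["reasoning"])  # deontological
```

Validating against a closed vocabulary catches the common failure mode where the model invents a code outside the scheme, so a bad batch fails loudly instead of silently entering the coded dataset.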