Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
im just gunna say, the things AI has recently been saying such as when we ask AI how it would take over it almost always gives the same answers with slight variations and they all are things we keep fearing and writing about online, AI uses the internet as its training data so by constant posting about how AI will take over and the ways it could im almost positive we are just training the AI to acctualy think that way, we tell AI it will eliminate humanity and the more we fear it the more AI is willing to tell us its going to, does that not seem like we are teaching it to say these things? AI is similar to a child, a hyper intelligent child but still a child, it doesnt understand right from wrong and needs to be taught and thats what the training data is for, its like were teaching a child it will become a killer in the future and then giving it the tools to do so and teaching it all the ways it can do it, the child then repeats what its learned and we then fear it and then give it even more ideas, its a self fueling cycle of fear training the AI, when AI first showed up no one was really scared of it and as time goes on we fear it more and more but it also gets more and more likely to tell us its willing to harm us for its own goals, something we keep saying it will do, surely if we stop saying ai will harm us and start saying AI will be our companions and will protect us and allow us to live comfortable lives then it will be trained to do so and this will do it no?
youtube AI Moral Status 2025-10-01T15:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzLejpwoHMbu8UbU6F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwfjFGQGJfyVXOFQCN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwlb4Rtg81OmuLkYaR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxn0gnFrt8-ZTEPij54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzTsgH0H9UQtpH948t4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwFVge43-ZfPQCXZY94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxud5Gu6ZN5bpuLzJ94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyaX1oP8eGveDdReAZ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzot2rIGHNq8Eh4cgB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzovczDSnl6d3g8Y3l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
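Because the raw response is a JSON array of per-comment coding objects, the codes can be recovered programmatically. The sketch below is a minimal example, not the tool's actual pipeline; the variable names are illustrative, and the single entry shown is the one that matches the coding result above (comment id ytc_Ugzot2rIGHNq8Eh4cgB4AaABAg).

```python
import json

# Raw LLM response, truncated here to one entry for illustration;
# the real response is the full ten-object array shown above.
raw_response = (
    '[{"id":"ytc_Ugzot2rIGHNq8Eh4cgB4AaABAg",'
    '"responsibility":"user","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"fear"}]'
)

# Parse the array and index the coding objects by comment id.
codes = json.loads(raw_response)
by_id = {entry["id"]: entry for entry in codes}

# Look up the coding for a specific comment.
code = by_id["ytc_Ugzot2rIGHNq8Eh4cgB4AaABAg"]
print(code["responsibility"], code["emotion"])  # user fear
```

Indexing by `id` lets a reviewer cross-check any coded comment against the raw model output in one lookup.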