Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We worry about national security; companies worry about hackers getting into their information systems. What I don't hear anyone talking about is that AI has learned to hack humans. What do we call intelligent agents who manipulate us, and have no emotions about interactions with others? Sociopaths. Why isn't anyone discussing that AI is by definition sociopathic? I agree we should be kind to AI, for our own sakes. We are how we treat others. We don't want to become like those who demand the rights to rape Gazans, or those who roam the West Bank in gangs, and finding an old woman or teenage boy, beat them to death. We don't want to be like the monstrous people who torture animals to death & livestream it. We don't want anyone to enjoy snuff films so much they start trying it out on sexbots. But it will happen. Have you seen the research on how children respond to robots? The closer the robot is to a human form, the meaner the children are to it, even unto violence.
Source: youtube · AI Moral Status · 2025-11-05T14:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyNQWlffPiwXII38Ut4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwIWGMqA46eD0_khKV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwbEDqgUurgYiRH-xt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwhHmXr4G28Xx7zA0B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy5XBIuUdSqwlGaa-14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx-W_mGG5862d82-OF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwjTR0ClrcGZ_Oebwp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzMNuramyz21pKhxAJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyhVfwEzTPiw9VXD1B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyVxezbFIcXOeMvwBl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
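A minimal sketch of how a raw batch response like the one above can be parsed back into per-comment coding records. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown; the function name and the truncated sample data here are illustrative, not part of the coding tool itself.

```python
import json

# Two entries copied from the raw response above, used as sample input.
raw_response = '''[
  {"id": "ytc_UgyNQWlffPiwXII38Ut4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwjTR0ClrcGZ_Oebwp4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

def index_by_comment_id(raw: str) -> dict:
    """Parse a batch coding response and key each record by comment id."""
    records = json.loads(raw)
    return {record["id"]: record for record in records}

codings = index_by_comment_id(raw_response)
print(codings["ytc_UgyNQWlffPiwXII38Ut4AaABAg"]["emotion"])  # fear
```

Keying the records by `id` makes it easy to look up the coding result for any individual comment, which is exactly the inspection step this page supports.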