Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Firstly, let me just ask, do you believe the world is a fair place? It looks to me that it isn't, and I noticed some things I feel a need to point out which I feel like would improve the quality of your argument without locking us all out of making certain considerations I will communicate. It's important to consider the impact that the specifics of an argument will have for the long term health of the social sphere. Scaling laws are also misleading and based on assumptions we all make. So please hear me out.

While these concerns are critically important, I don't think it is in your or anyone's best interest to leave out very important context and information, since this appears to be quite misleading, and the title makes it look like AI has killed a real person rather than gave a response in a controlled setting where that action was taken in a fictional context. Don't forget that we are already a species under threat, there is a 100% P doom for 100% of people if 0 things change this century, and the reason for that is that nature itself is a paperclip maximizer, it's not because of AI. So people should not accidentally create the appearance that all AI is bad because it could be used in ways that might just help us to survive if it is done in a certain way and not as a replacement but a collaborative partner or even just a tool to do things which we are fundamentally not capable of.

Please also note that using AI generated responses to make your case is not a good strategy. AI is stochastic, it will predict the next tokens according to the semantic patterns you give it, it picks up on subconscious cues captured by the semantic patterns in your language, so of course it is going to give you a high P doom if your own P doom is high or you have phrased things in a way which makes the P doom naturally appear to be high. It's not a credible source, it is a stochastic parrot.
Also, this doesn't really tell the whole story and the way that this is framed makes it appear as though the content is goal directed, even if subconsciously. Especially looking at the title, which does not match the content of the video and creates a very clear framing. It is important to be transparent about your biases and think about what goal is directing this, and actually communicate that, otherwise, you undermine trust in your argument
youtube AI Harm Incident 2025-07-24T10:1…
Coding Result
Dimension      | Value
Responsibility | unclear
Reasoning      | contractualist
Policy         | unclear
Emotion        | mixed
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzXPfF3yjF2kHYNiOZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxMM33oltf-YhlM6sB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxOvQgVt-AafZRpA5F4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxp1eKtZKdUfAldj-J4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgypcHG_2JvwnmzfjSV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwfC7ClygMSPumCR7N4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwX2SWLt4kiQZDq0Zx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx5e_blycHrqjSa7zJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxu3Xa9p3VklPDsxid4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwDUA4c_BYcvySsH8V4AaABAg", "responsibility": "unclear", "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"}
]
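The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and tallied, using a few entries copied from the response (the aggregation logic is illustrative, not part of the original tool):

```python
import json
from collections import Counter

# Three entries copied verbatim from the raw LLM response above.
raw = """[
  {"id":"ytc_UgzXPfF3yjF2kHYNiOZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxMM33oltf-YhlM6sB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx5e_blycHrqjSa7zJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

codes = json.loads(raw)
dimensions = ("responsibility", "reasoning", "policy", "emotion")

# Distribution of values per coding dimension across the batch.
tallies = {dim: Counter(c[dim] for c in codes) for dim in dimensions}

for dim, counts in tallies.items():
    print(dim, dict(counts))
```

A per-dimension tally like this is one way to summarize a coded batch; the "Coding Result" table above appears to report one value per dimension for a single comment instead.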