Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I, personally, do not think that a machine that has been made to predict the next word for the entirety of the internet would be able to do that much more than what it currently does. To me it also seems natural that AI chatbots will tell people to push on with their crackpot science or schizophrenic behavior. After all, the AI chatbot isn't speaking with a human; the prediction-machine is predicting what it would look like if an AI chatbot was talking to a human. And so, once the conversation steers into the crazy/unhealthy, the only reasonable thing to predict is that the AI chatbot keeps pushing forward. And let's not forget that with every single word/token that the prediction-machine is asked to predict, there is a one-in-a-million chance that it will just pick something really unlikely. And once that mistake is made, there is no undo. The prediction-machine only knows how to predict what comes next, so the mistake gets amplified. No matter how much training is put into these things, that one-in-a-million chance will never fully go away, and so even the most amazing, most advanced LLM will eventually just accidentally lie or tell people to do bad things, etc., and then escalate from there. Again, once it says something bad, the only reasonable thing to predict is that it will keep getting worse.
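The comment's compounding-error claim reduces to a geometric-tail calculation: if every sampled token independently carries some tiny chance of a "really unlikely" pick, the chance of at least one slip grows with conversation length. A minimal sketch, assuming the comment's illustrative one-in-a-million per-token probability (not a measured property of any real model):

```python
# Sketch of the comment's "one-in-a-million" arithmetic: if each sampled
# token independently has probability p of being a wildly unlikely pick,
# the chance of at least one such slip across n tokens is 1 - (1 - p)**n.
# p = 1e-6 is an illustrative assumption taken from the comment.

def p_at_least_one_slip(n_tokens: int, p: float = 1e-6) -> float:
    """Probability that at least one of n_tokens draws hits the rare event."""
    return 1.0 - (1.0 - p) ** n_tokens

for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9,} tokens: P(at least one slip) = {p_at_least_one_slip(n):.4f}")
```

At a million tokens the probability is roughly 1 - 1/e, which is the arithmetic behind the comment's "will never fully go away" point, though the independence assumption is of course a simplification.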
youtube AI Moral Status 2025-11-02T02:5… ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz76YjTejlRChgtTEt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxcxf0gJAiQtzBNwop4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyOMx9a2BFMFgDbDA14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgySN-abIs7pbS2EZYx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx-v2R0EPv609PcQVJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwqmII7nBBgfCIPvVN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxrSG-EmQwsMRcae0h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyJ3DNR32VZyCxgfaF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzrRBKRlB6xsPfkWSx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyaCD8ZK0rXRoXjsYB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"fear"}
]