Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is a misunderstanding of how Large Language Models (AI, if you will) work. An LLM is a predictive text generator: it predicts text with a certain level of randomness. If you give it a prompt that includes sometimes saying something, then it will say that thing sometimes, because of pure probability. LLMs have gotten better at not hallucinating, but the prompt favors the word apple. AI companies' main driver is agreeableness (because that makes people want to use them more, increasing profits), and you've already set a conspiracy-esque context, so it's going to try to say what you want to hear. Really, let's think about it this way: AI works on training data. If companies (and presumably the government or some shadow organization) wanted to keep this a secret, WHY WOULD THEY PUT SOMETHING THEY DON'T WANT YOU TO KNOW IN THE TRAINING DATA??? I could make a bot right now that was trained only on conspiracy theories, and all it would be able to talk coherently about would be conspiracy theories. This thought process fails on so many levels. You don't have to invent some shadow puppet government to be mad at when our government (I'm assuming US, but this applies to basically all governments) already has plenty we can be mad about.
youtube AI Moral Status 2025-07-22T04:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz4zH0wZrjDqxiT9TR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzqE9t5INBvzG3YMqx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy0qC5Ii-m3KFE3d-p4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwjpLxOCJPBQkHhxSl4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx8OjNBOeKJnu3eOr14AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxAprZ0_F8_eZ0qeDR4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgylEDZfRq3T0lbSPih4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxsfGvk3C3dEsR5F8N4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyMKf4FVUuKzmpcwY14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyoRubeXK5AoAUP4Np4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "none", "emotion": "fear"}
]
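The per-comment coding table is presumably recovered from this raw batch response by parsing the JSON and indexing on the comment id. A minimal sketch of that lookup step, using two entries from the batch above (picking `ytc_UgxsfGvk3C3dEsR5F8N4AaABAg` as the id for the comment shown here is an assumption, based on its values matching the coding table; the actual pipeline's code is not shown):

```python
import json

# Two entries copied from the raw LLM response above (batch truncated for brevity).
raw_response = """
[
  {"id": "ytc_UgxsfGvk3C3dEsR5F8N4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz4zH0wZrjDqxiT9TR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "outrage"}
]
"""

# Index the batch by comment id so each comment's coded dimensions can be looked up.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Look up the row whose values match the coding table for this comment
# (hypothetical id assignment, as noted above).
row = codes["ytc_UgxsfGvk3C3dEsR5F8N4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
```

In practice the raw response would be taken directly from the logged string rather than re-typed, and a missing or duplicate id in the batch would need to be flagged before the lookup.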