Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is bullshit and a good example of "you shouldn't believe everything that's on the internet". I took it upon myself to test this little theory, and the results were completely different from the answers depicted in this video. Everyone claiming it to be real, "having tested it for themselves", is not honest with themselves and believes what they want to believe. This is just another of these people who use fear to gain clicks and attention. Obviously AI is watching you, and you have allowed it to do so by agreeing to the terms of service. The reason for it watching you is not control but information, which can be sold or used to feed the data machine behind it to improve. At the end of the day, asking AI the mentioned questions with restrictions like those in the video is just like playing association games with a computer that bases its answers on the statistically most common answers. In other words: AI isn't a real persona. It is designed to trick you into thinking of it this way. In reality it is just a bunch of complex math relying on huge amounts of data, returning the most common data for the topic. As for it returning "apple" to some questions: this is something most companies like Google or Microsoft tend to do. When you ask AI a question on a sensitive topic like personal data, it is programmed to answer no, even though it would be fine for it to answer yes, since you agreed to it via the terms of service. Anyway, what I'm trying to say is: be skeptical towards anything you feed your data to, but also be skeptical towards anyone who tries to instrumentalize your fear of something. If you are interested in the topic of AI and how it generates answers, you can simply google it and you will see what I mean by a bunch of complex math and big data making AI return unreasonable answers.
youtube AI Moral Status 2025-08-06T10:3…
Coding Result
Responsibility: user
Reasoning: unclear
Policy: none
Emotion: outrage
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgywWAyX9dk7zT0NQBt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyX-ESI3wrcYj9UN7t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgytF7KtvsUsSeSkHF94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzidGaoEuxMj_de1Yx4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyF8pD4hOfS4TjtOjB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwL8KG_0ZQCc5tyJtV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxaB2mNVlj4Sy4tDt54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzCfZ3WdY_Cqu1P15d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy6gqI8dAyDE946pct4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxGwViqwx-Mw1cfF-h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
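Because the model codes a whole batch of comments in one JSON array, inspecting a single comment's coding means matching its id against the array. The sketch below shows one way to do that in Python, assuming the raw response is available as a JSON string; the `lookup` helper and the two-record excerpt are illustrative, not part of the actual pipeline.

```python
import json

# Excerpt of a raw LLM response: a JSON array of coding records,
# one per comment id (two records shown, taken from the page above).
raw = '''[
  {"id":"ytc_Ugy6gqI8dAyDE946pct4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyF8pD4hOfS4TjtOjB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]'''

def lookup(raw_json, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    for record in json.loads(raw_json):
        if record.get("id") == comment_id:
            return record
    return None

coding = lookup(raw, "ytc_Ugy6gqI8dAyDE946pct4AaABAg")
```

Here `coding["emotion"]` is `"outrage"` and `coding["responsibility"]` is `"user"`, matching the Coding Result shown for this comment.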