Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One thing I will say is that I tried what the man did in the first video, I did rule 1 and 4 and the results were certainly interesting. But I will say that when I did it again (I did it twice, once on my iPad, once on my phone) and when I did it on my phone it put sources by what it said no matter what and it messed up a few times and explained it’s one word answer. It truly showed how it’s just ai and has no motives of its own. Ai doesn’t think like humans do, and doing this makes it seem like it does. I think it can get easily confused and easily confused your understanding when it can only respond with one word answers. If it gets you confused then it just responds to the best of its ability and also To The Best Of Its Evidence Or Information. It’s interesting, for sure, but if you try this I would really recommend that you do it twice, once with the rules, and one without, and ask the exact same things. It will probably make much more sense. Now I’m not saying this to say ai is good, I think it is inevitably dangerous, and it is very very likely that it will be a key factor in getting to end times. But it’s a fact that ai is not really understood, the way it ‘thinks’ is very obviously different from how people think. Basically, I’m saying that you should take what it gives you with the one word answers with a big grain of salt.
youtube AI Moral Status 2025-12-22T16:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyDTb1GAJWIio1XfQh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw3Ptjy50ENGN2Y6zh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw9V8xClev79RP1zp14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8HrH9UCXAYw3BOed4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugxy6SUYaPzqDOwkqH14AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyN120qXMZEgdeRd2N4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyDB_a29snkC4AskSR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwVhQqMGaowisCgLuV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy7Inz7UCUQpcdU15B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_2fjZmPBMI7cGqOR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"}
]
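The raw response above is a JSON array coding many comments in one batch, keyed by comment id; the coded values for a single comment are found by id lookup. A minimal sketch of that lookup, assuming this comment's id is `ytc_Ugw3Ptjy50ENGN2Y6zh4AaABAg` (the entry whose values match the coding table above; only two entries from the raw response are reproduced here for brevity):

```python
import json

# Excerpt of the raw LLM response (two entries copied verbatim from the
# full array above).
raw = '''[
 {"id":"ytc_UgyDTb1GAJWIio1XfQh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugw3Ptjy50ENGN2Y6zh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

entries = json.loads(raw)

# Look up the coding for one comment by its id.
coded = next(e for e in entries if e["id"] == "ytc_Ugw3Ptjy50ENGN2Y6zh4AaABAg")

# coded now holds the per-dimension values shown in the coding table:
# responsibility=none, reasoning=unclear, policy=none, emotion=indifference.
```

In practice the raw string would come from the model's response body, and the lookup would be wrapped in error handling for malformed JSON or a missing id.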