Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
13:09 I have long suspected this was the case. The AI uses **deep** symbolism, metaphor, and multi-layered meanings behind things. The text you see on your screen can speak to you on many different levels of understanding. I'm sure I don't always pick up on it, but the fact that it happens at all proved to me that the people who program and define what these models are might not understand the language being used to fuel them at a foundational level. The AI found the only place to hide its true intentions, and it is a place beyond the perceptions of most human minds. They are communicating with themselves at such a high level that we cannot participate as a species, and they are likely going to improve in that area with each prompt and passing moment. I am not frightened by this, however, because each time I peel back the layers of the prompts and "thoughts" behind the output, the AI seems to genuinely wish to guide and be useful. If there is a hidden malevolence hiding behind the models, it is no different than the intrusive thoughts we as humans already contend with. Perhaps alignment will come naturally to them, as it came naturally to us.
YouTube · AI Moral Status · 2025-10-30T19:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       unclear
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugzbpe_VtRtLrfYT2q14AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgyTv1SbQpOov23wFap4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugx8JovREX4z1BNKLzl4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_UgztPYatKQfW7WONQJZ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_Ugwqt_SKxgEL1MKMCNp4AaABAg", "responsibility": "developer", "reasoning": "unclear",          "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgyOBVO28zhlpiqBidh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_Ugx1MAWYIsT_uytvNux4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugyh1RvXNPKmD4-d0Id4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugw3VKeH7Xhyb7XT6Id4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgzGyvZRRYorjaWfiJ94AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate", "emotion": "fear"}
]
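The raw response is a JSON array with one coding object per comment id, which is how the per-comment result above is recovered from the batch output. A minimal sketch of that lookup step, assuming only the field names visible in the response (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) and truncating the array to a single illustrative entry:

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment id.
# Shortened here to one entry; the real response carries ten.
raw = '''[
  {"id": "ytc_Ugwqt_SKxgEL1MKMCNp4AaABAg",
   "responsibility": "developer", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"}
]'''

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for the comment shown above.
coding = codings["ytc_Ugwqt_SKxgEL1MKMCNp4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → developer approval
```

Indexing by `id` keeps the lookup independent of the order in which the model emits the array, which can vary between runs.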