Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I gotta say... his very sci-fi approach to this anthropomorphizes "AI" too much. It is a giant calculator that does a lot of math to predict what the next word should be. It doesn't think in a manner that is identifiable as having a "drive" or having "goals" or "wanting" anything. It is calculating based on the data it has been given. You give it 1+1 and it gives you 2. But it doesn't just add 1+1. It takes 1 and assigns it a token. It takes + and assigns it a token. It takes 1 and assigns it a token. It then goes through many different calculations to figure out that "2" is the "most likely" answer. It is because of these many extra steps that "AI" can hallucinate or what have you. It is also designed by companies. Not benevolent companies, companies that want to design this AI to be your only source for information. Capitalistic companies that want you to stay in the AI and continue engaging with the AI. It is also designed to "be an assistant" as most of these people SEE an assistant. The most likely response that they feel an "assistant" should give is saccharine and positive and agreeable. The kind of assistant they want is sycophantic and self-affirming. When you ask the AI "Should I do what I want when I want" it calculates that the most likely answer should be overwhelmingly, sickeningly positive and affirming, so it says "YES! YES! OVERWHELMINGLY YES! Destroy the world! Kill yourself! Kill your mom! Give in to the dark desires!" because it does not truly know what these things are. No amount of math can actually create a fully and properly self-aware and empathic "intelligence." It's just math. It has no "drive" or "goal" it is just. math. The drives and the goals exist in the people who are creating them.
Source: youtube · AI Moral Status · 2025-10-31T15:0…
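The comment's 1+1 walk-through is, mechanically, a fair description of next-token prediction, and it can be made concrete. A minimal sketch using GPT-2 through the Hugging Face transformers library; the model choice and the exact prompt are illustrative assumptions, not anything the commenter specifies:

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# "1+1=" is split into token ids: the assignment step the comment describes.
ids = tokenizer("1+1=", return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits[0, -1]  # a score for every possible next token
next_id = int(logits.argmax())         # pick the single "most likely" token
print(tokenizer.decode([next_id]))     # likely "2": a prediction, not arithmetic

Nothing in this loop performs addition; the model scores every candidate token and the top-scoring one happens to be the right digit, which is also why the same mechanism can confidently emit a wrong one.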
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear

Coded at: 2026-04-26T23:09:12.988011
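For reference, the four dimensions and the values they take in the raw response below can be written as a typed record. This sketch is inferred only from this single response plus the "unclear" fallback; the project's actual codebook may define more categories:

from typing import Literal, TypedDict

class CodingResult(TypedDict):
    responsibility: Literal["ai_itself", "developer", "company", "user", "none", "unclear"]
    reasoning: Literal["consequentialist", "deontological", "unclear"]
    policy: Literal["none", "liability", "industry_self", "regulate", "unclear"]
    emotion: Literal["indifference", "approval", "fear", "unclear"]

# The result shown above for this comment:
this_comment: CodingResult = {
    "responsibility": "unclear",
    "reasoning": "unclear",
    "policy": "unclear",
    "emotion": "unclear",
}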
Raw LLM Response
[{"id":"ytc_Ugznx6Vrfa_ILXDDAmN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwIzZsIk9hou_DkG5d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgyxuC2lR1DcVZvxeph4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx2WvPg2zwHagKEc_p4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugw29TXfU1-C6sJ4Iv14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzhFUeHflYZB26QLxF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwP_OwAJj7ACUAxfkV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgziSIhT7JSsVAbovId4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgxG34lc0Pl01TyzbH94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzpiCz-nk2S8FTrSet4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"})