Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Alright Hank, I love ya, but I don't know how much of this I can watch. I'm only 16 minutes in but the anthropomorphizing of LLMs is extremely frustrating. When you talk about "truth" for example, as something that an LLM could understand, you are ascribing intelligence to something that doesn't have any. LLMs take imputed data and then regurgitate it out again in a natural sounding way based on probabilities. It happens that most of the data it's been fed is true, sometimes it isn't. Thanks to clever marketing we call that a hallucination, but in reality the LLM is only ever "hallucinating" because that's the entire process, that's why hallucinations cannot be fixed, it doesn't know anything, it's just calculating probability of words. Even "reasoning" models, are just generating words to make it look like it's thinking. But as Nate even points out, when you poke around with those "thoughts" it turns out they don't correlate to the final answer in ways that make sense because it isn't actually thinking, it's adding a middle step of extra word generation, I don't care what the philosophers say, that is not thought. If you want to have a serious conversation about Superintelligence, then LLMs should not even be part of the discussion, you are talking about a completely theoretical technology that has not been invented yet.
Source: YouTube · AI Moral Status · 2025-10-30T20:2… · 1 like
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
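Each coding result is one structured record per comment, with every dimension drawn from a closed code set. Below is a minimal sketch of such a record with validation. The code sets are only those observed in the raw response below, the full codebook may define more, and the class name and the record's id (inferred from the raw-response entry whose four values match the table above) are illustrative assumptions, not taken from the pipeline:

```python
from dataclasses import dataclass

# Code sets as observed in the raw response below; the real codebook
# may define additional values (assumption, illustrative only).
CODE_SETS = {
    "responsibility": {"none", "ai_itself", "developer", "distributed"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "resignation", "outrage", "fear", "mixed"},
}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Reject any dimension value outside the observed code set.
        for dim, allowed in CODE_SETS.items():
            value = getattr(self, dim)
            if value not in allowed:
                raise ValueError(f"{dim}={value!r} not in {sorted(allowed)}")

# The record shown in the table above:
record = CodedComment(
    id="ytc_Ugy0JaoExU09PGg4pix4AaABAg",
    responsibility="developer",
    reasoning="deontological",
    policy="none",
    emotion="outrage",
)
```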
Raw LLM Response
[ {"id":"ytc_Ugz2hE4E9CpReAma_314AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyVhIdzqGhq2H8bhZ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy0JaoExU09PGg4pix4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgylAN63kd9MWjd0ItB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgySFs0PK_gxMIVFjUt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_Ugxji0AkAMbhhb3hnvB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwmXX5ZRECLrKUcnkV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_Ugz3BKRuZPR0QtUOShF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwD_h3DASRiroe1Ylp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx0mznNrHBTky3gjYh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"} ]