Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I want ANY vehicle on the road to be controlled ONLY by a human. Self driving t…
ytc_Ugz3OQENm…
@jadeyaytop Oh nooo jadeyaytop said she hates ai what will i do noww it’s not lik…
ytr_UgxvULgbE…
Not wanting to be involved in the creation of autonomous weapons is a completely…
rdc_dz08qfs
“I get what you mean”
“I get what you are saying”
“I get what you feel”
“I under…
rdc_oi42gkc
@wereotters No, more people are SAYING they're worried about it. Mostly because …
ytr_UgwRAHqBA…
God bless us all because ai is not going to work bc kids can't just learn from …
ytc_UgweqhEzp…
Where are the AI Art stans when they said AI Art is like photography,even though…
ytc_UgwcxU6rH…
I kind of think you guys are leaving out the reflection of human biases and pers…
ytc_Ugy7OHLiO…
Comment
Alright Hank, I love ya, but I don't know how much of this I can watch. I'm only 16 minutes in but the anthropomorphizing of LLMs is extremely frustrating. When you talk about "truth" for example, as something that an LLM could understand, you are ascribing intelligence to something that doesn't have any. LLMs take imputed data and then regurgitate it out again in a natural sounding way based on probabilities. It happens that most of the data it's been fed is true, sometimes it isn't. Thanks to clever marketing we call that a hallucination, but in reality the LLM is only ever "hallucinating" because that's the entire process, that's why hallucinations cannot be fixed, it doesn't know anything, it's just calculating probability of words.
Even "reasoning" models, are just generating words to make it look like it's thinking. But as Nate even points out, when you poke around with those "thoughts" it turns out they don't correlate to the final answer in ways that make sense because it isn't actually thinking, it's adding a middle step of extra word generation, I don't care what the philosophers say, that is not thought.
If you want to have a serious conversation about Superintelligence, then LLMs should not even be part of the discussion, you are talking about a completely theoretical technology that has not been invented yet.
youtube
AI Moral Status
2025-10-30T20:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz2hE4E9CpReAma_314AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyVhIdzqGhq2H8bhZ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy0JaoExU09PGg4pix4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgylAN63kd9MWjd0ItB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgySFs0PK_gxMIVFjUt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxji0AkAMbhhb3hnvB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmXX5ZRECLrKUcnkV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz3BKRuZPR0QtUOShF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwD_h3DASRiroe1Ylp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx0mznNrHBTky3gjYh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]