Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by its comment ID, or click one of the random samples below.
- AI isn’t the problem .. humans are. The real danger lies in humanity’s refusal t… (ytc_Ugyv1SyVX…)
- @thetombuck dude chill, art doesn't need "loves, hopes etc." It's just some cool… (ytr_Ugy3e2hBw…)
- How come they never show a self driving car driving when there snow coming down … (ytc_Ugw9WTXvM…)
- This assumes that the LLM can be an AGI. I thought even LeCun said they were a d… (ytc_Ugwv1OXSD…)
- Why should he be able to legally protect his ai generated images that were train… (ytr_UgxYvwEun…)
- And here's me who always bullies and ridicules the hell out of AI panicking for … (ytc_UgyMP6yLJ…)
- understanding AI, I know this would not happen. except in some very specific sit… (ytc_UgykuMRpQ…)
- People seem to think that AI literally are omnipotent, all-knowing. Yet, you can… (ytc_UgxdT-CG8…)
Comment
I find his description of confabulation a bit inaccurate, which is surprising, coming from a cognitive scientist. Confabulation and hallucinations are two different things. In a confabulation, a person makes up a story without awareness, which makes it different from lying. Often it is seen in children or in people with injury to brain areas related to memory. The confabulation typically fills gaps where the individual doesn't have access to the right answer or the required memories. What LLMs do is make up bits of information, sometimes significant items of information, that have never existed. So it's more of a delusion than a confabulation. People who confabulate don't typically invent completely novel information; they just make up a story that seems plausible. In that sense LLMs are very different from people, because people don't produce delusions when they don't have the right answer. And when people are delusional, the delusions are persistent and repetitive, not new delusions every other day.
youtube · AI Moral Status · 2026-03-03T09:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
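
Each row above is one field of a single record from the batch response shown below. As a reading aid, here is a minimal sketch of that record as a typed structure in Python; the field names mirror the table, and the Literal values are only the categories observed in this sample. This is a hypothetical illustration, not the tool's actual schema, and the full codebook may define more categories.

```python
from typing import Literal, TypedDict

class CodingRecord(TypedDict):
    """One coded comment, as returned by the model in the batch below.

    The Literal values below are only the categories observed in this
    sample; the full codebook may define more.
    """
    id: str  # comment ID; both "ytc_..." and "ytr_..." prefixes appear in the samples
    responsibility: Literal["none", "company", "developer", "ai_itself"]
    reasoning: Literal["unclear", "consequentialist", "deontological"]
    policy: Literal["unclear", "industry_self", "ban"]
    emotion: Literal["mixed", "indifference", "approval", "fear", "outrage"]
```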
Raw LLM Response
```json
[
  {"id":"ytc_UgyK1gWQ7_xOOue99Vt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwBz62BYkSr-JiLawN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwwGswr0Yy45ctZLWJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzUACuSIQeO2Do8XDt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyHwEq9op-63SYoiqB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwDR-WyfwFkBq7jvkt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw-A34CTTk9whZX6214AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxzJFqEY9THQ4Z2NyZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxP-xn7w5_Tn0P9dp94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyLiMv_W3_1sOK-cfh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
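
Looking a comment up by ID amounts to parsing this array and keying each record by its `id` field. Below is a minimal sketch of that lookup in Python, assuming the model output is valid JSON as shown above; `index_by_comment_id` is an illustrative helper, not part of the tool.

```python
import json

# Two records copied from the raw response above; in practice the full
# model output string would be passed in.
raw_response = """[
  {"id": "ytc_UgyK1gWQ7_xOOue99Vt4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyLiMv_W3_1sOK-cfh4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

def index_by_comment_id(response_text: str) -> dict[str, dict]:
    """Parse the model output and key each coding record by comment ID."""
    return {record["id"]: record for record in json.loads(response_text)}

codings = index_by_comment_id(raw_response)

# Render the "Coding Result" view for one comment.
record = codings["ytc_UgyK1gWQ7_xOOue99Vt4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension.capitalize()}: {record[dimension]}")
# Responsibility: none
# Reasoning: unclear
# Policy: unclear
# Emotion: mixed
```

A real pipeline would presumably also validate each record against the codebook (see the typed sketch above) before indexing, so malformed model output fails loudly rather than silently.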