Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment directly by its ID, or pick one of the random samples below.
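Under the hood, a lookup like this only needs the stored batch responses indexed by comment ID. A minimal sketch in Python, assuming the batches are kept as one JSON array per line in a JSONL file; the filename and storage layout here are illustrative, not the tool's actual backend:

```python
import json

def build_index(path: str) -> dict[str, dict]:
    """Map comment ID -> coded record across all stored coding batches."""
    index: dict[str, dict] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            # Each line holds one batch: a JSON array of records shaped like
            # the "Raw LLM Response" shown at the bottom of this page.
            for record in json.loads(line):
                index[record["id"]] = record
    return index

index = build_index("raw_responses.jsonl")  # hypothetical filename
print(index.get("ytc_UgyxWkZDXLDME-fYhEZ4AaABAg"))
```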
Random samples — click to inspect:

- "Honestly like that movie I robot the AI Vicky wanted to take over society. Becau…" (ytc_UgzA9H2eI…)
- "Some people think that with ai we will have more resources. But that completely …" (ytc_UgweKCA1o…)
- "The plot twist is that most companies building A.I have heavy leftist cultures t…" (ytc_Ugzl719HG…)
- "Wow. When I listened to Michael vs. Bronte, I didn't v realize he could've just…" (ytc_Ugx5tfpEM…)
- "Best teachings ever..... i have come across many videos regarding AI but this i…" (ytc_UgwI7g-Da…)
- "This is a perfect example of how AI won’t replace many jobs so much as it will a…" (ytc_UgyE9ll05…)
- "While I agree it lacks soul, it is not stealing if being used for training, nor …" (ytc_UgwPsuN2k…)
- "simple chat gpt query gave me below 🔹 Why a Superintelligence Might Decide Human…" (ytc_UgwcdlyjV…)
Comment

> Just a comment on "hallucinations". While I'm as concerned as you about the problems with these systems, calling the anomalies we see "hallucinations" really misunderstands the kind of mental processes that these systems are attempting to emulate. At an extremely basic level, the process of reasoning (even for humans) involves (1) generating a new idea from previous ones, (2) analyzing that thought for compatibility/incompatibility with prior ideas, (3) repeat. Step 1 in this process is not fundamentally different from what are called "hallucinations" in an AI. New ideas overwhelmingly do not comport with prior knowledge, and it is only through the sifting process of step 2 that we end up with (hopefully) reasonable conclusions. Anyone who has tried to write anything and then gone back and read it and seen all the weird things that they wrote that make absolutely no sense on a second reading will understand that. The idea of "eliminating hallucinations" in AI is as absurd as eliminating step 1 in the process above. How are you ever supposed to come up with a new idea, if you never generate any ideas that are not (at least mathematically) already in your priors? We can certainly hope to limit the scope of error, but error is inherent to the process of reasoning. It cannot be eliminated entirely.

youtube · AI Moral Status · 2025-10-30T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
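The four coded dimensions map onto a small closed vocabulary per field. A minimal sketch of the record schema in Python, with value sets inferred only from the labels visible on this page; the project's actual codebook may define additional categories:

```python
from dataclasses import dataclass

# Value sets inferred from the coding result above and the raw response below.
RESPONSIBILITY = {"none", "distributed", "company", "ai_itself"}
REASONING = {"unclear", "mixed", "consequentialist", "deontological"}
POLICY = {"unclear", "none", "regulate", "ban"}
EMOTION = {"curiosity", "indifference", "approval", "outrage",
           "fear", "mixed", "resignation"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Reject any record whose values fall outside the known vocabularies."""
        assert self.responsibility in RESPONSIBILITY
        assert self.reasoning in REASONING
        assert self.policy in POLICY
        assert self.emotion in EMOTION
```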
Raw LLM Response
[
{"id":"ytc_UgyxWkZDXLDME-fYhEZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"curiosity"},
{"id":"ytc_Ugw1PCHJW4gLvC6wQIN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyrNigDK8aED1XKiK94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugzj8Z_Zm93--2u2OwJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy8yE32C1YttioFQ554AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx5SJWy13XghxRHVft4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"mixed"},
{"id":"ytc_Ugyd_R36BObUKSp2C_N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxmdQyRuIhy-6PAnFJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzqi8MbySlCA33BHk94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw8_HNK7NKjFS0CEQt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
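Getting from a raw batch response back to a single row is a parse-and-filter step. A sketch under the assumption that the model's output is exactly the JSON array shown above; in practice the text may first need surrounding prose or markdown fences stripped before json.loads will accept it:

```python
import json

# Abbreviated copy of the batch above; a real response carries all ten records.
raw = '''[
  {"id":"ytc_Ugw1PCHJW4gLvC6wQIN4AaABAg","responsibility":"none",
   "reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]'''

def extract_record(raw_response: str, comment_id: str) -> dict | None:
    """Parse a batch response and pull out one comment's record, if present."""
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # malformed model output; surface as a coding failure upstream
    return next((r for r in records if r.get("id") == comment_id), None)

print(extract_record(raw, "ytc_Ugw1PCHJW4gLvC6wQIN4AaABAg"))
```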