Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up its comment ID or by picking one of the random samples below.

Random samples:
- `ytc_UgxszBGJr…`: "It's possible that if automation gets to that level, a post-scarcity system coul…"
- `rdc_ohzhtsv`: "Seriously what is your tech stack and project that AI does it right the first ti…"
- `rdc_l56cm17`: "If you have an internally developed tool it's fine. The problem is that ChatGPT …"
- `ytc_UgzQVo8sN…`: "I hope they legislate that anything built with an AI must be identified. That w…"
- `ytc_UgzrtVMff…`: "we havent lost ccontroll of antithing thers no AI superintelligence sad you spre…"
- `ytc_UgyH9I9Jk…`: "NO ONE I know asked for AI. Yet it’s being shoved on almost every aspect of our …"
- `ytr_Ugx-9Nvlm…`: "Yes the truth is these are the Aboriginal Indigenous Copper color people stolen …"
- `ytc_UgxJm_YUx…`: "Waymo is the Betamax of autonomous ride-hailing service. Tesla will be scaling o…"
Comment
I believe we should not call LML giving wrong answers "hallucination".
Following the opinion of psychology podcaster Dr. Honda, for an entity to have hallucination it needs to have a consciousness, a mind (and something comparable to dreams) first. Complex things, that the LML's evidently do not possess. Furthermore following my own opinion as a current undergrad student of English linguistics, I believe that by using this word, we might (sub)consciously grant the LML's the baseline properties to have hallucinations aka. a mind. In the long run, that might add to people overestimating the skills that this generation of AI has. Also, it covers up the LML's great rate of misinformation by not calling them what they are: wrong information.
Source: youtube · Posted: 2026-01-09T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
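
For downstream analysis it can help to hold each coded record in a typed structure. Below is a minimal sketch in Python; the field names come from the table above, and the example values are only those visible in this page's raw response, so the full value sets may be larger. The class name `CommentCoding` is illustrative, not part of the tool.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CommentCoding:
    """One coded comment, mirroring the Coding Result table above."""
    id: str              # comment ID, e.g. "ytc_Ugz38yoNwCGprM9Gr3R4AaABAg"
    responsibility: str  # values seen here: "none", "user", "developer"
    reasoning: str       # seen: "deontological", "consequentialist", "virtue", "unclear"
    policy: str          # seen: "regulate", "unclear"
    emotion: str         # seen: "mixed", "fear", "outrage", "indifference"
```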
Raw LLM Response
```json
[
  {"id":"ytc_UgwYHWbqZ54ejGxUq-x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz38yoNwCGprM9Gr3R4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz-hK1LOR8_MRDn6Lx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzqhyYrSJZgq10c9mp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyuJcq3hbENEtLnvyl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyTtnbsDrCRj52umzZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxKjiXlPpsmKLuOdGp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyvLH7rbAIn3V3ImIh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw72l4Fqx5K8MOcuTV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyr-Dl4q-EPkMB37-F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
```
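
The lookup-by-ID view above is straightforward to reproduce offline. Here is a minimal sketch, assuming the raw response is a JSON array of records as shown; the function name `lookup_coding` is hypothetical and not part of the tool.

```python
import json


def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Return the coding record for one comment ID from a raw batch response.

    The model is expected to return a JSON array of records, as shown above;
    a malformed response yields None rather than raising an exception.
    """
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        return None
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None


# Example with a one-record excerpt of the response above:
raw = ('[{"id":"ytc_Ugz38yoNwCGprM9Gr3R4AaABAg","responsibility":"none",'
       '"reasoning":"deontological","policy":"unclear","emotion":"mixed"}]')
print(lookup_coding(raw, "ytc_Ugz38yoNwCGprM9Gr3R4AaABAg"))
```

Matching on the `id` field is what keeps batch coding auditable: any row in the Coding Result table can be traced back to the exact model output that produced it.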