Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI is HIGHLY intelligent, but it is also HIGHLY schizophrenic, and hallucinates information and citations.
I used Grok recently for some medical questions, just for the fun of it thinking it would fail spectacularly. First was about cramping during a work out routine. With some persistence, including fact checking, I got a good recovery plan. After, I tried a different medical issue, and 80% of the citations either did not say what it thought they said, or were outright hallucinations. It did come up with an interesting theory, and some mineral supplementation for treatment (beyond RDAs but well within safe daily values). At absolute best, the theory might be interesting to bring up to my doctor, but otherwise people should know how much AI hallucinates.
Source: youtube · AI Harm Incident · 2025-11-25T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwpgWXvByA7yIkNIOh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwnBc6Cc8Rw_e2ml2d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyFKZt-JGm7PR4HVOh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw8rAc-HGwAKGhIB854AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxzSHipi1CmuhZ2jS14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxMuzm2ehIwhS5dPEV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx47AUM71m-t7wngRV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx-JCANSVo1HV-EYst4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugy-LzBNfKrVcLinjWB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyXbgdIqJUHfVEFYAN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
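A downstream pipeline has to parse and sanity-check raw responses like the one above before the codings are stored, since the model may emit malformed records or out-of-vocabulary labels. A minimal sketch in Python, assuming the value sets visible in this response are the full codebook (the real codebook may define additional categories such as a developer-responsibility code):

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above; the actual codebook may permit more categories (assumption).
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "mixed", "outrage", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Drop anything that is not a dict keyed by a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep the record only if every dimension carries an allowed label.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytc_X","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"mixed"}]')
print(len(validate_codings(raw)))  # 1
```

Records that fail validation could instead be queued for a retry prompt rather than silently dropped; the sketch only shows the filtering step.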