Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is HIGHLY intelligent, but it is also HIGHLY schizophrenic, and hallucinates information and citations. I used Grok recently for some medical questions, just for the fun of it thinking it would fail spectacularly. First was about cramping during a work out routine. With some persistence, including fact checking, I got a good recovery plan. After, I tried a different medical issue, and 80% of the citations either did not say what it thought they said, or were outright hallucinations. It did come up with an interesting theory, and some mineral supplementation for treatment (beyond RDAs but well within safe daily values). At absolute best, the theory might be interesting to bring up to my doctor, but otherwise people should know how much AI hallucinates.
youtube AI Harm Incident 2025-11-25T01:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwpgWXvByA7yIkNIOh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwnBc6Cc8Rw_e2ml2d4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyFKZt-JGm7PR4HVOh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw8rAc-HGwAKGhIB854AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxzSHipi1CmuhZ2jS14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxMuzm2ehIwhS5dPEV4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx47AUM71m-t7wngRV4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx-JCANSVo1HV-EYst4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_Ugy-LzBNfKrVcLinjWB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyXbgdIqJUHfVEFYAN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
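The raw response is a JSON array with one code record per comment. A minimal sketch of how such a response might be parsed and checked before use — the allowed values below are inferred only from the codes visible on this page and are assumptions, not the annotation tool's full schema:

```python
import json

# Allowed values per dimension, inferred from the codes shown above.
# This is an assumption for illustration, not the tool's authoritative schema.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "mixed", "outrage", "fear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed code records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Drop anything that is not a dict with an id and valid dimension values.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical single-record response for demonstration.
raw = ('[{"id":"ytc_x","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"mixed"}]')
print(parse_codes(raw)[0]["responsibility"])  # → ai_itself
```

Validating against a fixed vocabulary like this is one way to catch the common failure mode where the model invents a label outside the codebook; such records are silently dropped here, though a real pipeline might instead flag them for manual review.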