Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- ytc_Ugy1oWoVM… : "If you are afraid of AI, all you have to do is unplug the power cable and AI wil…"
- ytc_Ugz2t5PB3… : "Honestly, AI should stick to being for things like self checkout at retail store…"
- ytr_Ugzxcxbdf… : "We're glad you enjoyed the interaction with Johnny Cab! If you want to see more …"
- ytc_UgwmaYgbE… : "AI is the reincarnation of the devil. He’s finally made his way to all of man k…"
- ytc_UgwJgHnYK… : "So smart they’re stupid! Can someone please train Ai to destroy the evil elites?…"
- ytc_UgyBzHjHG… : "The mere thought of what AI could make possible is incentive enough to invest tr…"
- ytc_Ugz6HJQek… : "With all those billions of dollars one could easily desalinate at the edge of th…"
- ytc_Ugy9EIXmf… : "I think we need a new AI bot to fix all the other AIs, then we will be back on t…"
Comment
There are many ways to catch AI hallucinations. The way I use AI, I'm always testing for them; it's just part of how I work.
It might hallucinate on the first prompt, but if it sounds off and you want to double check, it'll usually correct itself on the second prompt. And if you don't catch it on the second, it should become obvious by the fourth or fifth.
The more important the task, the more worthwhile it is to consult a second AI model. You can even arrange an agentic array of expert models to find a consensus, though I suspect that's basically what ChatGPT and Gemini already do behind the scenes.
And that's how they have already been able to decrease their frequency of hallucinations.
I feel like the concern over hallucinations comes from people who simply do not know how to use AI well.
The limits of AI are set by its users: you get out what you put in, so if you put in slop, you get slop.
I'm not an expert on this though.
Source: reddit · AI Moral Status · Posted: 1765317918.0 (Unix timestamp) · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_nt6usbo","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_nt6njvp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_nt6wlv2","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_nt6qx0h","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_nt6jk1j","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]