Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
wouldnt we inevitably run out of data to train AI on? i mean 5-20 percent has a…
ytc_UgxRZF2vO…
We want the Jetsons. AI and robots assist, not take over. And I don't recall the…
ytc_UgwdRr3B2…
Smoking gun ... AI is still not good enough to replace an actual human at any of…
ytc_Ugyc4oQrg…
Ais are not to okay! They mist be destroyed. Ai Robots should never replace huam…
ytc_Ugzh3lPfz…
Copyrighting something that was created using "AI" trained on millions of images…
ytc_UgzMF0bKO…
Humans must agree to limit the architecture of Agentic AI systems to those with …
ytc_Ugyfv3_yc…
There's two things that I've learned since AI: in general, people don't know how…
ytc_Ugw-RNJE6…
"Autonomy over your point of view, and yourself as a human person." Speilburg s…
ytc_Ugz9zZ3E0…
Comment
What do people mean when they say hallucinations or possible hallucinations when referring to an answer chatGPT gives ?
Source: reddit
Topic: AI Responsibility
Timestamp (Unix): 1706955681.0
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_kopy14y","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"rdc_koqzl47","responsibility":"company","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
  {"id":"rdc_kor1o3c","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_koqrka7","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_kor1vaf","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
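The lookup-by-comment-ID flow described above can be sketched as follows. This is an illustrative reconstruction, not the tool's actual implementation: the `lookup_coding` function name is assumed, and the inline `raw_response` is a two-record excerpt of the raw LLM response shown above.

```python
import json

# Excerpt of a raw LLM response: one JSON record per coded comment,
# each carrying the four coding dimensions from the result table.
raw_response = """[
  {"id":"rdc_kopy14y","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"rdc_koqzl47","responsibility":"company","reasoning":"contractualist","policy":"industry_self","emotion":"approval"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding record for comment_id, or None if it is absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

coding = lookup_coding(raw_response, "rdc_koqzl47")
print(coding["policy"])  # -> industry_self
```

Each dimension coded as `"unclear"` (as in the table for `rdc_kopy14y`) simply passes through; the lookup does no validation of dimension values.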