Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_Ugzp80nGx…`: "22:28 is incorrect because you can indeed get an llm to believe in session that …"
- `ytc_UgzNgO0hi…`: "What about the ability of SI to manipulate and influence the way that the human …"
- `ytc_Ugz-wTu1Y…`: "Children under the age of 25 do not have fully developed brains. They cannot han…"
- `ytc_Ugyl6nNZF…`: "If the "Controllers" of AI let IT TAKE OVER - there Will Be a total world collap…"
- `ytc_UgwYGON0z…`: "In my country, people under 22 get to ride buses for free. People under 26 get f…"
- `ytr_UgwMk2TJq…`: "The concept of self-steering passenger and freight vehicles pre-dates the car an…"
- `ytc_UgzZBjhCu…`: "It makes no sense fighting a robot who already knows your next move and cannot f…"
- `ytr_Ugyj42kQg…`: "Bruh thank you, I especially hate how they cited elon musk. Literally what does…"
Comment

> The problem is that AI confidently gives the answer it seems to be most satisfactory to the user, not necessarily the correct answer and everyone gives AI unlimited latitude to make mistakes because we keep getting told that it is or will eventually be infallible.

Source: reddit
Topic: AI Harm Incident
Timestamp: 1773368541.0
♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_oa5ltcg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"rdc_oa7wdn7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_oa7n6n9","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"rdc_oa4qmfx","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"rdc_oa4w45o","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
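The raw response is a JSON array of coding records keyed by comment `id`, and the table above corresponds to the record whose `id` matches the inspected comment. A minimal sketch of that lookup step, assuming only the structure visible in the sample (the function name and the two-record fixture are illustrative, not part of the tool):

```python
import json

# Abbreviated fixture copied from the raw LLM response above:
# one coding record per comment, keyed by "id".
raw_response = """[
{"id":"rdc_oa5ltcg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"rdc_oa7wdn7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]"""

def lookup(records_json: str, comment_id: str):
    """Return the coding record for comment_id, or None if absent."""
    records = json.loads(records_json)
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup(raw_response, "rdc_oa5ltcg")
print(record["responsibility"])  # -> distributed
```

The dimension table for a comment is then just this record rendered row by row (Responsibility, Reasoning, Policy, Emotion), with the coding timestamp attached separately.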