Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `rdc_mrtgd8d` — "You are right, but it's crazy how fast we become spoiled. If I only had any brok…"
- `rdc_nk9b528` — "AI"s problem is the insane start up costs to train a model, followed by the insa…"
- `ytc_Ugw2ak0wb…` — "Will UBI be bottom tier, or middle? I don't trust the crap myself, they rig the …"
- `ytc_Ugx5yl8dR…` — "IIt's hilarious to hear people who never used AI to make art, try to tell you ho…"
- `ytc_Ugy-Wlu1S…` — "I really hope u don't mean all AI since AI can help a lot with finding cures or …"
- `ytr_Ugwqzt58J…` — "@lasermouthfulthe mentally ill person believes the piece of wood is talking to …"
- `ytc_UgwCbas9g…` — "It strikes me that AI is a Golem. Understands instructions and can do complex ta…"
- `ytc_UgwzLMvfc…` — "Sam Altam is way to optimistic about the impact of Ai on jobs. The Schumpeter's …"
Comment
This isn’t how continuous development works, you think a company like OpenAI wouldn’t have savepoints or even save their training data in a different way?
These are valid points about the quality yes, just not buying the other part.
reddit
AI Harm Incident
2025-05-11 20:58:40 UTC (1746997120)
♥ 411
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_mrucriu","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_mru3i7r","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_mrt5k9m","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_mrte0w4","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_mrtf7tz","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
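The raw response above is a JSON array of per-comment codings, each keyed by a comment ID with the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for lookup by comment ID — the helper name `index_by_comment_id` and the embedded sample data are illustrative, not part of the tool itself:

```python
import json

# Illustrative raw LLM response, abridged from the example above.
RAW_RESPONSE = """
[
  {"id": "rdc_mrucriu", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_mrt5k9m", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding rows)
    into a lookup table keyed by comment ID."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codings = index_by_comment_id(RAW_RESPONSE)
print(codings["rdc_mrt5k9m"]["responsibility"])  # company
print(codings["rdc_mrt5k9m"]["emotion"])         # mixed
```

Keying on the model-supplied `id` is what makes the "look up by comment ID" inspection above possible: given any comment ID, the corresponding coding row can be retrieved in constant time.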