Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The test isn't that checkbox itself. It is the way the mouse moves until it clic…
ytc_UgyTMjw4J…
I only take waymo in LA now. I haven't taken an uber since Waymo came out.…
ytr_UgxFO41VX…
The 100 TRILLION dollar question. AI+Robotic+Automation... In my humble opinion,…
ytc_UgwoUgDGB…
If all AI wasn't flood cleaning, maybe I'll feel a lot better start. Unfortunate…
ytc_UgynsozWU…
People need to see AI in a different light. Stop asking if it is going to repla…
ytc_UgzCTXn6t…
This is the problem when the press people need to talk to professionals instead …
ytc_UgwE3j9tb…
If he thinks AI makes him an artist, then the AI is the artist itself. After all…
ytc_UgxxYg-P6…
why shouldn't we smash the AI then? if it threatens everyone's livelihood, way o…
ytc_UgwWXvyG5…
Comment
I bitched out chatgpt and told it to stop being a yes man. It worked.
Then I got mad again later and told it to not interpret intentions. To only answer the question I asked directly and not add extra information. That worked well too.
It doesn't have a personality anymore and it's like a more robust Google search now.
| Field | Value |
|---|---|
| Source | reddit |
| Category | AI Harm Incident |
| Timestamp (Unix) | 1750119881.0 |
| ♥ | 4 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_f508xu5","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"rdc_my5zxaw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_my7lthw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_my6g7bn","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_my67nk9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
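The lookup flow the page describes (retrieving one coded comment's dimensions by its ID from the raw model output) can be sketched as follows. This is a minimal illustration only; `lookup_coding` is a hypothetical helper, not part of the actual tool, and the JSON is the response shown above:

```python
import json

# The raw LLM response shown above, reproduced verbatim.
RAW_RESPONSE = """[
{"id":"rdc_f508xu5","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"rdc_my5zxaw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_my7lthw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_my6g7bn","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_my67nk9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for record in json.loads(raw):
        if record["id"] == comment_id:
            return record
    return None

# The record behind the Coding Result table above:
print(lookup_coding(RAW_RESPONSE, "rdc_my6g7bn"))
```

Looking up `rdc_my6g7bn` returns the record whose dimensions (responsibility: user, reasoning: consequentialist, policy: none, emotion: approval) match the Coding Result table.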