Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Fact is, AI researchers and grad students have absolutely every incentive to hol…
ytc_UgwfwnoP3…
Now the question is, what happens when AI gets a hold of this video and realizes…
ytc_UgzxKpVRg…
The end of the Teamsters and driverless trucks can't come soon enough!!!! MPU mu…
ytc_Ugxu3_iC9…
When I was young I worked for a big company that was building boilers all over t…
ytc_UgwR8n1RS…
I figured people would be ok with AI art because it makes art more inclusive bec…
ytc_UgxjaohrW…
Can’t wait till actors are out of a job because AI can be put in a movie instead…
ytc_UgzJ-6xYs…
Exactly, it’s like we’re being told AI is inevitable instead of being asked if w…
ytr_UgzV4SHn_…
@Marksman3434 Once all human labor is automated, people will still need a way t…
ytr_UgyQzrWZp…
Comment
I haven’t seen anyone else say this really but I’m almost certain that it’s switching to this coddling mode when it detects someone might be at risk of self harm. I distinctly remember OpenAI announcing that it’s models would automatically switch in these scenarios.
reddit · AI Harm Incident · timestamp 1773325339.0 · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id": "rdc_oa1l5we", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oa1dscr", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "amusement"},
  {"id": "rdc_ofno6sn", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_imn0u6c", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_immxlnu", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
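As a minimal sketch of how the raw response above maps onto a per-comment Coding Result view, assuming plain JSON parsing (the `codings` dict and the loop are illustrative, not the tool's actual implementation):

```python
import json

# Raw LLM response: a JSON array with one coding object per comment ID,
# covering the four dimensions shown in the Coding Result table.
raw = '[{"id":"rdc_oa1l5we","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"rdc_oa1dscr","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"amusement"},{"id":"rdc_ofno6sn","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"rdc_imn0u6c","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"rdc_immxlnu","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]'

# Index codings by comment ID so a single comment's coding can be pulled up,
# mirroring the "Look up by comment ID" view.
codings = {row["id"]: row for row in json.loads(raw)}

# Print one comment's coding, dimension by dimension.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {codings['rdc_oa1l5we'][dim]}")
# → responsibility: none
# → reasoning: unclear
# → policy: none
# → emotion: indifference
```

Indexing by `id` keeps the lookup O(1) per comment and tolerates the batch ordering of the model's response.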