# Raw LLM Responses

Inspect the exact model output for any coded comment.

## Look up by comment ID
## Random samples

- "Ai also cannot draw gojo from chapter 236 in therapy (with upper half on couch a…" (`ytc_Ugx_6oiiC…`)
- "Why are you surprised when a machine that is comprised of a HUGE database with A…" (`ytc_UgyplItSr…`)
- "I actually agree to an extent / Ai is a great way for people to see their ideas in…" (`ytr_Ugyj9JZ46…`)
- "Everyone has a plan to implement AI BUT NO ONE HAS A PLAN FOR THE WORKERS!!!!!…" (`ytc_Ugxk6rFgJ…`)
- "The iPhone argument is precisely the reason agi won’t develop to the timescale t…" (`ytc_UgzLDhXF2…`)
- "Pretty much every day. Hasn't stopped me from earning a bunch of promotions tho…" (`rdc_hcba4pe`)
- "We understand your concerns! The portrayal of AI can certainly raise ethical que…" (`ytr_UgzSKda9e…`)
- "I don't know much about AI, and i'm certainly not a certified expert on this fie…" (`ytc_Ugz2czzBX…`)
## Comment

> I was automating a process at work to help a team with ML and they laughed at how poor the outcomes were. Fast forward a few months and the model was getting more and more accurate until we could confidently allow this ML to take lead on the process and free that team up to do other things (nobody was fired)

reddit · AI Harm Incident · 1671113381.0 (Unix timestamp) · ♥ 3
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:06:44.921194 |
## Raw LLM Response

```json
[
  {"id": "rdc_iyyvts5", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_iyywhtu", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_izlor9x", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_izog0bj", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_j0boepu", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
```
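The raw response is a JSON array with one record per comment in the batch, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such a response might be parsed and indexed by comment ID; the field names come from the response shown here, but the parsing code itself is an illustrative assumption, not the tool's actual implementation:

```python
import json

# A one-record excerpt of the raw model output shown above.
raw = '''
[
  {"id": "rdc_j0boepu", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"}
]
'''

# Coding dimensions as they appear in the records above.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_response: str) -> dict:
    """Parse the model's JSON array and index records by comment ID,
    skipping any record missing one of the coding dimensions."""
    records = json.loads(raw_response)
    return {
        rec["id"]: {dim: rec[dim] for dim in DIMENSIONS}
        for rec in records
        if DIMENSIONS <= rec.keys()
    }

codings = index_codings(raw)
print(codings["rdc_j0boepu"]["emotion"])  # → approval
```

Indexing by ID is what makes the "Look up by comment ID" view above cheap: each lookup is a single dict access rather than a scan of the response array.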