Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or pick one of the random samples below.

Random samples
- `rdc_g69b64w`: Only the doctor won't be able to drive an Uber because that too will be automate…
- `ytc_UgyVCFGQ7…`: If they replace too many people with AI, then there won't be anybody that can af…
- `ytc_UgxYkE7Es…`: From the point of a reader, I would be outraged if I read blogs done by an AI an…
- `ytc_Ugzmqd_aa…`: No, but people tried regulating social media and failed miserably. I personally …
- `ytc_UgyhByynO…`: FSD isn't FSD. Full (supervised) Self Driving. FSSD. But, it isn't Full or Se…
- `ytc_Ugzt4WBjV…`: 14:43 Is it safe ... to give ai access to weapons... bruh if it was safe, my gue…
- `ytc_UgxzQEIvO…`: No Azure for apartheid ! Microsoft’s A.I is being used by Israel for target bomb…
- `ytc_UgyrPTx31…`: Can be crazy, but... we are here (earth) for some reason... maybe the mission wa…
Comment
> Pump those AI breaks... they are simply making a philosophical statement, they aren't declaring googles car safe for the road just yet. "But the burden remains on self-driving car manufacturers to prove that their vehicles meet rigorous federal safety standards". http://blog.caranddriver.com/nhtsa-sides-with-google-officially-declares-autonomous-car-software-a-driver-sorta/
- Source: reddit
- Topic: AI Harm Incident
- Posted: 1455342258.0 (Unix timestamp)
- Score: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_czy5d00","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_czy82ur","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"rdc_oi3d0s8","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"rdc_oi25lks","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_efildd8","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
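A minimal sketch of how a raw batch response like the one above could be parsed and indexed by comment ID before display. The dimension vocabularies here are assumptions read off the values visible on this page, not the project's actual codebook, and `index_batch` is a hypothetical helper:

```python
import json

# Allowed categories per dimension (assumed from values seen in this
# tool's output; the real codebook may define more or different labels).
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval"},
}

def index_batch(raw: str) -> dict:
    """Parse one raw LLM batch response and index the codings by comment ID."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        # Reject rows whose values fall outside the expected vocabularies.
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim} value {row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return coded

raw = ('[{"id":"rdc_czy82ur","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate",'
       '"emotion":"indifference"}]')
coded = index_batch(raw)
print(coded["rdc_czy82ur"]["policy"])  # regulate
```

Indexing by ID is what makes the "look up by comment ID" view above a constant-time dictionary access rather than a scan over every batch.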