Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I simply love the way you are drawing and bring your point out (sorry, I am germ…" (ytc_UgxRgdAf4…)
- "AI should reduce the world population. 8 billion persons is far too many people …" (ytc_UgxxXl3_j…)
- "I can't wait for when a kid develops AI and releases it into the world.…" (ytc_UgzWfeemp…)
- "Nightmare scenario #1 - AI and robots taking over without their knowledge. We’re…" (ytc_UgxK9SFjK…)
- "Silly silly commenter, Ai is only good because it steals other people's art, yea…" (ytr_Ugx5jYHds…)
- "We'd better have a plan for UBI for all and abundant nuclear power before we go …" (ytc_UgxzVpDLy…)
- "the only real way to avoid ai art at this point is to all do traditional art 😭 e…" (ytc_UgynJXVgK…)
- "Agreed. Let's outlaw driving, or only allow automated vehicles. We need you to…" (rdc_dfet8od)
Comment
If streetlights were red, humans could see better at night. Near-IR high beams could help the AI see better. The AI could have simply not taken an action because the accident was unavoidable. This AI was originally designed for assisted driving, and the company didn't want their AI harming or killing their customers to avoid harming others. That is not making a moral choice. It is possible they didn't teach it about minimizing impact speeds in an unavoidable accident.
The braking distance was significantly larger than the distance to the woman, but the car's speed could have dropped to 30 MPH.
Source: youtube | Incident: AI Harm Incident | Posted: 2018-03-22T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytr_UgyxbAnQx_yRT1th3dt4AaABAg.8e531dFCSDQ8e5e8A3MlcQ","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugz-q4kGhI-Sqz0mZW94AaABAg.8e52iX72h2u8e5iU7Ob3MD","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugz-q4kGhI-Sqz0mZW94AaABAg.8e52iX72h2u8e686wTzxW_","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytr_UgyhyW1vh2pPRsAP-lF4AaABAg.8e52h0SffYg8e5eRPdoyGO","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgyrmzWNxxyPTZHJNUd4AaABAg.8e51cLwTEZJ8e53m5OS9wn","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugw-7PmKu_H1D8Zf_h14AaABAg.AI26oW-G43TAI3u9ildqJ2","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugx8JziXroYMFO7mFTt4AaABAg.AI0ty6JRpNmAI1SPVzC6XC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_Ugzlyo78ZfbJq0kQntt4AaABAg.ADE5eBQGClMARNEFtwTA4z","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxJitLN-TUBXv4lRgZ4AaABAg.9zrKUOTXqbm9zrKhiIuQhp","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgzqN3AVT00-Se0AzJZ4AaABAg.9FIX7rgDnQTABVr13xCYd_","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
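The raw response is a JSON array with one coding row per comment, keyed by comment ID. Below is a minimal sketch of how the "look up by comment ID" step could work: parse the array, index it by `id`, and check each dimension against the value vocabularies seen in this page's output. The `ALLOWED` sets are inferred from the rows shown above, not from the pipeline's actual codebook, and the helper names are illustrative.

```python
import json

# Dimension vocabularies inferred from the coded rows shown above;
# the real pipeline's codebook may define additional values.
ALLOWED = {
    "responsibility": {"user", "ai_itself", "company", "government",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "ban", "unclear"},
    "emotion": {"indifference", "fear", "mixed", "approval", "outrage"},
}

def lookup_coding(raw_response: str, comment_id: str) -> dict:
    """Parse a raw LLM response and return the coding row for one comment ID."""
    rows = json.loads(raw_response)
    by_id = {row["id"]: row for row in rows}
    return by_id[comment_id]

def validate_coding(row: dict) -> list:
    """Return the dimensions whose values fall outside the known vocabulary."""
    return [dim for dim, ok in ALLOWED.items() if row.get(dim) not in ok]

# Hypothetical one-row response for illustration.
raw = ('[{"id":"ytr_abc","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"mixed"}]')
row = lookup_coding(raw, "ytr_abc")
print(validate_coding(row))  # → [] (all four dimensions are in-vocabulary)
```

A per-row validation like this is a cheap guard against the model drifting off-schema (e.g. inventing a new emotion label), which would otherwise surface later as a silent grouping error.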