Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Lol too little support from the government. Japan is a shitty country on many re…" (rdc_gspb42i)
- "Alternate title: Alex argues with ChatGPT about roundabout semantics for 16:26 m…" (ytc_UgyDUhc8l…)
- "this was a comforting take on a very difficult and scary subject. Thank you. We …" (ytc_UgwOXTR9N…)
- "The human don't deserve this world... And ai will be more dangerous coz they hav…" (ytc_Ugy7VSHou…)
- "Going to be funny in 2030 when you can go to jail for hurting a robots fake feel…" (ytc_UgzS35J6L…)
- "DEStroy her it's still just 2019 . There are literally movies on why we should …" (ytc_UgyLG2Y0a…)
- "Teenagers don’t listen to parents, teachers.... you expect them to become self m…" (ytc_Ugxig0sUp…)
- "This is a truly profound and complex set of observations and questions. Thank yo…" (ytc_Ugzwdc09z…)
Comment
It's my impression that people *believe* they can take a nap while these are on 'autopilot'. There was a major incident here in Phoenix, where Waymo was testing and hit and killed a person on a bicycle (the bicyclist was not in the cross walk, and it was a bit dark out) the person who was supposed to be monitoring the vehicle behavior was negligent and not performing her job as the 'human' to prevent it. If she were alert, i.e. doing her job, the outcome probably would have been injury, not death. Could that be the case with these two tragic stories?
- Platform: youtube
- Topic: AI Harm Incident
- Posted: 2022-09-03T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw7-HYtNcHnzRvs3R94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzo2oNBN_ODlk4no9x4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwQEKxn-LOhiRGAUSV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwAfeNphLL4V8Nluql4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwc0uu-jsgVHswe7Wx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwbsSTAmGuk6L50rpZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugy4SgqPfyE_aaq6KHh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzYA_EBC1DYHTFejOF4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugycy61gUrhGP2XIY3l4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzJlwQK-98RxCDFN9d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
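A response like the one above can be turned into the per-comment lookup this view supports ("look up by comment ID") with a small parser. The sketch below is a minimal illustration, not the tool's actual implementation: the four dimension names come from the coding-result table, but the sets of allowed values are only inferred from the samples visible here and the real codebook may differ.

```python
import json

# Allowed values per dimension, inferred from the visible samples
# (assumption: the actual codebook may define additional values).
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none"},
    "emotion": {"outrage", "fear", "resignation", "approval", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment ID, validating every dimension."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with one record shaped like the output above (hypothetical short ID):
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
coded = parse_batch(raw)
print(coded["ytc_example"]["policy"])  # -> none
```

Rejecting any record with an out-of-vocabulary value (rather than silently keeping it) is what makes a "Coded at" timestamp trustworthy: every stored row is known to conform to the coding scheme.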