Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- This is one of the videos on this channel that comes across overly militant and … (ytc_Ugwb-0oSf…)
- Thank you for speaking about this Charlie. I am an actual artist and it's very u… (ytc_Ugy04bz4E…)
- Climate change is serious, but it is not an existential risk, that is it will no… (ytr_Ugxo2Xlcv…)
- also AI is not "just a Tool" at least not used like one.. A tool doesnt do your … (ytc_UgzH6bh1A…)
- @TheMerchentOfDeath yeah i agree after using claude for like 2 days at the most,… (ytr_UgzK7LGiS…)
- Pandora is out of the box pal. The negative use of AI will be paramount. This … (ytc_UgzU1vxo0…)
- I know that fanfics aren't the pinnacle of high art, but if I could write my 80k… (ytc_Ugzp6rLsl…)
- The thing next to your prompt when you ask the ai to do something I know is to r… (ytc_UgzlulJ-f…)
Comment
The only relevant question is: "Are self-driving vehicles killing more people than other vehicles?"
Of course there will be accidents where the machine is responsible. The question is not whether a human would have avoided a given accident, because a human would have had a different accident anyway. Humans are known for drinking, driving too fast, not keeping a safe distance, and doing reckless things because it's "fun"; machines, on the other hand, have bugs. You can't compare individual accidents, but you can compare statistics.
If self-driving cars kill ten times fewer people than human drivers, then we should always use self-driving cars.
Humans are so bad at driving that I suspect self-driving cars are already way better than us.
youtube · AI Harm Incident · 2018-03-26T09:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
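As a rough sketch of the schema behind this table: each coded comment receives one label per dimension. The label sets and the `CodingResult` name below are inferred from the values visible on this page, not taken from the pipeline's actual code, so treat them as illustrative and non-exhaustive.

```python
from dataclasses import dataclass

# Label sets observed in this dashboard; assumed, not exhaustive.
RESPONSIBILITY = {"none", "distributed", "company", "user", "ai_itself"}
REASONING = {"consequentialist", "deontological", "mixed"}
POLICY = {"none", "liability", "regulate", "ban"}
EMOTION = {"indifference", "resignation", "mixed", "outrage", "fear"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise if any dimension falls outside the observed label sets."""
        for value, allowed, name in (
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ):
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")
```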
Raw LLM Response
[
{"id":"ytc_UgwRetTsi4i0BqNRF114AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy9Q4IOXYexIL_Uknd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxcAqQObdXaeSzh81B4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwX_g2oZkBEcBYpK1x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwo3xZi5Qa15kzDWnR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy1QL1_yfOFfFRBVvh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyTduAF4Rg9I0AUbUx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzsDN3E4w5XR8_3azJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwm7oRC8jx_I495dNx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwr0t6NT-Q7TyAiMbd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
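For completeness, a minimal sketch of how a raw batch response like the one above can be parsed and indexed to support the comment-ID lookup at the top of this page. The `index_batch` name is illustrative, not part of the dashboard's actual code; it assumes the response is a JSON array of records keyed by `id`, as shown.

```python
import json

def index_batch(raw_response: str) -> dict[str, dict]:
    """Parse one raw LLM batch response (a JSON array of coded
    comments) and index the records by comment ID for lookup."""
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}

# Usage: paste the array above as `raw`, then look up one comment.
# by_id = index_batch(raw)
# by_id["ytc_Ugy9Q4IOXYexIL_Uknd4AaABAg"]["emotion"]  # -> "resignation"
```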