Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
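Under the hood, a lookup by ID only needs to find the stored record whose ID matches. A minimal sketch, assuming coded comments are kept one JSON object per line; the file name and field names here are hypothetical:

```python
# Minimal lookup-by-ID sketch; "coded_comments.jsonl" and the "id" field
# are assumptions about how the coded records are stored.
import json
from typing import Optional

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> Optional[dict]:
    """Return the stored record (comment text, codes, raw LLM response) for one ID."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: lookup_comment("ytc_UgwuRx9UpPhP587tdo14AaABAg")
```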
Random samples — click to inspect
- 1) A robot may not injure a human being or allow a human to come to harm through… (ytr_UgzbzBmLQ…)
- Argument number one: "If It can experience pleasure and pain, it Deserves Rights… (ytc_UgwPT_GeN…)
- we were begging it for mercy, but AI did it again and again 28 stab wounds… (ytr_UgzO8oLg7…)
- Yeah this happens to white people too soo get off your race bait horse and settl… (ytc_UgxQXnM5E…)
- Artificial Intelligence and Robots were marketed as a boon to mankind allowing p… (ytc_UgzbfAtrb…)
- It doesrnt takr long to guess its Ai in the first flip amf the 2nd 1..I hope the… (ytc_Ugz3l2Ptc…)
- Soul is an energy, and energy is everywhere. Our bodies are just biological robo… (ytr_UgxR91ITm…)
- Just wait for AI to be integrated into the control system of our weapons, such a… (ytc_Ugy37Zfif…)
Comment
both Tesla and the driver are responsible. Driver should not have been pressing on the accelerator, overriding full self drive so he wasn't allowing the car to attempt to drive itself. And the Tesla should never let a human override the self driving computer like that to blow through a stop sign and strike a vehicle without even applying emergency brakes. All of this is tragic and avoidable and stupid.
Edit - and he got THAT MANY strikeouts? Omg. This guy was an accident waiting to happen. That is absurd. I've gotten 2 strikeouts total over the span of 2 years. It's hard to get one if you're operating the car sensibly. Tesla should perma-ban people like this from using full self drive. And this driver in particular has absolutely no right to say "I didn't know it couldn't drive itself" because the car literally screams at you that you have to pay attention and monitor it when you get a strikeout. He knew it was dangerous. Tesla knew it was dangerous.
youtube · AI Harm Incident · 2025-08-15T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
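For reference, a sketch of the per-comment record behind this table. Field names mirror the table and the example values are taken from the raw response below; the storage schema itself is an assumption:

```python
# Hypothetical shape of one coding record; the example values are the ones
# that appear in the raw batch response shown below.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str   # e.g. "company", "user", "distributed", "ai_itself", "unclear"
    reasoning: str        # e.g. "deontological", "consequentialist", "unclear"
    policy: str           # e.g. "regulate", "ban", "liability", "none", "unclear"
    emotion: str          # e.g. "fear", "outrage", "approval", "resignation", "mixed", "unclear"
    coded_at: datetime    # e.g. 2026-04-26T23:09:12.988011
```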
Raw LLM Response
[{"id":"ytc_UgwuRx9UpPhP587tdo14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgzSjj9Tp60Cr89I_tZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugy495fkc9ChMossIzB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugyk1L8QXTLa0ZHUmjh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw6Jio8EXR8fpft5eR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwRgotJplF-O_rekRx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxwOlyGLFQVbRUGKN54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwY0Ozrgn3a-9CpDex4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9UWRQflY_Lol5RJp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwKsTeKHXeqYl2q-kd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"})