Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Start being friendly to AI, i'm sure they will remember any disgression you may …" (ytc_UgxN8vbqG…)
- "Basically an ai bot might not know what loyalty is so might not be predictable i…" (ytc_UgxOK-HNf…)
- "Just wait until we have AI automatically incorporating data breaches to blackmai…" (ytc_UgzYGLttC…)
- "LMAOOOOOO Ya'll can't be serious?! Delete your channel cuz you don't know shit a…" (ytc_UgxnCtFGG…)
- "I think a solid argument for regulation is false advertising. You can’t market m…" (ytc_UgyTfU0ER…)
- "I think most of us know in our gut that self driving cars, less trucks does not …" (ytc_Ugy7CC7Ap…)
- "People in India will be controlling those tesla robots and people will think it'…" (ytc_UgwSQKLnH…)
- "[After the next system update] Guy: hey Ai, show me nothing / Ai: *fucking shuts …" (ytc_UgyVXqheJ…)
Comment
@KsazDFW I actually looked up the findings, the report itself after the initial investigation. I don't remember the wording now, but what it actually seemed to be saying was that the car stopped controlling in some cases near impact. This would make sense if the software realizes that it has no good choices and would thus REQUIRE the human driver to take over. What it does NOT imply is that Tesla is trying to avoid responsibility, since there is obviously a data record that would not absolve them of the car driving recklessly until one second or less before a collision.
There are naturally going to be accidents, and some of them could be avoided if the AI could be made "perfect." That is of course impossible, but that doesn't change the fact that there are over 5 million miles of "autopilot" driving, on average, between accidents, whereas for all drivers the average mileage between accidents is less than 500k. Tesla's AI is safer, period. And always improving.
youtube
AI Harm Incident
2022-09-27T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_Ugx0v9_oun-TqDb_0mZ4AaABAg.9figAgQOfkz9gUHJkG63DI","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgySIcK9ZqEB5BDABx14AaABAg.9fgvU2SihCe9gJt8KrubWI","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytr_UgxEfJtmETgub6C6jAR4AaABAg.9fgXdjFqUDY9fhUNDRrbI2","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytr_UgzRiNdrKF-yX7lI5Nd4AaABAg.9fgUKTslEAT9fhkO-hK8Gm","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytr_UgytqgQTmw0Ybxc1fIJ4AaABAg.9fgFC97_nDn9foSsCbOSQz","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxfUYyk4gnRyHadMnd4AaABAg.9fgBu2LM56c9fguwmqvc6S","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytr_UgxfUYyk4gnRyHadMnd4AaABAg.9fgBu2LM56c9fjGPDzSbSi","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgxfUYyk4gnRyHadMnd4AaABAg.9fgBu2LM56c9flmnUOnnkn","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_UgxfUYyk4gnRyHadMnd4AaABAg.9fgBu2LM56c9flnA4EJuQX","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxEf-7CTmm69HqPRBJ4AaABAg.9ffSv_5EmjC9ffbPfiLBzS","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}
]
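The raw response above is a flat JSON array, one object per coded comment, with an `id` plus the four coding dimensions shown in the result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed, looked up by comment ID, and sanity-checked is below — note the allowed value sets are only inferred from the codes visible on this page, not from the full codebook, and the function names are illustrative, not part of the tool:

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from the codes observed in this
# response. ASSUMPTION: the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "liability", "industry_self", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "mixed", "resignation", "approval", "unclear"},
}


def validate_codes(raw: str):
    """Parse a raw LLM response and check every record against the codebook.

    Returns (by_id, errors): records indexed by comment ID for lookup,
    and a list of human-readable problems (missing IDs or
    out-of-vocabulary values).
    """
    records = json.loads(raw)
    by_id, errors = {}, []
    for rec in records:
        if "id" not in rec:
            errors.append(f"record missing id: {rec}")
            continue
        by_id[rec["id"]] = rec
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(f"{rec['id']}: {dim}={value!r} not in codebook")
    return by_id, errors


def tally(by_id: dict, dim: str) -> Counter:
    """Count how often each value of one coding dimension appears."""
    return Counter(rec.get(dim) for rec in by_id.values())
```

With a valid response, `by_id` supports the "look up by comment ID" workflow directly, and `tally(by_id, "responsibility")` gives a quick distribution over the batch; any out-of-vocabulary code from the model surfaces in `errors` instead of silently entering the dataset.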