Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
How is AI ultimately sustainable with the vast amounts of water and rare earth m…
ytc_UgyLqQEcp…
I know why not, because it could mean human extinction very easily, and the like…
ytr_Ugw5l9Ns_…
most AI startups are just $$ harvesters. from years of tax pays and wealth creat…
ytc_UgxkfHVts…
I am by no means an expert on AI but I honestly believe that in 20 years max the…
ytc_Ugz3P-0hF…
Infinite is not a number - It is a set of numbers. Your question was in error. I…
ytc_Ugx-TzLvC…
Some folks dont really seem to understand capitalism/automation as well as how m…
ytc_Ugw_gdrzL…
If you’re genuinely concerned about where AI is heading but don’t know where to …
ytc_UgyzZ_Hkm…
i would be so annoyed if someone was interrogating me this way, shaking every wo…
ytc_UgzcHuTxf…
Comment
Great story! I would've liked more statistics, especially number of deaths and crashes in proportion to active Teslas with auto pilot vs. proportion deaths / crashes of some other comparable manufacturers. Because this is what all pro autonomous vehicle people is screaming: people still make more mistakes! Which of course on its own is not a sound argument: if a random selection of cars of brand A had brakes that could just, by construction, suddenly completely stop working for no reason, and you could prove that, they would not be allowed on the streets. Of course issuing a warning would not suffice. This is where you would build the case against Tesla: They are obviously promoting a technology which we are encouraged to hand over our safety to, while still claiming in the details that it isn't ready for that. Calling something that is SAE 2 'auto pilot' and promoting it this hard by constantly claiming it will save lives etc is straight up dangerous. I mean what is the point if you should always keep your hands on the wheel and feet on the pedals and be prepared at all times? Of course people will act the way the technology permits them to, not what Tesla writes in the manual. They know this, and this is why I actually think they will have to pay up enormous sums pretty soon. There's going to be whole law firms working on nothing but these kinds of cases very soon.
youtube
AI Harm Incident
2024-12-23T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugylawk4Wwo2HaZN6Gd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgygC_qYcmGjSMiJmA94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx3VkzIh25fPT5W7Gh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzTcfQFE4lU3kA-jFl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgygOgEYmoGpSAABP_F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxo0KySK5XL5aODMIp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugym8uQepetHZvqvByt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzK4hC1-MQssaF_xpV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzftlsHe8yLGgZJjS14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzCPvU_oF2h9AS6a_d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
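The raw response is a JSON array with one object per coded comment, keyed by comment ID across the four coding dimensions shown in the table above. A minimal sketch of how such a response could be parsed, validated, and indexed for the look-up-by-comment-ID view; the allowed value sets below are inferred only from the codes visible on this page, and the full codebook may define others:

```python
import json

# Value sets inferred from the codes that appear in this view; the
# actual codebook may be larger (an assumption, not the official list).
ALLOWED = {
    "responsibility": {"government", "company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation", "mixed", "unclear"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) and
    return a dict keyed by comment ID, rejecting any record whose
    dimension value falls outside the inferred code sets."""
    records = json.loads(raw_response)
    by_id = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id
```

With the response parsed this way, a lookup such as `index_codings(raw)["ytc_Ugym8uQepetHZvqvByt4AaABAg"]` returns the full coded record for that comment.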