Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Shit. This is very scary - thanks for the typically pithy & concise video. As a …" (ytc_UgyT5j9z_…)
- "Let’s be honest. You think it’s “bad and obviously AI” only because they told yo…" (ytc_UgzpBQ9ez…)
- "Pretty much👽🖖🏽& in the meantime, the other aliens are using us to create hybrids…" (ytr_UgxQb_RM-…)
- "I wouldn't be surprised if the elite uses AI to generate a mirage that America i…" (rdc_nd7kq9i)
- "It sounds like you might have a different perspective on Sophia! She's designed …" (ytr_Ugy5dKNeL…)
- "@Misaka-gt5yj Sam Altman holds 0 equity in Open AI. What exactly do you think a …" (ytr_UgxEmg4oy…)
- "Bingo. This SHOULD spark a rise for UBI calls and the like to compensate. We nee…" (ytr_UgzL-20S6…)
- "AI isn't taking over jobs, it's companies firing woke "boss chicks" who don't kn…" (ytc_UgzvcZXuo…)
Comment
The main issue is that road vehicles are essentially machines that could cause serious injury or death in case of a system malfunction (as governed by the automotive functional safety standard ISO 26262). This applies to many (potentially millions of) machines on the road that may share the exact same weaknesses and pose an unacceptably high risk of injury or death without prior warning. Compare the airbag issue which "exploded" many years ago and led to the downfall of the producer.
Humans are far from perfect either, but there are reasonable (and accepted) checks and balances in place for us. Ultimately, an individual can be convicted of a criminal driving offence. Crucially, the chance of many humans suddenly making the same catastrophic mistake while driving, after an over-the-air software update of our brains, is zero (in the near future at least). Now consider who is liable for an accident caused by a self-driving car that's driven in e.g. Germany but has its parent company in Texas. What happens if it kills someone on the road through negligence? How do you nail down the one responsible for this? We only need to look at the Boeing 737 MAX disasters to see how difficult it is to hold the man at the top ultimately responsible. There's always someone down the ladder who has to take the fall.
Having said this, I do believe that self-driving cars can become better than humans on the road, in most cases. We've just got a long way to go until we're there.
youtube
AI Harm Incident
2025-11-06T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwSwaj4I_C-18GMW9x4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugwi-KD_257Vx_4FbuR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzMNP1oZXxQdv2tL7x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyab4FUYVcYrndzH1F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxD__i3FhXv8cP8-g54AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzNTS3d39rQZfdQvoN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxxesh8DhwbmBQUzgB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxrWx29PMIll388R6p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugxmac_5Ve8-1-lBRiB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxxznVrOBwerobdTz94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
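The raw response above is a JSON array of per-comment codes, each keyed by a comment ID. A minimal sketch of how such output could be parsed and indexed for lookup by ID (assuming the model returns plain JSON with exactly the four dimensions shown; `index_codes` and the truncated sample payload are hypothetical, not part of the actual pipeline):

```python
import json

# Hypothetical raw model output, shortened to one entry from the array above.
raw_response = """
[
  {"id": "ytc_UgxrWx29PMIll388R6p4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse the model output and index codes by comment ID,
    skipping any entry missing one of the coded dimensions."""
    entries = json.loads(raw)
    return {
        e["id"]: {d: e[d] for d in DIMENSIONS}
        for e in entries
        if all(d in e for d in DIMENSIONS)
    }

codes = index_codes(raw_response)
print(codes["ytc_UgxrWx29PMIll388R6p4AaABAg"]["policy"])  # → liability
```

In practice a step like this would also need to handle malformed model output (e.g. wrapping `json.loads` in a `try`/`except json.JSONDecodeError`), since an LLM is not guaranteed to emit valid JSON.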