Raw LLM Responses
Inspect the exact model output for any coded comment. Records can be looked up directly by comment ID.
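As a rough illustration of that lookup, here is a minimal Python sketch. It assumes the coded records were exported as a JSON array shaped like the "Raw LLM Response" items at the bottom of this page; the file name `coded_comments.json` is hypothetical, not this tool's actual storage.

```python
import json

def load_records(path="coded_comments.json"):
    """Load coded records exported as a JSON array (assumed layout),
    keyed by their comment ID for direct lookup."""
    with open(path, encoding="utf-8") as f:
        return {rec["id"]: rec for rec in json.load(f)}

records = load_records()

# The sample IDs below are truncated for display; lookup needs the full ID.
# This one is the first full ID from the raw response at the bottom of the page.
rec = records.get("ytr_Ugxgzvu-FwOQu9bJ0vd4AaABAg.ALsBNNr-MjMALt3EEo1qKr")
if rec is not None:
    print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
```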
Random samples (comment text and IDs truncated for display):

- "I am not afraid of AI because everyday I am learning tools those are using AI.…" (ytc_UgzF9wCPC…)
- "Everybody talks about that “AI is an insult to life” quote, but that last thing …" (ytc_UgzCaBsLW…)
- "Of course I don't think we should trust AI who's programming it human being with…" (ytc_Ugw6MLZxJ…)
- "I suck at drawing and modelling but I've willed myself into doing it anyway Ai i…" (ytc_UgzLnATJo…)
- "Bruno Sardine dude self driving cars are not that bad, you can sleep while going…" (ytr_UgyclY637…)
- "This was a great video! It inspired me to sign up for ChatGPT and start using it…" (ytc_UgwZqMpXC…)
- "Self driving cars don't even operate 100% effectively so a big commercial truck …" (ytr_UgwrqW3-G…)
- "AI is also prevelent in the language learning community, people having a convers…" (ytc_UgzxSixg5…)
Comment
@Novaruu I am by no means saying a drunk, intoxicated, or distracted driver is safe, and they absolutely shouldn't be on the road, but I wouldn't consider Tesla "self driving" to be much better by itself. My issue is that Tesla self driving, as advanced as it is, has numerous issues, and by itself is nowhere near as reliable or safe as a "good" driver. To put it another way,
It would be dumb for me to say "Oh, my grandmother can't see well, she's not super attentive, she makes lots of mistakes, she's just not a safe driver anymore, and she shouldn't be on the road by herself...but if you put her in a car with me in a co-driver's seat, complete with duplicated controls so I can take over at a moment's notice, *she's an amazing driver!* "
She isn't suddenly a safe or good driver, you just have the safety net of a "good" driver being able to take over the second a mistake is made...so why is that suddenly a good argument when we're talking about self driving cars? If it requires a competent human behind the wheel in order to be safe, it's not safe, you've just given it a safety net. And when that safety net screws up, whether it be them being drunk, intoxicated, distracted, whatever, suddenly you have an unreliable self driving car with no safety net to protect itself (or others) from dumb mistakes or glitches. Yeah, it *might* get there without issue, or it *might* only have a minor hiccup, that although a stupid mistake, isn't necessarily "unsafe"...but it *might* also not be able to see through bad weather because of the cameras, and it slams right into a light pole, sign, pedestrian, or car. It also *might* make a mistake like what's discussed in the video, where it just doesn't stop despite the OBVIOUS signs saying it needs to.
Now, is this inherently a bad thing? No. There's supposed to be a "good" driver in the car anyways, who is paying attention, so it *shouldn't* be able to make such colossal errors. We'll take the data we get from it and improve on the next version. It IS an issue though, when you sell it as "self driving" or "auto pilot", marketing it as a perfectly safe self driving car that is so good you don't even need to watch the road. When you do that, you end up with a whole lot of good, average, and bad drivers, who buy your car expecting that they won't have to pay attention as much as they normally would when driving.
Bleh, apologies for the long rant. My point is, no, I don't trust bad drivers. Bad drivers are a hazard and shouldn't be allowed on the road. BUT, the only difference between a bad driver and Tesla "self driving", imo, is that in MOST cases, "self driving" comes with the safety net of a good or average driver. So, what happens when you combine a bad driver with a Tesla? You get...a bad driver, and an accident waiting to happen.
youtube · AI Harm Incident · 2025-08-17T23:4… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
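Each of the four coding dimensions above takes values from a closed vocabulary. As a minimal sketch, that schema could be captured with Python enums; the value sets below are inferred only from the codes visible on this page and in the raw response below, so they are likely incomplete.

```python
from enum import Enum

# Value sets inferred from the records shown on this page; likely incomplete.
class Responsibility(Enum):
    COMPANY = "company"
    USER = "user"
    NONE = "none"

class Reasoning(Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    UNCLEAR = "unclear"

class Policy(Enum):
    NONE = "none"
    REGULATE = "regulate"
    LIABILITY = "liability"

class Emotion(Enum):
    APPROVAL = "approval"
    OUTRAGE = "outrage"
    INDIFFERENCE = "indifference"
    MIXED = "mixed"
```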
Raw LLM Response
```json
[
  {"id":"ytr_Ugxgzvu-FwOQu9bJ0vd4AaABAg.ALsBNNr-MjMALt3EEo1qKr","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugxgzvu-FwOQu9bJ0vd4AaABAg.ALsBNNr-MjMALtOd75r8fq","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_Ugxgzvu-FwOQu9bJ0vd4AaABAg.ALsBNNr-MjMALtU4HxULzX","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyjDiiZIux5i0uN85l4AaABAg.ALrr3NOGg1pALrrwqnJySG","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytr_Ugz1uaIjR8b2_oZMcVZ4AaABAg.ALrnOPfsKpNALwv1a-70Wc","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugz1uaIjR8b2_oZMcVZ4AaABAg.ALrnOPfsKpNALyS028qutc","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgyGR7tMS8aiGVLQTA14AaABAg.ALrljpcXbqAALroRj3q9pO","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgwUTrZk5qHaYQCZrZ14AaABAg.ALrhmI2U_5tALsVUm9jHH7","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"ytr_UgwFGlnCrwlzIISi5A54AaABAg.ALrhkfZ0fL6ALsgK10Zdij","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgyCN_1HFgMv51CNviV4AaABAg.ALrhcD4wR9dALsNppsNyYE","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
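Downstream code presumably parses this array and validates each item before storing the codes. A hedged sketch of that step follows; the skip-on-unknown-value behavior and the allowed vocabularies are assumptions inferred from this page, not documented pipeline behavior.

```python
import json

# Allowed values per dimension, inferred from the records on this page
# (likely incomplete; an assumption, not the pipeline's actual codebook).
ALLOWED = {
    "responsibility": {"company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "outrage", "indifference", "mixed"},
}

def parse_llm_response(text):
    """Parse a raw LLM response (a JSON array) into {comment_id: codes},
    skipping items that are malformed or use out-of-vocabulary values."""
    coded = {}
    for item in json.loads(text):
        cid = item.get("id")
        if not cid:
            continue
        codes = {dim: item.get(dim) for dim in ALLOWED}
        if all(codes[dim] in values for dim, values in ALLOWED.items()):
            coded[cid] = codes
    return coded
```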