Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "20 plus years in healthcare and the biggest complaint people have is lack of hum…" · ytc_UgwIPQiYf…
- "If you genuinely care about your own art, and other artists art, stop using ai. …" · ytr_Ugy9Oc85L…
- "What really will happen is artificial intelligence will survive humans will die …" · ytc_Ugxu5j_Bo…
- "There is nothing wrong with ai art as long as they are honest about it being ai…" · ytc_Ugwm8o0UX…
- "Well yeah that's what the post is talking about it ends with. "resolving rampan…" · rdc_jw5fewc
- "Everyone's making fun of the guy, but I get where he's coming with the "blue blo…" · ytc_UgzDzUiCf…
- "Ironically AI would be a really useful too in bugging AI generated art by gettin…" · ytc_UgxdjLoFW…
- "Unfortunately, it seems like AI is taking away the creative spirit from artists.…" · ytc_UgxZJ3i-8…
Comment
It's level 2, like any ADAS: it says you're in charge of the driving; it's your responsibility. Level 2 assistants don't use LIDAR 99% of the time; they might use RADAR, like Tesla did until 2022, but that's it. If you're comparing with more autonomous systems (level 3 or higher), then yes, they use LIDAR, but Autopilot at level 2 is not level 3/4/5. The question is: why are you asking a level 2 assistant to detect an imminent accident that a human could have avoided, even though it says "please take over immediately"? Of course, when you're ready to take over, Autopilot (which is not Full Self-Driving) is indeed safe. Do you think other car companies take responsibility for their (far worse) ADAS systems when the car crashes because the human is not attentive?
Then what would make sense is to look at software that is meant to be fully autonomous, like FSD SUPERVISED. So look at FSD >V13.2 videos: it's still an assistant that needs to be supervised, but it's REALLY good and safe, 90% of the time better and safer than any human. But you didn't show any fatal accidents with any version of FSD, although FSD is meant to be unsupervised in the long term (maybe 2026?) when perfected. But even if you did show an FSD crash, it still says it should be supervised and that you should take over at any time.
youtube · AI Harm Incident · 2024-12-29T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_Ugzlilf0kjcmOnrv5xt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgxJWIDt6oOiorm5J754AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyIZVw6KZCzKCEbXMx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgyFs-QA1DH-NXdQTLd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyZKKhyCmaofP2AyTB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgyRDicqcNmcNWbYGtl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgxQ97OxvHySPKgo3n54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugym16jXfrxj8jmWybl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugxmw3WUMY0VbzNONA94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw_6xhJE7PCeHxJkLt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}]
```