Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Never have I ever met a single artist who would rather generate ai slop. As an a…
ytr_UgxQB9ISi…
Biases are what come to mind when people think of AI as less than human because …
ytc_UgxaqldrH…
I don't know jack shit about art or AI art. but from my point of view the artist…
ytc_Ugx_6mOGg…
"if you create free labor"? Automation has been creeping into manufacturing for…
ytc_UgyxUvLYi…
And why is this a good thing? The guy has 140 000 likes on his ‘artwork’ and all…
ytc_UgyurbWf5…
OK I get that using AI to make art and uploading it is unethical (and I 100% agr…
ytc_UgyHXHpIY…
I’m a firm believer that at least half of these pro-AI “super intelligence is co…
rdc_nt7m2me
They only want to make it into a woman so when it kills everyone they can preten…
ytc_Ugwh8oOnG…
Comment
A drawback of Tesla's FSD and autopilot is that the only two options are system engaged and disengaged. If you have an impaired driver who, as in this case, did comply with the autopilot prompts, then you have a car that is partially controlled by the impaired driver and partially controlled by the software. If the driver, impaired or not, does not comply with the prompts, the system disengages the autopilot, leaving an impaired, sleepy, or unconscious driver in control (really?), which I guess is no different than someone who doesn't have a self-driving feature. Pretty much all cars with autonomous driving will make these dangerous, and in some cases fatal, mistakes, simply because the AI is not close to being capable of making appropriate decisions in all driving situations.
While the 2019 models had "autopilot", it was renamed Full Self Driving after that; autopilot is essentially adaptive cruise control with lane-holding capability. Basic autopilot is adaptive cruise control, and advanced autopilot adds the ability to have the car stay in the current lane. Steer into another lane and autopilot disengages. FSD is the software that autonomously steers the car, changing lanes, slowing for traffic, and driving to the programmed destination, hopefully without problems. I have advanced autopilot, and when in lane-keeping mode it prompts me about every 5 minutes to apply slight pressure to the steering wheel, more often if it believes I am not paying attention. The driver in the accident was receiving roughly 3 prompts per minute, so they were alert enough to respond to the prompt but nothing more.
I have received trial periods for FSD, now called FSD (Supervised), and it is capable enough on city streets in light to moderate traffic. In heavy traffic or on the freeway, not so much. I have to pay close attention because while the decisions are generally OK, sometimes they are not good, and there is no way to tell when that will happen. The video said the autopilot was confused by the situation and so did not react quickly enough. I can see how that is possible, and it happens far more frequently than it should. When confused, the reasonable course would be for it to slow down and pull over, but it appears the program's response when confused is to simply disengage. The primary assumption behind FSD and advanced autopilot is that the driver is actually capable of operating the vehicle and is paying attention, which is a bad assumption. Most people will be paying attention, but not all. My impression of FSD was that it was like having a 16-year-old with a brand-new license driving your car. Scary.
youtube
AI Harm Incident
2025-01-21T08:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyAFQBeSTNdDFCNLCd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzFJUGkjhTnVZaO2cd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz8Kq0kkDafXWeuKAZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxyJw0lBpwlB8t0uM94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx6BdvFC32STL9goAp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVwdsHIRRfPHIeXlh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwisFB6Iwt5bwYPyDh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwFsr2csxlbD3r8UiN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgypHeJg7YS7YKruSSd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugxh50gLsl7n_pjy-594AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
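A raw response like the one above should be validated before its rows are written into the coding table, since the model can emit values outside the coding scheme. The sketch below is a minimal Python example; the sets of allowed values are assumptions inferred from the values visible in this section, not a documented schema.

```python
import json

# Allowed values per coding dimension. These sets are ASSUMED from the
# values observed in this dashboard, not taken from a published codebook.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "approval", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows that pass validation."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every row must carry a string comment ID...
        if not isinstance(row.get("id"), str):
            continue
        # ...and an allowed value for each coding dimension.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical IDs for illustration; the second row is dropped because
# "robot" is not an allowed responsibility value.
raw = (
    '[{"id":"ytc_abc","responsibility":"company","reasoning":"deontological",'
    '"policy":"liability","emotion":"outrage"},'
    '{"id":"ytc_def","responsibility":"robot","reasoning":"unclear",'
    '"policy":"none","emotion":"fear"}]'
)
rows = parse_codings(raw)
```

Rejected rows could also be logged and re-queued for recoding rather than silently skipped, depending on how the pipeline handles malformed output.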