Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I agree. Self driving trucks still need an operator or driving going along with…" (ytc_UgzNkkODE…)
- "As a dev who also tries to do a bit of art by myself for my own games, I cant ev…" (ytc_UgydneOjI…)
- "The answer is easy...it is those who agree with the coming communist world syste…" (ytc_UgzcI1mXb…)
- "Oddly, Meta's been releasing tons of open source models that have performed quit…" (rdc_kojjnws)
- "As I've observed, if you ask ChatGPT a yes/no question, it's likely to answer ye…" (ytc_UgzH1T3KQ…)
- "Probably doing AI noise reduction to better compress the videos to save on stora…" (ytc_Ugznr6QgM…)
- "Well, it would be the same as a self driven ambulance. Which is plausible, espec…" (ytr_UgyRdN3R6…)
- "I'm not a artist but yea AI art is boring it can't take my attention. But an art…" (ytc_Ugyx-cuPx…)
Comment
As of early 2026, the most recent data indicate significant safety concerns regarding Tesla's "Full Self-Driving" (FSD) system, particularly when it fails to detect reduced-visibility conditions or when drivers are not actively supervising it.
Key 2025–2026 Safety Data and Investigations
Expanded NHTSA Engineering Analysis (March 2026): The National Highway Traffic Safety Administration (NHTSA) escalated its investigation into roughly 3.2 million Tesla vehicles (2016–2026 models). This "Engineering Analysis" covers FSD in low-visibility conditions (fog, glare, dust).
Documented Incidents: NHTSA has identified at least 13 crashes (as of March 2026) in which FSD was engaged, including one fatal pedestrian accident in November 2023 and other incidents involving fixed objects and first responders.
System Misuse & Failures: Investigations indicate that in many crashes, the system failed to detect common roadway obstacles or did not warn drivers in time to prevent collisions. A 2024 investigation concluded the system lacked standard protections, leading to incidents where drivers misused the system in ways that "should have been foreseen" by Tesla.
Unmonitored Risks: While Tesla continues to promote FSD as requiring "active driver supervision," a report in late 2025 suggested that recent software builds might allow drivers to take their eyes off the road for over 20 seconds, increasing risks if the system is not actively monitored.
Delayed Reporting Investigations: In August 2025, federal regulators began investigating why Tesla had apparently not been reporting crashes to the agency promptly.
Key Safety Concerns
Camera-Only Approach: Some experts and reports argue that relying solely on cameras without lidar or radar is insufficient for fully autonomous driving, leading to failures in detecting obstacles in poor visibility.
Failure to Recognize Traffic Signals: Reports filed with regulators have highlighted instances where FSD-equipped vehicles ran red lights or incorrectly stopped at green lights.
Sudden Disengagements: The system has been known to abruptly hand control back to the driver.
Source: youtube · 2026-04-15T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_UgwjHhB7Arb0tweKGQl4AaABAg.AVgRDFOjjPUAVo0Auke0xW","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwjHhB7Arb0tweKGQl4AaABAg.AVgRDFOjjPUAVo0l78AiH0","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgyIC1-omp9k8IVJSeh4AaABAg.AVaqbjQOGiSAVb-eAN2TPG","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwoqrQfSsWT0rHHKfB4AaABAg.AVak9rxylr-AVb-lO0gvy7","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugzt0EYgdL_cnVCZn8F4AaABAg.AVZ8fr_FmbgAVZV6ArSMw3","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugytv2UbirMNcU-q7BN4AaABAg.AVZ2vkrG-MqAVZUbf7rp5_","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugz1wdSecgLYYCEG6DF4AaABAg.AVYa3dJi6q1AVYiyxkjRPQ","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugz1wdSecgLYYCEG6DF4AaABAg.AVYa3dJi6q1AVZ-HpA88cr","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytr_UgwXxrgTbJyGtayrf9x4AaABAg.AVYISpvLIa4AVZVac5PkZd","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugwy58bG9BoBnO11H-R4AaABAg.AVYFPAKgUakAVZSmSVC8DJ","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
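A minimal sketch of how a raw batch response like the one above might be parsed and validated downstream. The field names come from the JSON shown; the sets of allowed values are inferred only from the samples visible here and may be incomplete:

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the real codebook may define additional values).
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference"},
}


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a batch coding response and keep only records whose
    dimension values are all in the allowed sets."""
    records = json.loads(raw)
    return [
        rec
        for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]


# Usage with a one-record batch (hypothetical ID):
raw = (
    '[{"id":"ytr_example","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)
coded = parse_coding_response(raw)
print(len(coded))  # 1
```

Validating against the allowed sets before storing results catches the occasional malformed or out-of-vocabulary value an LLM coder can emit, so only cleanly coded records reach the table view.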