Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The problem isn’t Tesla’s technology. The issue is that there isn’t enough training for the drivers. Too many people have become overly reliant on these advanced systems, forgetting that they’re meant to assist, not replace, human attention. It was obvious that stop sign was coming… the driver honestly wasn’t paying attention. Every stop sign comes with a warning ‼️ and especially that one.
Even though Tesla’s Autopilot and FSD (Full Self-Driving) software have proven capable of handling complex maneuvers, like slowing down and wrapping around corners, drivers still have a responsibility to stay engaged. That means intentionally monitoring the road, keeping hands ready, and making judgment calls at all times, not just when necessary. You’re unintentionally playing Russian roulette if you’re carelessly seeking thrills that could lead to kills.
At the end of the day, everyone behind the wheel is accountable for safety. Elon and Tesla should continue to be fully transparent about the system’s limits and ongoing improvements, so the public understands the balance between innovation and responsibility.
Platform: youtube
Topic: AI Harm Incident
Timestamp: 2025-11-05T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzqybtvfmINEmu54qZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzSrLBlGqwGfgKASuF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgydmFaMtcoyynJjrel4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwRY-BE-sbPhff70rx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgziK_7vEi41F7qefSF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwtLP74WuQDIUCTX3Z4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzUh-asrzLoZx9t7s14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyMkE0cxeVGGZS10Xd4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxYXRnBLRlJuF2ilB14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwPRKErzB2PeOgiY2h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
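A batch response like the one above is only usable downstream if every row carries all four coding dimensions with values from the controlled vocabularies. The sketch below parses such a response and indexes codings by comment ID. It is a minimal illustration, not the tool's actual implementation: the vocabulary sets are inferred from the labels visible in this view, and the `ytc_example` ID is a made-up placeholder.

```python
import json

# Assumed controlled vocabularies for each coding dimension; these value sets
# are inferred from the labels visible above, not from a documented schema.
VOCAB = {
    "responsibility": {"user", "company", "developer", "government",
                       "distributed", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation",
                "indifference", "mixed"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID.

    Raises ValueError if a row is missing a dimension or uses a value
    outside the assumed vocabulary.
    """
    coded = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        for dim, allowed in VOCAB.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {value!r}")
        coded[comment_id] = {dim: row[dim] for dim in VOCAB}
    return coded

# Hypothetical single-row response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"approval"}]')
print(validate_batch(raw)["ytc_example"]["emotion"])  # approval
```

Validating against a fixed vocabulary at parse time catches the common failure mode where the model invents a new label, so bad rows fail loudly instead of silently skewing the coded counts.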