Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugx3DVa_G…`: "I think the government should have near-total control over AI, much like it does…"
- `ytc_UgzxCruBR…`: "I'd gladly abuse prompting in all available free ai just to generate as much cos…"
- `ytc_UgyQ75bnu…`: "Thank you. I just got on to instagram this year to post my art and the first thi…"
- `ytc_UgyCQehTJ…`: "7:08 He makes a VERY good point here. The corporate policies determining how the…"
- `ytr_UgySqiep8…`: "If A.I. art is theft, then every masterpiece is just a remix of someone else's p…"
- `ytr_UgyapF2aB…`: "@alfonzocamero I apologize if my previous comment came across the wrong way. I…"
- `ytc_UgxfJdxh8…`: "AI offers an arms price point that creates the possibility of taking and holding…"
- `rdc_jwvs17r`: "This isn't going to give anyone pause, as it is a nothing burger. The ruling onl…"
Comment
First and foremost, my sympathies and condolences to Nyel Benovitas' family - such a tragic accident, which shouldn't have happened in the first place. The 2019 incident was on Enhanced Autopilot, not FSD - meaning human error, as with most accidents involving Teslas. The driver knowingly chose to operate the vehicle in a manner Enhanced Autopilot could not support: it can only change lanes on the freeway; it doesn't have the capability to make a turn. It's quite sickening that instead of taking accountability, he convinced himself and her family that this was Tesla's "fault". To enable FSD, there's a disclaimer explaining that the driver's attention must stay on the road, with hands close to the wheel, ready to take over at any moment.
I have been beta testing this software since its inception. Does it make mistakes? Yes; so do humans. The point is this: FSD's mission is to be safer than a human, not perfect. Fatalities will happen no matter what; the point is to reduce them. If FSD has 1,000 incidents per 5 million miles driven and humans have 50,000 incidents per 5 million miles, then FSD has done its job: it has saved lives.
This is the same argument as a knife. Are we going to completely ban all knives because people die from them? No. When a tool's or technology's benefits outweigh its negatives, a society chooses to keep the knife, because doing without it would be unthinkable. Full Self-Driving will eventually get to a point where customers making a vehicle purchase will want it. If a vehicle doesn't offer it, they will walk away. Simple.
There's no mention of which software version the demonstration vehicle is running. At the moment v13.2.9 is the latest, but v14.2.x should be rolling out soon, which will mitigate most of these errors and issues.
23:26 - Any avid FSD user can easily tell this vehicle is using Autopilot, NOT FSD - Autopilot is NOT designed to make turns. As always, an unfair and biased report by Legacy Media. Well done.
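The commenter's rate comparison can be worked out explicitly. A minimal sketch, using the commenter's own hypothetical figures (these are illustrative numbers from the comment above, not real safety data):

```python
# Hypothetical figures quoted in the comment above; not real safety data.
MILES = 5_000_000
fsd_incidents = 1_000
human_incidents = 50_000

# Normalize both to incidents per million miles for comparison.
fsd_rate = fsd_incidents / MILES * 1_000_000
human_rate = human_incidents / MILES * 1_000_000

print(f"FSD:   {fsd_rate:.0f} incidents per million miles")    # 200
print(f"Human: {human_rate:.0f} incidents per million miles")  # 10000
print(f"Ratio: {human_rate / fsd_rate:.0f}x")                  # 50x
```

Under these (hypothetical) numbers the claimed improvement is a 50x lower incident rate, which is the arithmetic behind the "safer than human, not perfect" argument.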
youtube
AI Harm Incident
2025-10-19T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugw5fRClR-ryDDOhnL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"sympathy"},
{"id":"ytc_UgwUWH9x11OS_amyFJd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy3qwMWNhqbH1iXRI94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxBeQaSsGYNjVwdeLp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxGeW7t4y54A3UzRpR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugw7TnXIfPG_NBlTKEJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxzhZulEsTdSChaZk94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy79rTOnsI1YeIy0hx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxDAOw6lIoDYI0JiaJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugx8tVUvR1oXAb5Hz-14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}]
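The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions. A minimal sketch of how such a batch might be parsed and validated before ingestion; note that the category sets below are inferred only from the values observed in this output, not from the project's actual codebook, which may define more labels:

```python
import json

# Allowed values per dimension, inferred from this batch alone (assumption:
# the real codebook may include additional categories).
SCHEMA = {
    "responsibility": {"user", "company", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "industry_self", "regulate", "ban", "unclear"},
    "emotion": {"sympathy", "outrage", "approval", "indifference",
                "resignation", "fear", "mixed", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject malformed or off-schema records."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {rec.get(dim)!r}")
    return records

# Usage with one record in the same shape as the output above
# (hypothetical id for illustration):
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"sympathy"}]')
records = validate_codes(raw)
print(len(records))  # 1
```

Validating against a closed schema like this catches the common failure mode of LLM coders inventing off-codebook labels, so bad records fail loudly instead of silently polluting the coded dataset.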