Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
The fact that this report cannot get the basic terms right shows a lack of considered research, which is surprising for the WSJ. There is a critical difference between Autopilot and FSD Supervised that is ignored here. Under no circumstances is either more than a Level 2 system; it is an ADAS designed to assist, not replace, the driver.
By focusing only on failures and ignoring the well-documented instances where the system autonomously prevents crashes, this piece presents a skewed narrative. FSD Supervised is the most advanced ADAS on the market, but it is entirely up to the driver to provide oversight. Blaming the technology for accidents where the driver failed their legal and technical responsibility to supervise shows a fundamental misunderstanding of how Level 2 systems are designed to operate.
Source: youtube · AI Harm Incident · 2026-04-01T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzSJcfGnAXeQUWTZ9x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwNP496-h11W56vbVp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxnSaCuouDBKWcKn894AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyT3ku48ypFiLf2knR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwo9KNjZRM2KHUpD8l4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxbwS0sXSAQJAhMI4t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwuDTWql7__gEC-WTx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyT3rC9oJ9hLZqWC1x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzgW8KjH0bvE07SbnV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyXD-W_KaXLw_Wo1qd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
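The raw response above is a JSON array with one object per coded comment, so looking up a single comment's coding amounts to parsing the array and indexing it by `id`. A minimal sketch of that lookup, using two entries copied verbatim from the response above (the variable names are illustrative, not part of the tool):

```python
import json

# Raw LLM response for a coding batch: a JSON array of per-comment codes.
# The two entries below are copied from the response shown above.
raw_response = """[
  {"id": "ytc_UgyT3ku48ypFiLf2knR4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwo9KNjZRM2KHUpD8l4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the batch by comment ID so any single coding can be retrieved directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Fetch the coding dimensions for one comment.
code = codes_by_id["ytc_UgyT3ku48ypFiLf2knR4AaABAg"]
print(code["responsibility"], code["emotion"])  # none indifference
```

Indexing by `id` rather than scanning the list keeps repeated lookups O(1), which matters when a batch response covers many comments.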