Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "My hunch is that none of the AIs are conscious yet, but I don't work in tech and…" — ytr_UgzAoHQOm…
- "AI is perfect for the busy work and the mass filing but for the foreseeable futu…" — ytc_UgzO40E0n…
- "AI photos and video needs a "watermark" emblazened across it. If produced withou…" — ytc_Ugw6TVslZ…
- "Tax all companies that replace people with AI the equivalent as though there is …" — ytc_UgzwHxOmN…
- "I've always thought than a conscious AI would achieve Transcendance by default. …" — rdc_ichrn3e
- "Read this carefully. As a high-functioning autodidactic polymath, being the mos…" — ytc_UgwJMg1KV…
- "I used to support ai art, until I found out what it's doing to artists... Ai is …" — ytc_Ugxfld5t-…
- "Elon is one of the best con artists in the world who has people, so many believi…" — ytr_UgyNWCE9q…
Comment
Such misleading storytelling. While the claims against Tesla’s marketing practices are absolutely justified, most of the story focused on a 2019 crash that involved a system called “AutoPilot” and NOT a different (more expensive) system that Tesla calls FSD (Full Self Driving) and is the system that was just made available in Australia. Autopilot has been available on Australian roads for years. FSD is the new thing.
Putting the potentially misleading naming aside, Autopilot is a system that keeps within a lane on the highway. That’s it. It’s unaware of road signs, traffic signals, intersections or anything similar. It cannot “see” an intersection, brake or anything similar.
Focusing on an accident that involved Autopilot being enabled on a country road and running an intersection misses what the focus of this story should’ve been, which is the fact that FSD is new to Australia, is still prone to making mistakes, and is very likely to have uninformed drivers blindly trusting it without understanding the risks.
Source: youtube · Topic: AI Harm Incident · Posted: 2025-10-20T02:1… · ♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxVSRevNfpEsXy-Kah4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgygwhEpahYKg8OZLLB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyCtcsEgGjHLJT5PpF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxpfcXZ1cVY8T8WN7V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxaEv9AjrNXYNE10NZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyj5zO0aDUmpAxCdEZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxZVLm5WJguL8LuB9x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxiM8DVfMbWsCvF79h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyvFkngF8l8gB30U4p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw6gOyk_7dZLjcExOF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
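The lookup-by-comment-ID flow can be sketched in a few lines: parse the raw LLM response and index the records by their `id` field. This is a minimal sketch assuming only the JSON shape shown above; `lookup_by_comment_id` is a hypothetical helper, not part of the actual tool, and the raw string here is trimmed to two of the records for brevity.

```python
import json

# Raw model output in the shape shown above (trimmed to two records).
raw_response = """
[
  {"id": "ytc_UgxpfcXZ1cVY8T8WN7V4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw6gOyk_7dZLjcExOF4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

def lookup_by_comment_id(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coded record for one comment,
    or None if the comment ID is not present in this batch."""
    records = json.loads(raw)
    # Index by id so repeated lookups over the same batch are O(1).
    by_id = {rec["id"]: rec for rec in records}
    return by_id.get(comment_id)

record = lookup_by_comment_id(raw_response, "ytc_UgxpfcXZ1cVY8T8WN7V4AaABAg")
print(record["emotion"])  # → indifference
```

Indexing into a dict keyed by `id` is the natural fit here because each batch codes many comments at once, while the viewer fetches one record at a time.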