Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “Let’s all be very honest here, most like scenario is that AI is gonna stop using…” (ytc_Ugy8a_O6E…)
- “Well, I don't think it would be that way because its origins nor experiences cou…” (ytc_Ugw1VIl8O…)
- “I hope his family got a good check. Bc that shouldn’t even be a possibility. Als…” (ytc_Ugwd9EiR9…)
- “Dangerous part isn’t the AI—it’s that we’re building gods in our image, powered …” (ytc_UgwiiHhfZ…)
- “I have an entire page on my website dedicated to spreading awareness about algor…” (ytc_UgyXBK-ir…)
- “not only is all this "AI" talk not even real AI, but its also a way for people t…” (ytc_UgyjM7Njt…)
- “Why didn't you ask AI if those who refuse the mark of the beast are killed?…” (ytc_Ugz63sgLq…)
- “1:48 How can he "believe" that you want to get rid of him, and why are his feeli…” (ytc_Ugz2UdACh…)
Comment
According to Perplexity:
Tesla vehicles driving in self-driving (Autopilot/FSD) mode have a significantly lower crash rate per mile than the average human-driven vehicle in the United States....
—In the second quarter of 2025, Tesla reported one crash for every 6.69 million miles driven using Autopilot technology.
—For Teslas not using Autopilot, there was one crash for every 963,000 miles driven.
—The U.S. national average (including all vehicles and drivers) was one crash for every 702,000 miles in 2023, according to NHTSA and FHWA data.
This means Teslas in self-driving mode are about 9–10 times less likely to crash per mile than the U.S. average for human-driven cars.
youtube
AI Harm Incident
2025-10-19T16:1…
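The quoted comment's "9–10 times" claim follows directly from the two per-crash mileage figures it cites. A minimal sanity check of that arithmetic, using only the numbers stated above:

```python
# Figures as quoted in the comment above (attributed to Perplexity).
autopilot_miles_per_crash = 6_690_000   # Tesla with Autopilot, Q2 2025
us_average_miles_per_crash = 702_000    # U.S. national average, 2023

# Ratio of miles driven per crash: higher means fewer crashes per mile.
ratio = autopilot_miles_per_crash / us_average_miles_per_crash
print(round(ratio, 1))  # prints 9.5, consistent with the "9–10 times" claim
```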
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwV5UQXPw2H4R9Uzqt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwKR2UGzjgsCR85Oal4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxnQ3HCy1z-6qClJq14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwXgHJvTtHvLLYiIb54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz-hsjLkvAau0I8DlZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugysg6rwsyzaIoJhnu14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxE0nyxr64eRE3ZhQl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwloWy2D0uUw94JxyB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyQhUzoOQhvb6E4Uv14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx1jTYI9x8D9xXm5Ml4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
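A raw response like the one above can be checked programmatically before its records are stored. The sketch below parses the JSON array and filters out records whose values fall outside the coding scheme; the dimension names come from the Coding Result table and the JSON itself, but the full allowed-value sets are assumptions extended from the values observed here.

```python
import json

# Allowed values per dimension: observed in the raw response above, plus
# assumed extras ("developer", "regulation") — adjust to the real codebook.
ALLOWED = {
    "responsibility": {"none", "user", "distributed", "developer"},
    "reasoning": {"consequentialist", "deontological", "none"},
    "policy": {"none", "liability", "regulation"},
    "emotion": {"approval", "fear", "indifference", "resignation", "outrage"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only in-schema records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this dataset are prefixed "ytc_".
        if not rec.get("id", "").startswith("ytc_"):
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgwV5UQXPw2H4R9Uzqt4AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"},'
       '{"id":"ytc_bad","responsibility":"alien","reasoning":"none",'
       '"policy":"none","emotion":"fear"}]')
print(len(validate_response(raw)))  # prints 1: the out-of-schema record is dropped
```

Validating before storage keeps a malformed or hallucinated LLM response from silently corrupting the coded dataset.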