Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgxWhLUA-…`: "AI solves longevity. Humans live for thousands of years. AI can not kill humans …"
- `ytc_Ugyf-CbDp…`: "Meet Ava Credit Building (Reports to ALL 3 Bureaus): https://thecreditexpert.ti…"
- `ytc_Ugw_oN6n5…`: "Just plug in a bunch more specifics into the prompt and you could probably get s…"
- `ytr_UgxTsaBeI…`: "@goldendaygecko7435 there's not copyright on drawings that you make at the mome…"
- `ytr_UgzplNDEh…`: "Thank you for your comment! In the video, Sophia was actually discussing the mea…"
- `ytc_UgwcY16lB…`: "The police must have utter contempt for the people that they serve. No rational …"
- `ytc_Ugy4Nmh9e…`: "AI Art are actually very easy to spot, without getting into the quality of the i…"
- `ytc_UgxAlaovd…`: "I wonder what an automated truck will do if a police car attempts to pull it ove…"
Comment

> Tesla’s Full Self-Driving (FSD) is safer than the average human driver on a per-mile basis, according to Tesla’s internal data and independent analyses of crash rates. As of Q2 2025, FSD achieves 1 crash per 7.63 million miles driven, compared to the U.S. national average of 1 crash per 670,000 miles—a ~11x safety improvement. For fatalities, FSD’s rate is ~1 per 10–15 million miles, versus the U.S. average of 1 per 94 million miles (still ~6–9x safer).

- Source: youtube
- Topic: AI Harm Incident
- Timestamp: 2025-10-22T14:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgwvPLhlRk0qSqQjXrx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuAZSgaG7ls2Mw37Z4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxcURLOJPFcbmfwhCp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgwxewlIiwb4oT14LY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyejCoE2dQxEafIaJB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw6EC-bQnwDazllkEp4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuNtJaaFEwatvsQ5x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyux2RlLLKNjE0v8ON4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxKm5qAFE2OiEgF0QR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxSzEh_OTKgKQmy2fZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
```
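The raw response is a plain JSON array, one object per coded comment, so looking up a coding by comment ID is a simple parse-and-index step. The sketch below shows one way to do that in Python, with a validation pass over the four dimensions. Note the allowed value sets are only those observed in this dump (the actual codebook may define more), and the payload here is truncated to the first two entries for brevity.

```python
import json

# First two entries of the raw model output above (truncated for brevity).
raw = (
    '[{"id":"ytc_UgwvPLhlRk0qSqQjXrx4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"},'
    '{"id":"ytc_UgzuAZSgaG7ls2Mw37Z4AaABAg","responsibility":"user",'
    '"reasoning":"deontological","policy":"none","emotion":"outrage"}]'
)

# Value sets observed in this dump; assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"company", "user", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def index_codings(payload: str) -> dict:
    """Parse the model's JSON array and index codings by comment ID,
    rejecting any entry whose dimension value is outside ALLOWED."""
    by_id = {}
    for entry in json.loads(payload):
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{entry['id']}: unexpected {dim}={entry.get(dim)!r}")
        by_id[entry["id"]] = {k: v for k, v in entry.items() if k != "id"}
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgwvPLhlRk0qSqQjXrx4AaABAg"]["emotion"])  # fear
```

This mirrors the "look up by comment ID" behavior of the inspector: once indexed, each ID maps straight to its four coded dimensions.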