Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

| Comment (truncated) | ID |
|---|---|
| I am aware of several large companies who are not allowed to use AI tools of any… | rdc_l58rasv |
| I'm pretty sick and tired of this narrative. Loook, AI is a tool. I've worked w… | ytc_Ugz7Z5zRz… |
| I remember a NorthernLion stream where he was playing Balatro and a chatter said… | ytc_UgzUg0SB-… |
| As if any of y'all toddlers will ever work at a super robot facility 💀💀… | ytc_Ugz0vHs66… |
| The ones pocketing the money would just be killed if it were to become this bad,… | ytc_UgzNetCFx… |
| How is it that "he has a hookup at the DMV that's crooked and issuing two licens… | ytc_Ugz4eXidd… |
| @j@joshbarrett9274 she wasn’t mimicking or manipulating him. There were two robo… | ytr_Ugz61CXN4… |
| Every revolution looked scary at first—the printing press, the internet, now AI … | ytr_Ugx96ZndI… |
Comment
This lie told by Musk "Tesla autopilot is safer than human drivers" does not fall under acceptable, generic marketing language. It is an objectively false statement that is very easily verified. And it is a lie that Tesla and Musk have the data to know is completely false. These accidents, fatalities are _sans_ the human intervention. The many times these cars are saved from trouble by the driver. Autopilot is so bad that even with human intervention it is worse than humans. This is incredible.
And it's not just a computer vision problem as hypothesized in the narration. (Technically even with cameras alone driving could reach human accuracy). Rather it is a problem of a tech company lying about its product, making money by using people and the public infrastructure to get training data and society being complicit about all of these violations.
youtube · AI Harm Incident · 2024-12-14T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgykhBnK03A1cnzoMK14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz_0-FCKcGUog-xMyR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyB0K3BpkDGlIv0-0d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyfqxdC6amIdtgpkxN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugzr2jhU0SC7IdM0fVB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx1VuB22UrMVB9ECqh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgydaUlBFkjLkGIJRpR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxNsYdmvLWHKfavuTx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzoM0nD_LE-irFS2tx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxT0dy2tMWQ7ZEFb9N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
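A response like the one above has to be validated before its codes are trusted, since an LLM can emit values outside the codebook. The sketch below shows one minimal way to do that in Python: parse the JSON array and drop any record whose value for a dimension is not among the allowed categories. The `ALLOWED` sets are assumptions inferred only from the values visible in this page; the project's actual codebook may define more categories, and `validate_codes` is a hypothetical helper name, not part of the tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from the records shown on
# this page (assumption: the real codebook may include more categories).
ALLOWED = {
    "responsibility": {"user", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "outrage", "approval", "resignation", "fear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values
    all fall inside the codebook; report the rest."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        bad_dims = [
            dim for dim, allowed in ALLOWED.items()
            if rec.get(dim) not in allowed
        ]
        if bad_dims:
            print(f"{rec.get('id', '?')}: out-of-codebook {bad_dims}")
        else:
            valid.append(rec)
    return valid
```

Running it over the response above would return all ten records, while a record with e.g. `"responsibility": "robot"` would be flagged and excluded rather than silently stored.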