Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
@KO-sx9uyall you have to do is confuse AI profiling by doing the exact opposite…
ytr_UgzFMEMkP…
It is reassuring to know that that the same people (government) who virtually el…
ytc_Ugxve1oTs…
So far, we are actively developing AI's that do lie to us. We call them safety …
ytc_UgzYLQ-ip…
Chatgpt could not have made that up due to its programming, what most likely hap…
ytc_UgwIpJh2R…
I think I know what to use it for, my girl could use tips on how to stay silent…
ytc_UgwPAfu-0…
@MrGrantGregory I’m just amazed how good the robot is moving like out of a movie…
ytr_UgyCmdn5q…
Anyone else still watching these in 2052? Pretty interresting how they viewed th…
ytc_UgxaiqbzS…
Don't make a small error in a system to a big issue.
AI is not a danger but a h…
ytc_Ugym2JW-c…
Comment
To me, the fundamental problem is the terminology. "Full Self Driving" really means level 5, where a steering wheel and manual controls are unnecessary. Tesla is barely level 3.
Autopilot is a funny one. While the conventional view is that it can completely fly a plane on its own, that is not the case. Autopilot must be supervised by a human, just as Tesla's autopilot must be supervised. Unfortunately for Tesla, the common assumption in this case is what is relevant.
Given that the driver had been warned by Tesla's system and repeatedly chose to ignore it, a lot of the blame falls on the driver in spite of Tesla's shortcomings. Regardless of the manual, the driver knew he was supposed to pay attention and chose not to. That's on him.
youtube
AI Harm Incident
2025-08-16T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
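A coded record like the one above can be sanity-checked against the value sets each dimension is expected to take. A minimal sketch, assuming the allowed values are exactly those that appear in this dump (they may not be the tool's full schema):

```python
# Hypothetical validator; ALLOWED is inferred from values seen in this dump,
# not from an authoritative codebook.
ALLOWED = {
    "responsibility": {"company", "government", "user", "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"indifference", "fear", "resignation", "mixed", "outrage"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

record = {"responsibility": "company", "reasoning": "deontological",
          "policy": "regulate", "emotion": "indifference"}
print(validate(record))  # []
```

Running this on every record before accepting a coding batch catches both typos in values and missing dimensions in one pass.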
Raw LLM Response
[
{"id":"ytc_UgyTp4bS-FxEYBWa_2R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugy8lbz2-ZDkN6IZbG14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzPvbGYo29-rcR8b1p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyR5B9KXHgr2nBlMMB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxZgShccTacBLeLF3x4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_Ugw5GL5gHFEFix5GnUR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgxeZVWqD4x5xANjm8p4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyBrGwuGA7xpoPWgBB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzoCWixuWHaQ8HgI9t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw6mxPBMtanB5Is5hJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
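The raw response is a plain JSON array, so looking a record up by comment ID (as the interface above offers) and tallying codes across the batch are both one-liners. A minimal sketch using two records copied from the response above; the full array parses the same way:

```python
import json
from collections import Counter

# Two records copied verbatim from the raw LLM response above.
raw = '''[
{"id":"ytc_UgyTp4bS-FxEYBWa_2R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugy8lbz2-ZDkN6IZbG14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

records = json.loads(raw)
by_id = {r["id"]: r for r in records}                 # lookup by comment ID
tally = Counter(r["responsibility"] for r in records)  # distribution of one dimension

print(by_id["ytc_UgyTp4bS-FxEYBWa_2R4AaABAg"]["policy"])  # regulate
print(tally)  # Counter({'company': 1, 'government': 1})
```

The same `Counter` pattern works for any of the four dimensions, which is how the per-dimension summaries in the table above could be aggregated over a whole batch.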