Raw LLM Responses
Inspect the exact model output for any coded comment.
Comments can be looked up by ID, or inspected via the random samples below.
- `ytc_Ugxw6WXbQ…`: "Let's abolish capitalism and go from there. Robots/ai is only insidious because …"
- `ytc_UgwEVWbpc…`: "Like all ai content it may replicate the style but it has no emotion behind it w…"
- `rdc_o8afc9t`: "Sort of yes. I see the potential in it also. But I also feel like optimizing for…"
- `ytc_UgwqZAhuc…`: "Can not wait for our robot overlord to take over. I'm siding with the robot, scr…"
- `ytr_Ugy5jGriD…`: "I’m a 53 year old surgeon. I think surgery and anesthesia will be the last to g…"
- `ytc_UgyvhXlHI…`: "AI's impact on jobs is real, but knowing our brand isn't missing out when AI sug…"
- `ytc_UgxyexKOT…`: "I stopped using the em dash because of how people might think I’m an AI.…"
- `ytr_Ugzn5INNq…`: "But one can't deny, EVER, that AI "things" are just that. These admittedly sophi…"
Comment
Any driver's assist should in every way elevate the safety margins of driving. Tesla loves to compare its features to the "average driver", but I don't feel it should be allowed on the roads, beta tested on unwilling participants, until it's better than ANY driver. If you want to call it autopilot, even moreso Full Self Driving, it shouldn't be allowed to operate on public roads until it's better in every situation than any human possibly could be. Until then, it's not a safety feature at all, it's not guaranteed to elevate safety margins, as it removes a driver which may more safely operate the vehicle from the helm. It's simply a convenient feature to illegally allow texting and other distractions.
youtube · AI Harm Incident · 2022-09-03T15:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
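Each coded comment should only take values drawn from the codebook. A minimal validation sketch follows; the `CODEBOOK` dict is hypothetical, assembled only from the values visible in this section, and the real codebook may define more categories:

```python
# Allowed values per coding dimension, as observed in the sample output in
# this section; the real codebook may include additional categories.
CODEBOOK = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "outrage", "resignation", "approval"},
}

def validate(code: dict) -> list[str]:
    """Return the dimension names whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if code.get(dim) not in allowed]

# The coded row from the table above passes validation.
print(validate({"responsibility": "company", "reasoning": "consequentialist",
                "policy": "regulate", "emotion": "outrage"}))  # → []
```

A non-empty return value flags which dimensions need manual review rather than rejecting the whole row.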
Raw LLM Response
```json
[
  {"id":"ytc_UgzzWedFph1jh4Uq1cp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwKz-C8Z3WmXfdWFOJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwxc7zxL7oFAV0hby94AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyJRGqpyfY6h4GEIxt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzgjJVEUeG5Av-SqcB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzrMD2TI58f5ZcRYhB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxtbgeLbcIfMLmcn6l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxOUWofE_I5FaBvcn54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwmfA7dV-KyXtmnDB94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyGH1zMCDN2pGTiid94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
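Because the model returns one JSON array per batch, looking up a single comment's codes means parsing the array and indexing it by `id`. A minimal sketch, using two rows copied from the response above (the real pipeline's loading code is not shown here):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, as in the batch above.
# Only two rows are reproduced here for brevity.
raw_response = """
[
  {"id":"ytc_UgzzWedFph1jh4Uq1cp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwKz-C8Z3WmXfdWFOJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
"""

codes = json.loads(raw_response)

# Index by comment ID for constant-time lookup of any coded comment.
by_id = {row["id"]: row for row in codes}

row = by_id["ytc_UgwKz-C8Z3WmXfdWFOJ4AaABAg"]
print(row["policy"], row["emotion"])  # → regulate outrage
```

Building the `by_id` dict once avoids re-scanning the array for every lookup when inspecting many comments from the same batch.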