Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Nobody is born with skill, go ask a baby if they can magically draw the Mona Lis…" (ytc_UgzE_o8EF…)
- "I was so disappointed when i saw that ai miku because it looked so good, but see…" (ytc_UgzH9UT14…)
- "if ai bros are so proud of using AI and its slop and how it's apparently better …" (ytc_UgxDv7aQz…)
- "Well if my current job in tech gets taken over, I think I'll be able to fall bac…" (ytc_Ugx6gGG7F…)
- "As an artist and an AI researcher, I completely agree with you about the problem…" (ytc_UgxBw17I_…)
- "Maybe from now on artist have to have the right to disclose their unconsent of A…" (ytc_Ugy16jSgU…)
- "The YT algorithm is pure garbage, more so for politics, because I am middle, and…" (ytc_UgydE9NMf…)
- "AI surely can't take the place of headless Godi Media. They're irreplaceable in …" (ytc_UgwgQ9i79…)
Comment
@JonAbrams-xt4tq
"Yes, but even so, the vehicle should break, slow down, swerve, etc. Not continue to plough on into an object it doesn't recognise."
It can only do those things if and only if it knows that there's an object it can't recognize. The main problem is, or rather, _the_ problem, is that the AI system doesn't know that it doesn't know that there's an object it can't recognize. It's basically the most dangerous epistemic position: It doesn't know that it doesn't know. When the same object is correctly recognized by one camera as a car/human/obstacle/etc, but is not recognized as such by another camera (or worse, not recognized at all, like shown in the videos above), the AI system is supposed to be programmed to take safety measure by slowing down the car, swerve, etc, not keep speeding and crash into the obstacle. Defensive driving should be programmed into the AI system.
youtube · AI Harm Incident · 2024-12-18T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgzSj3kRkWtIs3S-qJx4AaABAg.AC4eU9b6QxsAC5z43Yz08N","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_Ugzjs8ElzNeScbyOq514AaABAg.AC4UQc8N4vSACBfzYP9gq4","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgyemoJI3SVCqZA97VB4AaABAg.AC4U9xMb5FTAC4g9QTyLbq","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyKyJBa8rx4fvELK3t4AaABAg.AC4SoXQhu82AC4gqUM1GIF","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgxAfPJGvEe1cjk407l4AaABAg.AC4Mj4tqlonAC4hUI3GOM1","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgytfO17SYHCRxplFnR4AaABAg.AC4KRYUT1tHAC6njgm2MvX","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgytfO17SYHCRxplFnR4AaABAg.AC4KRYUT1tHAC7LEJMgEsE","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugzh9VqdZi0cDwBweZ54AaABAg.AC49RC-HW_9AC553t34pUl","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugz-Bf5u1_ALN1y87Vt4AaABAg.AC45P2RfYOVAC4xCraJNCQ","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgzGKMg2CNfqNaXYQ2l4AaABAg.AC43S2kHDs0AC536eNj0X9","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"resignation"}
]
```
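As a minimal sketch of how a raw batch response like the one above can be turned into the per-comment coding shown in the result table: parse the JSON array, check each record's dimension values against the labels seen in this page, and index the records by comment id for lookup. The `ALLOWED` sets below are inferred only from the values visible in this section (the full codebooks may contain more labels), and `parse_batch` is a hypothetical helper, not part of the tool itself.

```python
import json

# Allowed labels per coding dimension, inferred from the examples above
# (assumption: the real codebooks may define additional labels).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate"},
    "emotion": {"approval", "fear", "indifference", "outrage", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codings by comment id."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Reject any record whose value falls outside the known label set.
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with a one-record batch (hypothetical id "ytr_example"):
raw = ('[{"id":"ytr_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
coded = parse_batch(raw)
print(coded["ytr_example"]["emotion"])  # fear
```

Validating against a fixed label set before indexing means a malformed or hallucinated label fails loudly at ingest time rather than silently skewing downstream counts.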