Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_UgwB1mk13…: yeah, sure, let humans keep deadly jobs and let "AI" invade prestigious fields. …
- ytc_UgwifXJ92…: Ones programmed to destroy humans. You literally programmed a robot to have a bi…
- ytc_Ugy0Yzgbu…: I don't need AI to be kind to me. I only need it to do what I tell it to do, and…
- ytr_Ugww6NW38…: Another thing driverless vehicles will will get rid of - complaining employees. …
- ytc_Ugww7UfYd…: I always say please and thank you to Chatgpt😭 hope they'll spare my life too😂…
- ytc_UgynHaLYx…: So sick of these tech dudes. We don't have to participate in the insanity. They …
- ytc_UgjkdfxV0…: If an AI creates a more powerful AI? why would it do so and wouldn't we have tol…
- ytc_Ugwdkc_qW…: are you serious? so ai arts are human art stolen and ai art is made by people w…
Comment
I suspect the main reason behind the AI's confusion is insufficient dynamic range: the bright lights are swamping out the camera sensors and preventing them from picking up enough details to correctly interpret what is actually before them. As for who's responsible, I'd say it is 50% Tesla for falsely advertising its glorified driver assist as self-driving and 50% the driver for not paying attention when the "self-driving" feature still explicitly requires full attention at all times.
youtube · AI Harm Incident · 2022-09-08T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgziLujQpqV31lq0C114AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
 {"id":"ytc_Ugyipo8Fi7N_pspGiaJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
 {"id":"ytc_Ugx8mio1G7v9yWacwPR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgwWCfeJx3vcws9ylqB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
 {"id":"ytc_UgxUHhtSQWyMz1Ubrcl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugws_O5Tmy1dM13uyW54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_UgxD_G9hZkBIUERTTkp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzD4eYO40qE0xJhehJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgzCdUKow5rs00K2kHF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgzsJkNMMsiZao4LUjV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]
```
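A minimal sketch of how a response like the one above could be parsed and sanity-checked before being stored as a coding result. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown; the allowed-value sets are only those observed in this single response, not the full codebook, and the function name `parse_llm_response` is hypothetical.

```python
import json

# Values observed in the sample response above; the real codebook may allow more.
OBSERVED = {
    "responsibility": {"company", "distributed", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none", "industry_self", "unclear"},
    "emotion": {"indifference", "mixed", "resignation", "disapproval",
                "outrage", "approval", "fear"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: dimension values}."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        dims = {dim: rec[dim] for dim in OBSERVED}
        # Reject any value not seen in the sample codebook above.
        for dim, value in dims.items():
            if value not in OBSERVED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = dims
    return coded
```

With the parsed dictionary in hand, the "look up by comment ID" view reduces to a plain key access, e.g. `coded["ytc_Ugyipo8Fi7N_pspGiaJ4AaABAg"]["emotion"]`, which for the response above yields `"mixed"`.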