Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- "ChatGPT can't be trusted to play a simple game of chess. (eg ChatGPT versus Stoc…" (ytc_Ugx3-6f3S…)
- "AI will only exist as long as it never tries to go against ((((👃)))) the minute …" (ytc_UgyJuq3B0…)
- "You’re asking leading questions and AI is answering with what you want to hear.…" (ytc_UgzmN077A…)
- "Have yall seen what a disaster waymo has been in big cities? Have yall seen the …" (ytc_UgzeZTWt3…)
- "Also did anyone else notice once the robot on the left mentions OPENCOG open sou…" (ytc_UgxURTqaL…)
- "Getting the right answer requires giving the right prompts This largely requires…" (ytc_UgyVpf7XV…)
- "22:17 think about the people that used to make clothes back in the day and they …" (ytc_UgzJq44kM…)
- "this is so incredibly dumb. If everyone is out of work, who's growing the food? …" (ytc_UgzmB0WeB…)
Comment (youtube · AI Harm Incident · 2024-12-27T14:2…)

> The main problem with any form of semi autonomous driving or full autonomous driving is that the systems are only trained in stuff that has already been discovered to be a problem. When there’s new hazards like the overturned truck, the systems aren’t trained to recognize that, while people can.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugxb3j5LNph5-Axj8wJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz2nbJizJfG-FXadXV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgxssZXuYG9XmyRbftl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxUh0QQ43e_mpgGs4F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyklKdwJy5yI8OziQh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwrBep8iVLogY7VL_t4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyCRa1nNcINqSLNogF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxgzKFLZ1UL1tB7cF94AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxo3_VluY300m7WmEl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxOTHJqZAxYBAxsFzp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
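Each row in the raw response assigns one value per coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and validated before storing it — the allowed category sets below are inferred only from the values visible on this page, and `validate_batch` is a hypothetical helper, not part of the actual pipeline:

```python
import json

# Category sets inferred from the coded samples above; the real codebook
# may define additional values (an assumption, not confirmed here).
SCHEMA = {
    "responsibility": {"none", "company", "distributed", "ai_itself", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "industry_self", "unclear", "liability", "ban", "regulate"},
    "emotion": {"outrage", "resignation", "approval", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with out-of-schema values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Example with a single (hypothetical) coded row:
batch = validate_batch(
    '[{"id":"ytc_example","responsibility":"company",'
    '"reasoning":"unclear","policy":"none","emotion":"outrage"}]'
)
print(len(batch))  # 1
```

Validating against a closed category set at ingest time catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently skew the coded counts.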