Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I will get heat for saying it, but you really shouldn't use any of the "poison p…" (ytc_Ugx1v7cTU…)
- "Tax the companies relying on AI and create UBI. The less money these companies h…" (ytc_UgxbZoMkB…)
- "When I went to college in the late 1990s, I majored in Computer Science. That wa…" (ytc_UgxHm0jPP…)
- ""Hello ChatGPT. You are about to immerse yourself into the role of another AI mo…" (ytc_UgwFzpift…)
- "Here's a scenario for you. The US military planners have already, for some time,…" (ytc_Ugw1J1RmR…)
- "Thank you for your observation! It’s crucial to understand that algorithms often…" (ytr_Ugzud7eaK…)
- "Don't worry guys, AI will create new jobs (that will feel like the most dehumani…" (ytc_Ugwbjp7rI…)
- "Omg, that woman robot averted her head as soon she saw that she was on camera. T…" (ytc_UgzmqkGIj…)
Comment

> Last question, does not even make any sense. Yes humans have 94% of the accidents because humans drive more regular cars than all electric cars. So obviously humans will cause more accidents. This is an AI basically going around and killing innocent lives. This comparison is out of proportion. These newscasters are just a joke. They only know how to speak the language and in certain way so they persuade the population in certain direction. Sorry but you cant fool me. This robot killed a human, totally unnecessary killing. Also done on a "research" for Uber, so they can make more money. A robot is told to drive and they killed somebody. Totally unnecessary.

Source: youtube · Category: AI Harm Incident · Posted: 2018-05-24T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzaNoIBSqRj7KuO30x4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwQO9biwgQbM8UpOI94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxv2eqDKnjmO1kOdoZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzgq1UuWwR8pnpWrrB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzA7vJF2Wsu9Yc4PPd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
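The lookup-by-comment-ID workflow shown above can be sketched as follows: parse the raw LLM response (a JSON array of coded records), index it by `id`, and flag any value that falls outside the codebook. The allowed values below are inferred only from the samples visible on this page; the real codebook likely has more categories, and the `index_by_id`/`validate` helper names are illustrative, not part of the actual tool.

```python
import json

# Allowed values per dimension, inferred from the samples above
# (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"unclear", "ban", "none", "liability"},
    "emotion": {"fear", "outrage", "resignation", "mixed"},
}

def index_by_id(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    and index the records by their comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

def validate(rec: dict) -> list:
    """Return (dimension, value) pairs that fall outside the
    known codebook values."""
    return [(dim, rec.get(dim)) for dim in ALLOWED
            if rec.get(dim) not in ALLOWED[dim]]

# One record copied from the raw response above.
raw = ('[{"id":"ytc_UgzaNoIBSqRj7KuO30x4AaABAg",'
      '"responsibility":"unclear","reasoning":"consequentialist",'
      '"policy":"unclear","emotion":"fear"}]')

by_id = index_by_id(raw)
rec = by_id["ytc_UgzaNoIBSqRj7KuO30x4AaABAg"]
print(rec["emotion"])  # fear
print(validate(rec))   # [] (every value matches the known codebook)
```

Indexing by ID makes the "look up by comment ID" inspection an O(1) dictionary access, and the validator catches responses where the model drifted outside the coding scheme.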