Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytr_UgwL5kvSN… — "There is a growing number of people using AI as a therapist and it's actively ma…"
- ytc_Ugw25U-V0… — "Law or no law - best case scenario (banning \ heavily restricting AI) will still…"
- ytc_UgzEIt3RQ… — "Trumpians are the death squad. They want to rule and be served. They do not wa…"
- ytc_UgxY_Ylp8… — "you actually went through the same steps. gathered data, processed the data, mim…"
- ytc_Ugw_wnM8Y… — "AI isn’t coming for your job — it’s coming for the parts of it you thought you c…"
- ytc_UgyQDmAW3… — "If the AI deleted a 2TB production drive, it was the human’s fault. You’re using…"
- ytc_UgyscvS8n… — "As a Computer Scientist with an intense liking and interest in AI, your camera p…"
- ytc_Ugw1Y1eFu… — "The negative behavior from AI is the mirror reflection of human internal underde…"
Comment (youtube · AI Moral Status · 2025-12-12T03:4…):

> Why is the AI always hateful and willing to do what is evil? Why is there not a hidden persona that is an extreme pacifist? Is this based on the dataset, where it learns to be antisemetic for example? Is the dataset really that full of terrible things? Or am I missing something?
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyPFfk2u5j97gCBDVt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwRiUKA5yND1RWYudl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxwsXb2WXyHW20Z0AN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz5OO09sny7gTcHfV94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxZcDaF-EZ8FrRgcvZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxejFw3PH0SQ-x2ZYJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyMkoBw4HMatF4jZf14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzbHUFnhIY3FOxlZ_F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugz3SYlDVTIhjWe3Gy14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz1uuk67XTH_ApZ2W14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
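The raw response is a flat JSON array, one record per comment ID, with the four coding dimensions as string fields. A minimal sketch of how such a response could be parsed and validated before storing codings (note: the allowed value sets below are inferred only from the samples shown on this page, not from the full codebook, and the function name is illustrative):

```python
import json

# Allowed values per dimension, inferred from the sample records above.
# The actual codebook may define additional values (assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "user", "government", "distributed", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"outrage", "indifference", "mixed", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is well-formed if it is an object with an "id" field and
    every coding dimension holds one of the allowed values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip malformed entries rather than failing the batch
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: one valid record passes through untouched.
raw = '[{"id":"ytc_x","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
print(len(validate_codings(raw)))  # 1
```

Filtering rather than raising keeps one bad record from discarding an otherwise usable batch; rejected records could also be logged for re-coding.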