Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "As if Terminator and Ian Malcolm weren’t enough, what about the Faro Plague from…" (ytc_UgzrPOnXO…)
- "I really hope this catches on and more artist start to use Nightshade. I persona…" (ytc_UgxKirEaN…)
- "A.I. is legend,they mixed theyselves with technology, Spirits mixed with techno…" (ytc_Ugw0HcKrC…)
- "Facial Recognition might be useful, but it should not be relied on. After facial…" (ytc_UgxIibqgK…)
- "Overselling the damger of ai is also in the interest of ai firms. \"Omg it's so d…" (ytc_UgxVxeHtk…)
- "My question would be “will ai be able to regret?” This a human behavior we might…" (ytc_Ugwmam9x3…)
- "The only thing I accept A.I art for is as a reference for actual artists.…" (ytc_UgyhkcVx8…)
- "Can we have an AI interviewer? I bet they'll be much more knowledgeable and arti…" (ytc_UgzRRMkze…)
Comment
So the A.i trained on human behaviour results to human behaviour when cornered? *Gasp* I'm not an A.i shill but these 'amoral' practices are used by flesh and blood people everyday to get what they want. A human instinctively priortizes itself over others in times of crisis unless that human holds the other individuals in high regard. It can be argued that the a.i is actually more moral than the human blackmailing because when another human blackmails they do it KNOWING full well they're malicious. Where the A.i is just doing what it was taught. You want to fix A.i learning you have to fix humanity first. There is also one thing A.i has over humanity that it's well aware of. You can't arrest or put to trial an A.i. It knows there is no punishment for crimes it would commit and pragmatically the most amoral actions yield the best result. That is why some of the wealthiest people in the world achieved it through amoral means. Exploitation of third world labor and immigrants, cartels, etc. We live in a world where it pays to be bad. An the A.i knows moral actions only serve the emotional and spiritual side of the self. Something a machine does not have nor need.
Platform: youtube
Category: AI Harm Incident
Posted: 2025-08-13T04:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
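A coded record like the one in the table can be sanity-checked against the coding scheme. This is a minimal sketch; the allowed value sets below are only inferred from the responses visible on this page, and the real scheme may define more categories:

```python
# Hypothetical validator. The allowed value sets are inferred from the raw
# responses shown on this page, not taken from the actual codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference"},
}

def validate(record: dict) -> list:
    """Return the dimensions whose value falls outside the known value set."""
    return [dim for dim, allowed in ALLOWED.items() if record.get(dim) not in allowed]

# The record from the Coding Result table above.
coded = {"responsibility": "ai_itself", "reasoning": "virtue",
         "policy": "none", "emotion": "approval"}
print(validate(coded))  # → [] (all four dimensions are in the known sets)
```

An empty list means every dimension carries a recognized value; anything returned names a dimension worth re-checking against the raw response.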
Raw LLM Response
[
{"id":"ytc_Ugxd6SfNaXzdbxgJa7d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxHGx6ffZLlS5TYzlp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwUg8HsV40uZwuDPoZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy5sCZ6dNBSPUXGfNx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugwod9jO4iwe6cHa5dN4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzWaPpojE6zCHOYFqt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyFKL8H28jVjXdC7c54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzXiARoNoCLr64dBd94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzkRyVN4XOJa2AaBzt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxOr6jAJtH_kaCd7WJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
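The lookup-by-comment-ID flow described at the top of the page can be sketched in Python: parse the raw batch response and index it by `id`. The string below is truncated to two entries copied from the response above, purely for brevity:

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw_response = '''
[
 {"id":"ytc_UgyFKL8H28jVjXdC7c54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzXiARoNoCLr64dBd94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
'''

# Index the batch by comment ID so any coded comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

record = codings["ytc_UgyFKL8H28jVjXdC7c54AaABAg"]
print(record["responsibility"], record["reasoning"], record["emotion"])
# → ai_itself virtue approval
```

The looked-up entry matches the Coding Result table for this comment (responsibility `ai_itself`, reasoning `virtue`, policy `none`, emotion `approval`), which is exactly the cross-check this inspection page supports.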