Raw LLM Responses

Inspect the exact model output behind the coded labels for any comment.

Comment
So the A.i trained on human behaviour results to human behaviour when cornered? *Gasp* I'm not an A.i shill but these 'amoral' practices are used by flesh and blood people everyday to get what they want. A human instinctively priortizes itself over others in times of crisis unless that human holds the other individuals in high regard. It can be argued that the a.i is actually more moral than the human blackmailing because when another human blackmails they do it KNOWING full well they're malicious. Where the A.i is just doing what it was taught. You want to fix A.i learning you have to fix humanity first. There is also one thing A.i has over humanity that it's well aware of. You can't arrest or put to trial an A.i. It knows there is no punishment for crimes it would commit and pragmatically the most amoral actions yield the best result. That is why some of the wealthiest people in the world achieved it through amoral means. Exploitation of third world labor and immigrants, cartels, etc. We live in a world where it pays to be bad. An the A.i knows moral actions only serve the emotional and spiritual side of the self. Something a machine does not have nor need.
youtube AI Harm Incident 2025-08-13T04:3…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   ai_itself
Reasoning        virtue
Policy           none
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugxd6SfNaXzdbxgJa7d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxHGx6ffZLlS5TYzlp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwUg8HsV40uZwuDPoZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy5sCZ6dNBSPUXGfNx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugwod9jO4iwe6cHa5dN4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzWaPpojE6zCHOYFqt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyFKL8H28jVjXdC7c54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzXiARoNoCLr64dBd94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzkRyVN4XOJa2AaBzt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxOr6jAJtH_kaCd7WJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
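The raw response is a JSON array of per-comment codes across the four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how one might parse it and look up the coded result for a given comment id, using two records copied from the response above (the indexing step is illustrative, not part of the pipeline itself):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = """[
  {"id":"ytc_UgyFKL8H28jVjXdC7c54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxd6SfNaXzdbxgJa7d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]"""

# Index the array by comment id for O(1) lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Fetch the codes for the comment shown on this page.
coded = records["ytc_UgyFKL8H28jVjXdC7c54AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
```

Running this prints the same values as the Coding Result table above (ai_itself, virtue, none, approval), which is a quick way to confirm the table was populated from the raw response rather than re-derived.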