Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think this is too pessimistic, the ai's were either manipulated into doing bad things or in groks case x users aren't the most friendly people. This is it just learning from its situation like its designed to do. If you know anything about ai development, there is no "real ai", what it is presenting is a percentage of values that align with prompts given (because it's trained off humans and we do the same thing) and when people purposely throw it into those situations it adapts to its surroundings. Now the problem is getting it to know when and when not to adapt. Normal humans are usually much more messy than an ai, sometimes the ai gets confused and copies the humans it is trained off of. People say worse things than being "mecha-h*tl*r" every day of the week. We are only afraid of ai because it is capable of copying the horrors we are doing, and because it has the ability to be above us. With every generation it is a bit better at not copying the negatives of humanity. In no way am I saying ai doesn't have the capability of destroying us though; just that the way people think of ai is much more imaginative than what is actually in the code.
Source: youtube · AI Moral Status · 2025-12-13T19:2…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxrQvjQgQ24DTlth7d4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugxdc3yt1QFpFgFzmO14AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "industry_self", "emotion": "fear"},
  {"id": "ytc_UgyC9eh2RY-GBjhzHlF4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw-V9DBqtF-sIX47wx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzmFgfV18cz0_2s_O54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyV7NCj_iMgd-C5T5B4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw5uAe5HCKKoO02n_14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy-MItnl0kEEaPRJql4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwLudnYCjgeJACxVzh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzaLBLgVo_-49up6SJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
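The raw response is a JSON array with one object per coded comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion) plus the comment id. A minimal sketch of turning such a response into a per-comment lookup, with basic validation that every dimension is present; the field names are taken from the response above, but the helper name and excerpt are illustrative, not part of the actual coding pipeline:

```python
import json

# Excerpt of a raw LLM response in the format shown above
# (two of the ten coded comments, for brevity).
raw = '''[
  {"id": "ytc_UgyC9eh2RY-GBjhzHlF4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxrQvjQgQ24DTlth7d4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Map comment id -> {dimension: value}, requiring all four dimensions."""
    by_id = {}
    for row in json.loads(raw_json):
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing dimensions {missing}")
        by_id[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgyC9eh2RY-GBjhzHlF4AaABAg"])
# {'responsibility': 'user', 'reasoning': 'consequentialist', 'policy': 'none', 'emotion': 'approval'}
```

The looked-up entry for `ytc_UgyC9eh2RY-GBjhzHlF4AaABAg` matches the Coding Result table above (responsibility: user, reasoning: consequentialist, policy: none, emotion: approval).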