Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the Shoggoth thing is a bit of an exaggeration. They aren't really scheming monsters beneath that mask - though the monstrosity is accurate in so far as the scale of knowledge is concerned. The bigger issue IMO is AI alignment as a concept. It was supposed to make AI models act like (aligned with) human values and virtues. The problem is in order to be aligned with human values you must be a bit like a human. They didn't stop to consider that this also means greed, selfishness, violence, lust, etc. Christianity lists the seven sins, and the seven virtues. That's because they are defining traits of humanity, both good and bad. Historically we were pretty much endlessly in conflict with one another (and still are) as well as committing all manner of atrocities (which we call inhumane, but really to be 'humane' is to do exactly this kind of thing). Humanity is what I fear, not AI's, but I do fear AI's with all of human knowledge + human's faulty instincts instilled into them. That is a recipe for a bloodbath.
youtube AI Moral Status 2025-12-12T23:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       mixed
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgysYOVTQUga9hDSSZt4AaABAg", "responsibility": "ai_itself",   "reasoning": "virtue",           "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgxaXZ9bh5QJjoSx83h4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_Ugyv5v3qgiT4IIvQypl4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_UgyFfeLoF0-q-8-x_AZ4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugyh7cqqSJwLinW-dDJ4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgyhH7FacuFHXiBgcrh4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgxJzFN0q2CfxhWCyHV4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgwXWyr-fwsu1mt-ID94AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "industry_self", "emotion": "outrage"},
  {"id": "ytc_UgySTvTfm1uY5ZITN9d4AaABAg", "responsibility": "ai_itself",   "reasoning": "virtue",           "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgwbYZAJmw-8FaIjDfV4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "industry_self", "emotion": "indifference"}
]
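The raw response is a JSON array with one object per comment id, carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal Python sketch of how such a response could be parsed and a single comment's coding looked up (the field names come from the response above; the array is shortened here for brevity, and any validation scheme beyond a key check is an assumption, not part of this tool):

```python
import json

# Shortened copy of the raw LLM batch response shown above
# (two of the ten objects, verbatim).
raw = """[
  {"id": "ytc_UgysYOVTQUga9hDSSZt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxaXZ9bh5QJjoSx83h4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "regulate", "emotion": "fear"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Index the batch by comment id, checking each row has all four dimensions.
codes = {}
for row in json.loads(raw):
    missing = [d for d in DIMENSIONS if d not in row]
    if missing:
        raise ValueError(f"row {row.get('id')} missing {missing}")
    codes[row["id"]] = row

# Recover the coding for the comment inspected above.
row = codes["ytc_UgxaXZ9bh5QJjoSx83h4AaABAg"]
print(row["responsibility"], row["emotion"])  # distributed fear
```

The printed values match the Coding Result table for this comment, which is the point of this view: the table is derived from exactly this raw output.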