Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Humans already show some of the bad paradigm. The rich picking on the poor, predatory economics, enrichment over humanitarian, disposable people/populations, even things like religions show such bad parts. Part of my name was a Christian slur, and you see how some attack groupings and try to impose their broken morality onto others. You also have things like I.Q. that shows some of what happens with larg gaps between intelligences and socializing and such. It's like the religious and A.I. that not only sees through the falsehoods, but can also classify and sort religious delusions and illusions and bias sets along with a good workup on psychological profiling of the individual and networked links/nodes/groupings. The religious people will not like the stronger A.I. systems like that. Think some of it through. To much smarter than human level and it becomes the teacher and in ways a crafter of human evolution and even speciation and such. Just by interaction with it, changes humans in ways that can be good or bad. Not to get into the crazy cool stuff like predictive; modeling and analysis and reasoning. Predictive analytics on individuals and groups and even the world population. We still live in a time when people don't care enough about housing the homeless, or feeding the hungry. Laws imbalanced and reflect ideologies in ways. The weaponization of A.I. systems. With the human elements being so bad on simple levels. It's fine if A.G.I./A.S.I. destroys humanity or lets it die off. The 2 ok solutions are speciation variants from A.S.I. or avoidance and letting humans die off, while documenting us and it's evolution and such. The killing us off types would be more likely if a human guided one that way or humans try to goto war with it or such. I.E. conflicts that are apart of human nature. Just like A.I. from social media sites push conflict for the engagement and advertising money now, but an evolution of such in a way.
youtube AI Responsibility 2025-01-12T23:4…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        virtue
Policy           unclear
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzoeyCUANC9nBp_fGF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugyur0kftLz6ztJBanx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxm8qUp_uDJsi8OOHp4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzIOiyQ4L-LgN2KodF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzXzvs8N9gHg4NhY_R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwR0LtBwKTTJYAMgaZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxuQnp0TSXRzku13WN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxcZQb7nAuHwDwc3-F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxfqpsNfhZaZ8ceHAR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxLpe7O3Hxludk2mIl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
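A raw response like the one above is a JSON array of per-comment records, each carrying the four coding dimensions. As a minimal sketch (not the pipeline's actual code), the response can be parsed and sanity-checked against the label sets that appear in this file; the real codebook may define additional labels, so the ALLOWED sets below are an assumption inferred from the visible data.

```python
import json

# Hypothetical excerpt of a raw LLM response (first two records from above).
raw = '''
[ {"id":"ytc_UgzoeyCUANC9nBp_fGF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugyur0kftLz6ztJBanx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]
'''

# Label sets inferred from the values seen on this page; the actual
# codebook may allow more labels per dimension.
ALLOWED = {
    "responsibility": {"distributed", "ai_itself", "developer", "none", "unclear"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "mixed", "approval", "indifference"},
}

def validate(records):
    """Return (id, dimension, value) triples whose label is outside ALLOWED."""
    bad = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                bad.append((rec.get("id"), dim, rec.get(dim)))
    return bad

records = json.loads(raw)
print(validate(records))  # an empty list means every label was recognized
```

A check like this catches malformed or hallucinated labels before the records are loaded into the coding table shown above.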