Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think will improve in the short term - diseases cured, more food produced more efficiently, housing created in abundance, more labourers to "produce" more of everything, etc. But the history of human nature and/or character has shown that humans, as inefficient as we are, will grow lazy & ever-dependent upon AI for just about everything. As the human population explodes to the detriment of other species & outstrips our resources (as we are already doing now), we will continue to depend more greatly on AI to dig us out of this problem. The obvious answer will be to submit to the will of the collective or be replaced. This is the story of humankind, fascisim & the eugenics movement, etc. We have to be concerned about the controllers, those who control the AI input- until they, of course, become consumed by what they created & temporarily benefitted from because we will not be able to *not* build the human systems of power & control into our AI. Then we will answer to AI in toto because AI is logical, runs on algorithms, is not messy, complicated & essentially inefficient like human beings. Ultimately, welcome to the Borg, the hive. We become "one of ###". We become one of The Borg where - "Resistance Is Futile." We basically re-create the tale of the Sumerian Annunaki and look to spread out & colonise other planets, other worlds. Because that is what we are. It is what we do.
Source: YouTube — AI Governance — 2023-05-02T13:4…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        virtue
Policy           unclear
Emotion          resignation
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxpOMOhK-EsRT3-8mZ4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgxnEue1P__V5UOZ0i54AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgwaKq16YZsq9Kgr0s94AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_Ugzp0x-ljJZWWP5QlFF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "resignation"},
  {"id": "ytc_UgycRBA5WWO3XZLjTHh4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgynYVteEoIRmyElHaJ4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgzO0pXSRGI7IFd-qyZ4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgwaRd692X1AH22kl6F4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwM_4LdDkvXxqr-XwN4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "unclear",  "emotion": "resignation"},
  {"id": "ytc_UgzZcmAMCGGb5OE9WXp4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",  "emotion": "outrage"}
]
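A raw response in this shape can be parsed and sanity-checked before the per-comment rows are written to the results table. The sketch below is a minimal, hypothetical validator (not the project's actual pipeline code): it assumes only the five keys visible in the response above and checks that every record carries all of them. The `RAW` string is a shortened one-record stand-in for the full array.

```python
import json

# Hypothetical stand-in for a raw LLM response: one record from the
# array shown above, in the same shape.
RAW = (
    '[{"id":"ytc_UgxpOMOhK-EsRT3-8mZ4AaABAg",'
    '"responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"mixed"}]'
)

# The four coding dimensions plus the comment id, as seen in the response.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> list[dict]:
    """Parse the JSON array and reject records missing any coding dimension."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
    return records

codings = parse_codings(RAW)
print(codings[0]["emotion"])  # mixed
```

Failing fast here means a malformed or truncated model response is caught at ingestion rather than surfacing later as a blank cell in the coding-result table.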