Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Energy generation, mineral exploration, maintenance, and troubleshooting in case of machine error are all areas where AI would need humans in order to maintain its own existence. Even a conscious and autonomous AI (something far beyond what we have now, and something we cannot even begin to imagine how to reach) would need humans in order to survive, because its own "reality", the digital "environment", is a human construct.

AIs are nothing more than a complex set of instructions given to software. An AI can only "learn" what we tell it to learn, and all the odd behaviours are nothing more than sloppy programming. The story of an AI dominating the world was written like that because that is how we most frequently portray AI in science fiction: a threat that will eventually turn against humanity. It just reflected back to us how we see AI.

Many of those supposedly odd conversations are falsehoods or exaggerations, but the few that are true are the consequence of two factors: the first is our own behaviour on the internet, and the second, where the true risks of AI lie, is the bias of the programmers who make the AIs. An AI will only do what its programmers allow and train it to do. But programmers have their own personal ideas, political and social views, prejudices, blind spots, and so on, and, as with any human product, these characteristics tend to be "reproduced" in their work. So, for example, an AI responsible for moderating content on a social network, programmed by a left-leaning programmer, will probably tend to treat left-leaning content as "good content" and right-leaning content as less good or even "bad content", not because it is consciously making a decision, but because its instructions (algorithms) and its training (data set) will most probably carry this bias. The risk is not in the AI, which is not sentient and will not be in the foreseeable future, but in the human behind it.
YouTube · AI Governance · 2023-11-06T01:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwI1E2NVkWw2WldmLZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgweRXxHjIAB9WazhI94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzQ9nN0zHy7_fGp5UR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwxTszIcpH_Llrsymd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgypBdQ2m8U9Ows_Phd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy1tuTM3b_Esc-kPYJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyakAxa3x_f03IqJDV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyxdJrmmX0uY3FDW2x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw50o7C5CwkfKnnZhd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxEwghKxpHj74cegul4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]