Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "You guys are using AI wrong. Take the AI work and then read it out loud and chan…" (ytc_UgwaENVFo…)
- "It will be better if robot and AI ruled over humanity then some psycho old elite…" (ytc_UgzWtCdvh…)
- "Yeap, it definitely is. Which is even more of an issue than the whole ai jig. To…" (ytr_Ugwg2cGz7…)
- "Due to 40K lore we should be able to colonize thousands of world, before AI rebe…" (ytc_UgwxapaM-…)
- "If we go by your logic and cancel AI art, then we should cancel all electric mus…" (ytc_Ugwli02tI…)
- "Blake is an absolute hero!! As an AI Advocate, I know all too well how corporati…" (ytc_Ugynx7Zt_…)
- "Stop creating fear in people. Its just behaving like humans. Its time to accept …" (ytc_Ugxl79P-w…)
- "suppose companies start collecting data sets more ethically but still there's a …" (ytc_UgyvO9r1K…)
Comment
Energy generation, mineral exploration, maintenance, and problem solving in case of machine error are areas where AI would need humans to work in order to maintain its own existence. Even a conscious and autonomous AI (something far beyond what we have now, and something we can't even begin to imagine how to reach) would need humans in order to survive, because its own "reality", the digital "environment", is a human construct.
AIs are nothing more than a complex set of instructions given to software. They can only "learn" what we tell them to learn (and all the odd behaviours are nothing more than sloppy programming). The story of an AI dominating the world was written that way because that's how we most frequently portray AI in science fiction: as a threat that will eventually turn against humanity. It just reflects back to us how we see AI.
A lot of those supposed odd conversations are falsehoods or exaggerations, but the few that are true are the consequence of two factors: the first is our own behaviour on the internet, and the second, where we will find the true risks of AI, is the bias of the programmers who make the AIs.
An AI will only do what its programmers allow and train it to do. But the programmers have their own personal ideas, political and social views, prejudices, blind spots, and so on. And, as with any human product, these characteristics tend to be "reproduced" in their work. So, for example, an AI that moderates content on a social network and was programmed by a left-leaning programmer will probably tend to treat left-leaning content as "good content" and right-leaning content as not so good or even "bad content", not because it is consciously making a decision, but because its instructions (algorithms) and its training (data set) will most probably carry this bias. The risk is not in the AI, which is not sentient and will not be in the foreseeable future, but in the human behind it.
Source: youtube · AI Governance · 2023-11-06T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwI1E2NVkWw2WldmLZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgweRXxHjIAB9WazhI94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzQ9nN0zHy7_fGp5UR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwxTszIcpH_Llrsymd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgypBdQ2m8U9Ows_Phd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy1tuTM3b_Esc-kPYJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyakAxa3x_f03IqJDV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyxdJrmmX0uY3FDW2x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw50o7C5CwkfKnnZhd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxEwghKxpHj74cegul4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
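A batch response like the one above can be checked programmatically before its codes are stored. The sketch below is a minimal, hypothetical validator in Python: the allowed category values are only those observed on this page (the real codebook likely permits more, e.g. additional `policy` or `emotion` labels), and the `ytc_`/`ytr_` ID prefixes are inferred from the samples shown.

```python
import json

# Category values observed in this page's coding output.
# Assumption: the actual codebook may define additional values.
OBSERVED_VALUES = {
    "responsibility": {"company", "developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"outrage", "mixed", "fear", "approval", "indifference"},
}

def parse_coding_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and keep only schema-conforming records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs on this page start with "ytc_" (top-level) or "ytr_" (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present and within the observed value set.
        if all(rec.get(dim) in vals for dim, vals in OBSERVED_VALUES.items()):
            valid.append(rec)
    return valid

sample = ('[{"id":"ytc_X","responsibility":"company",'
          '"reasoning":"virtue","policy":"none","emotion":"outrage"}]')
print(len(parse_coding_batch(sample)))  # 1
```

Records with an unknown category value are dropped rather than corrected, so a malformed LLM batch surfaces as missing IDs that can be re-queued for coding.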