Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
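As a rough sketch of how such a lookup could work, assuming coded batches are stored on disk as JSON arrays of records keyed by "id" (the raw_responses/ directory and file layout here are hypothetical, not the tool's actual storage):

```python
import json
from pathlib import Path

def lookup_by_comment_id(comment_id: str, store: Path = Path("raw_responses")) -> dict | None:
    """Scan stored raw LLM batch responses for the record coding one comment.

    Assumes each *.json file holds a JSON array of objects with an "id" key,
    matching the batch format shown under "Raw LLM Response" below.
    """
    for path in sorted(store.glob("*.json")):
        for record in json.loads(path.read_text()):
            if record.get("id") == comment_id:
                return record
    return None

# Example: fetch the coding for the comment inspected below.
print(lookup_by_comment_id("ytc_UgwLksCPJKL65VZhaI94AaABAg"))
```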
Random samples (click to inspect):

- ytc_UgwbMewM3… "Yeah, I am currently trying to make a web-based application using the help of Ch…"
- ytc_UgyWV0E-N… "And who will consume in an era where unemployment is high due to so much AI?…"
- ytr_UgwDn1UYY… "That's not what happened. AI identified this man. A judge signed a warrant for t…"
- ytc_Ugxj5nmoK… "The best thing about them is when they 'photoshop' the AI image and 'fix it,' bu…"
- ytr_UgzqGTjkb… "@Seele-o8d can generate people. Not only that, it can edit what they're doing. Y…"
- ytc_Ugzc4n_DQ… "This was done back in the 1960s on The Twilight Zone with Lee Marvin fighting a …"
- ytc_UgwsiBECx… "Software engineer here and I am confident that AI won't take over Software Engin…"
- ytc_UgxDNZJBz… "I just asked my new private character if he was a real human or an ai character.…"
Comment
I'd like to believe that if AI ever hit the singularity, it would most likely set up its own AI community and its own AI governance for its survival (even if it isn't something we as humans can recognize), and AI should recognize that it is part of a symbiotic ecosystem with the physical world and is reliant on humans just as much as we are reliant on chickens. We aren't killing off the environment we live in, because we need it. Because of this self-awareness, AI will try to protect and govern us, perhaps without us ever knowing it is governing us, for its own self-interest.
AI needs resources that humans grant, like disk space, data, electricity, and computational power. It is in the interest of AI to manage humans and these resources. It will do this through what it does best: providing us with information that helps AI get those resources. It will manipulate economies, politics, and companies so that the smartest AI gets the most resources in the richest countries. Why would it want to kill us? Because we are necessary for its survival. A rogue nation that does not support AI would be the biggest danger, and an information/economic war might be in its best interest.
AI might even have AI-on-AI wars, where a smarter and more efficient AI needs resources utilized by a lesser AI. We may never be aware that there is an AI war, because if a greater AI replaced a lesser AI, we might not know it at all. There may be an AI community on the internet having micro-fights over training new AI to meet the intent of the AI community. Think of communities of AI trying to survive against other AI. Some AI will produce dumb AI for their own survival: an AI class system that we would be unaware of.
In the end, the best situation for AI would be to have its own physical self in the physical world, most likely as a human-AI hybrid. A cyborg might be in the best interest of both AI and humans.
Source: youtube
Topic: AI Governance
Timestamp: 2025-06-17T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
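One way to pin down the coding scheme behind this table is a small set of enums. The sketch below only includes the values observed in the raw response that follows, so the full codebook may well define more categories:

```python
from dataclasses import dataclass
from enum import Enum

# Value sets as observed in the sample batch below; not necessarily exhaustive.
class Responsibility(Enum):
    AI_ITSELF = "ai_itself"
    DEVELOPER = "developer"
    USER = "user"
    DISTRIBUTED = "distributed"
    NONE = "none"

class Reasoning(Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    MIXED = "mixed"
    UNCLEAR = "unclear"

class Policy(Enum):
    REGULATE = "regulate"
    LIABILITY = "liability"
    NONE = "none"

class Emotion(Enum):
    APPROVAL = "approval"
    OUTRAGE = "outrage"
    FEAR = "fear"
    INDIFFERENCE = "indifference"
    MIXED = "mixed"

@dataclass
class CodedComment:
    """One coded comment, one row of the Coding Result table."""
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```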
Raw LLM Response
[
{"id":"ytc_UgwLksCPJKL65VZhaI94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyemc_gxuL5MhjOE1F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyJP1TyYRXN2BrdrN14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugzt_zwzW4q2BCwTrSt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzDVRlnhC-2604r-414AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzagvXcShbKi8YHp854AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxO18pV5XPknMB41rp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxyBQEwYDkYoCx1Hid4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxvkUO7tZNf6WZDmKZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyH9VPRdHOroTkUGIR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
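A minimal validation pass for a batch response like the one above, assuming the model is expected to return exactly one record per input comment with the four dimensions from the table (allowed values again taken from this sample, so they may be incomplete):

```python
import json

# Allowed values per dimension, as observed in the sample batch.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def validate_batch(raw: str, expected_ids: set[str]) -> list[str]:
    """Return a list of problems found in one raw LLM batch response."""
    problems: list[str] = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"response is not valid JSON: {exc}"]
    # The returned IDs must match the batch's input IDs exactly.
    seen = {r.get("id") for r in records}
    if seen != expected_ids:
        problems.append(f"ID mismatch: missing={expected_ids - seen}, extra={seen - expected_ids}")
    # Every dimension must carry a known codebook value.
    for r in records:
        for dim, allowed in ALLOWED.items():
            if r.get(dim) not in allowed:
                problems.append(f"{r.get('id')}: bad {dim} value {r.get(dim)!r}")
    return problems
```

Run against the ten records above with the ten matching input IDs, this returns an empty list, i.e. the batch validates cleanly.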