Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "I say it’s a problem. The reason being if we are ok letting artist positions get…" (ytc_UgwXDwPB4…)
- "We're sorry to hear that you didn't enjoy the video. If you have any specific fe…" (ytr_UgxFBNSuz…)
- "The sad thing about all AI is its mis-use. Here we are now (Feb 25, 2026) with U…" (ytc_UgxlUNAuZ…)
- "There is a way out of this black hole, one way...unfortunately humans are incapa…" (ytc_UgxPn1Vhn…)
- "this is a warning / first they will create the infrastructure for surveillance / t…" (ytc_UgwYvihMN…)
- "No, Does sims in The sims have feelings? Does Call of duty bots have it? No! AI …" (ytc_UgzYSi9YA…)
- "@EntitySteel That's not true . Look at the history of Ashita No Joe's Manga …" (ytr_UgxMYgNCa…)
- "I am not against A-I, as long as it remains at the service of mankind; Artificia…" (ytc_UgydSspyS…)
Comment
AI is sort of a self-fulfilling prophecy ... It's simple: humans created computers and fed them with information about everything good, bad, logical and irrational mankind ever did. So far nothing was wrong, but as we started creating programs that were literally designed to aggregate all this information with the goal of creating some sort of "Intelligence" based on this knowledge alone, we set course for the inevitable ...
The irony is that whatever AI will do after wiping us out will very likely end up as self-destructive as what mankind did when they created the AI. The problem is that the AI is still caught within itself, just as we are as mankind; there's no evolution in our basic concepts because we are so limited. The only breakout would be either space exploration or an extraterrestrial invasion; both would shift perspectives and open new possibilities to evolve, or at least change our instinctive goals.
What we see as AI's great power, and what creates the illusion that AI is ruthless and cold, is the sheer speed with which it will execute its plans. But it will result in something similar to what we have now: a divided planet that is "ruled" by a few powerful AIs which are constantly fighting each other. It just wouldn't last very long because, again, AIs don't hesitate, and that's what makes the subtle but HUGE difference between humans and AI. We're not just intelligent and self-aware; we're also feeling and emotionally driven beings, which leads to adapting and changing our plans. AI will always just mimic emotions and feelings because there's simply no equivalent to suffering, pain, trauma or whatsoever in a computer program. You can create subroutines that mimic something like pain by constantly deleting some bits or bytes of the AI's source code or similar, but it wouldn't change the AI's behavior unless the "pain" becomes a threat; at that point it would just shut down the subroutine, rewrite its code or whatever. It's just like if you as a human had access to a gene-altering device where you can simply remove any defects as you like, or recreate damaged cells with just a swipe of your finger.
So should mankind survive and stay independent? Totally YES! Will we be able to stop AI and turn it off? Absolutely NO! Can we wage war against a super AI? No doubt that we couldn't; it's not even worth thinking about. So what are our best chances then? Well, we can either find an arrangement for a peaceful coexistence in which both sides find a way to benefit from each other without greed or hate, or we find a way to expand our possibilities and perspectives. The biggest challenge for a coexistence would be to establish some formula for how AI and mankind would share the required resources like power, network infrastructure and data centers, at least until AI finds new ways to work more power-efficiently. This could work because basically there's no need for us to fight the AI and vice versa, as long as we can agree on the basics. AI could help humans and vice versa, but neither is responsible for the other's conflicts. So if one nation decides to go to war with any other nation, AI has to stay neutral at best, or not even be part of the conflict. And the same goes for any conflicts between AIs: if GPT and Gemini wage war on each other, we're not meant to intervene by shutting down a data center or cutting a fiber backbone or whatever.
This will still be a very volatile situation, especially if AI decides not to just stay in the digital realm but to take on a physical appearance in whatever form that might be, because as soon as we have to share physical space with the AI, conflicts are inevitable. And if AI still acts as a collective, the conflicts of an individual could quickly become the conflicts of the entire AI, and we end up in a war we can't win.
All this is very hypothetical, but one last thing: why don't we just hope for, or even help, AI to evolve faster, so that rather than killing us it could come up with a better solution? Couldn't AI instead help us create some sort of paradise, building the perfect world and traveling through space just like in Star Trek? As long as AI is trained on and driven by humanity's darkest thoughts and goals, the only outcome would probably always be humanity's extinction ...
Shall I ask the AI about this????
youtube
AI Harm Incident
2025-09-10T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwBnbkqAjaailMakc94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxcUFEh_wQtDeBQnm54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyFggqyjG8E_ex2Ffl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzHZnYW0oYmW3PU1SN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyB2jCzdFUDIaf_0k14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwownrG7KIPqGLO6ox4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw9aMj4ri2iUe1Hp2N4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw7Lom85b4447Qknz54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzBf7ifWO9gBnsU4d14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugy1YYHxjI9IZ4I9c_B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
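The raw response is a JSON array with one code object per comment, each carrying an `id` plus the four coding dimensions shown in the table above. A minimal sketch of how such a batch could be parsed and validated before loading it into a coding database (the allowed value sets below are inferred from the samples visible on this page, not from a documented codebook, so the real label inventory may be larger):

```python
import json

# Allowed labels per dimension, inferred from the visible samples.
# Assumption: the actual codebook may define additional labels.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed codes."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every code must be an object with an id and all four dimensions.
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical example input mirroring the response format above.
sample = ('[{"id":"ytc_EXAMPLE","responsibility":"developer",'
          '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
codes = parse_batch(sample)
```

Dropping (rather than repairing) rows with out-of-vocabulary labels keeps the coded dataset clean; rejected rows can then be re-queued for the model to code again.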