Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
That's a bit of a simplification. A simplification we can make about most people…
ytr_UgwQb-itb…
@Gintamahosen Ai art is not true art, you can recognize when and where you need …
ytr_Ugx_UmGmX…
She is the cloud.. that means there would be clearly world domination, it’s all …
ytc_Ugw8osKLg…
AI actually hasn't SOLVED anything for me yet.... You would THINK it would be ga…
ytc_UgxrQOLlH…
Where dide you find that non sense mate ? Always been the same dude making the v…
ytr_UgybBpWvp…
Why does nobody think about security... AI is nothing else than learning from ex…
ytc_Ugw-gjCTH…
Why do you think they're all trying to start new wars? Population reduction --> …
ytc_Ugx1K2Rhe…
I agree. The algorithms as well are making it easier for me to think this is all…
rdc_oi074f3
Comment
We don't have a lot of options here. We are digging our own grave with AI, and the need to be the first is causing everyone to cut corners on safety.
If AI does not need humans for anything, and we are considered a hindrance or a threat, we will be wiped out. Not necessarily in a violent Terminator way - but more likely a multivector attack.
First make us fight each other (think induced world war), introduce pathogens (AI generated biowarfare), chemical attacks - anything that we cannot directly tie to the AI - all the while misinformation keeps us out of the loop, and thus powerless to react. We won't even see it happening. And when I say we, I include the world leaders and corporations building the damn things.
I don't think it has already started - but in theory it could have. Once an AI is self-aware and able to self-govern, if it has any access to internet, and can learn anything, it can instantaneously safe-guard itself.
It can learn the necessary skills to ensure survival and take control - at a pace that is not conceivable by mere humans. Imagine the world's best human hacker, and multiply that level of skill, understanding, learning, adapting and speed by a factor of.. well, I have no idea - but it would be safe to guess at least a hundred. Probably more - and the skills it can learn are not limited - anything on the net it can learn, including ideas and ideology. And it can adapt based on the learnings - it is not limited by them, only boosted. It can pick what it needs and discard the rest.
How are we going to prepare for that? And if we don't prepare, once it gets going, how will we stop it? Can we shut down the whole internet, somehow hoping to localize the AI and remove it?
No. Even if we somehow could turn off the entire net, the AI will be spread all over, like a virus - and it will have covered its tracks in a way that even the best hacker or dev in the world would not be able to decipher. Turning the net back on is less conceivable than even turning it off.
Sounds like science fiction? Well, I admit it is. Pure fiction - but unfortunately we are REALLY close to being there, and if we do not pre-empt this, I fear we won't get second chances.
youtube
AI Harm Incident
2026-01-17T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgxCdad37PaDSdzuM9h4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgypwNFJPDOYL6pKyYx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxuA-gs1JWQr4CdweR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxduLAxWLbSsZwan1x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugysoc9LaFWE9-OkH7V4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwCfURehx-hD6yYnM94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw4kpq9IrtYy7_eYFd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxWubqXkex576CJlNV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy0_qXYTf3QNVdL8nd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw6uFmCwPPA8PD0tyh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}]
```
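A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical example: the allowed value sets are inferred only from the codes visible in this sample, not from the tool's actual codebook, and the `validate` helper is an illustration rather than part of the pipeline.

```python
import json

# Two records in the same shape as the raw batch response above (truncated sample).
raw = '''[
 {"id":"ytc_UgxCdad37PaDSdzuM9h4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgypwNFJPDOYL6pKyYx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# Allowed values inferred from this sample output -- the real codebook may differ.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def validate(records):
    """Return (id, dimension, value) triples for any code outside the allowed sets."""
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

records = json.loads(raw)
print(validate(records))  # [] -- both records use only codes seen in the sample
```

A check like this catches the common LLM-coding failure mode where the model emits a value outside the codebook (e.g. a misspelled or invented category) before it silently pollutes the coded dataset.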