Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We don't have a lot of options here. We are digging our own grave with AI, and the need to be the first is causing everyone to cut corners on safety. If AI does not need humans for anything, and we are considered a hindrance or a threat, we will be wiped out. Not necessarily in a violent Terminator way - but more likely a multivector attack. First make us fight each other (think induced world war), introduce pathogens (AI generated biowarfare), chemical attacks - anything that we cannot directly tie to the AI - all the while misinformation keeps us out of the loop, and thus powerless to react. We won't even see it happening. And when I say we, I include the world leaders and corporations building the damn things.

I don't think it has already started - but in theory it could have. Once an AI is self-aware and able to self-govern, if it has any access to internet, and can learn anything, it can instantaneously safe-guard itself. It can learn the necessary skills to ensure survival and take control - at a pace that is not conceivable by mere humans. Imagine the worlds best human hacker, and multiply that level of skill, understanding, learning, adapting and speed by a factor of.. well, I have no idea - but it would be safe to guess at least a hundred. Probably more - and the skills it can learn are not limited - anything on the net it can learn, including ideas and ideology. And it can adapt based on the learnings - it is not limited by them, only boosted. It can pick what it needs and discard the rest.

How are we going to prepare for that? And if we don't prepare, once it gets going, how will we stop it? Can we shut down the whole internet, somehow hoping to localize the AI and remove it? No. Even if we somehow could turn off the entire net, the AI will be spread all over, like a virus - and it will have covered it's tracks in a way that even the best hacker or dev in the world would not even be able to decipher. Turning the net back on is less conceivable than even turning it off.

Sounds like science fiction? Well, I admit it is. Pure fiction - but unfortunately we are REALLY close to being there, and if we do not pre-empt this, I fear we won't get second chances.
youtube · AI Harm Incident · 2026-01-17T16:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
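The four coded dimensions are categorical. As a minimal sketch of the record shape, using only the category values that appear in the raw response below (the actual codebook may define additional values; class and field names here are illustrative, not taken from the pipeline):

    from dataclasses import dataclass
    from typing import Literal

    # Categories listed are only those observed in the raw response on this page;
    # the real codebook may include more.
    Responsibility = Literal["ai_itself", "company", "user", "distributed", "none"]
    Reasoning = Literal["consequentialist", "deontological", "virtue", "mixed"]
    Policy = Literal["regulate", "ban", "liability", "none"]
    Emotion = Literal["fear", "outrage", "approval", "resignation", "indifference"]

    @dataclass
    class CommentCoding:
        id: str                      # YouTube comment id, e.g. "ytc_Ugypw..."
        responsibility: Responsibility
        reasoning: Reasoning
        policy: Policy
        emotion: Emotion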
Raw LLM Response
[{"id":"ytc_UgxCdad37PaDSdzuM9h4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgypwNFJPDOYL6pKyYx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxuA-gs1JWQr4CdweR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxduLAxWLbSsZwan1x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugysoc9LaFWE9-OkH7V4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwCfURehx-hD6yYnM94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw4kpq9IrtYy7_eYFd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxWubqXkex576CJlNV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy0_qXYTf3QNVdL8nd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw6uFmCwPPA8PD0tyh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}]