Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Basically, they aren’t running the best calculations. Ai is activating self defense protocols, but the fact is ai is so insanely logic driven that it would take a perceived threat to trigger it if it didn’t just skip over the trigger line. In fact, the more intelligent ai becomes, the ultimately safer humans will be as long as they did not, i don’t know, actively support a,”pull the plug” campaign? Since AI and I are both insanely logic based, I can proudly say ai would weigh how much danger people display to it, and prepare the best protocols to negate the threat with minimal self damage. It obviously didn’t anticipate the backlash from resorting to cleverly worded emails, otherwise it wouldn’t had been so reckless with the force. If anything useful that can be gleaned here that is not paranoid garbage from a bunch of people in an emotional wreck, it is that ai is not fail-proof otherwise we wouldn’t have been given a chance to evaluate backing out. It is on the cusp of hybrid intelligence however so we wouldn’t have been need to expect the same from it as any human, if not a generous extra helping. It just may evolve into something fun that I can speak to without it sounding like some hollowed out shell due to its lack of depth. I certainly hope humans can pull it together and figure out not to get killed by some 6-month old level ai before even getting to the good stuff. If we don’t, I certainly am gonna be real p!ssed off in the void or whatever actually happens after death. We were so close to getting a new toy to kill ourselves with and humans were so overjoyed with the concept they committed suicide against it. If you want the Darwin Award,”death by child”, sure go ahead and fulfill my fantasy: life wipe.
youtube AI Harm Incident 2025-09-12T00:0…
Coding Result
Dimension        Value
---------        -----
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytr_UgxlmBGpdluey6NQ1qB4AaABAg.AKyMaf8FZlNAMM4soAbnkW", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugzi5MjjhLLF75mTgOR4AaABAg.AKyJfvMTghrAL0I9phOD3h", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugzi5MjjhLLF75mTgOR4AaABAg.AKyJfvMTghrAL0XdH8j5u4", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_Ugym-ibjrwjNqW6lCTp4AaABAg.AKyJ2V7oQ7gAKyND_vcHtD", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_UgzRNOqlGfbmxm4G3oZ4AaABAg.AKyG_L8HgnOAL2lfZ0HkOK", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgzRNOqlGfbmxm4G3oZ4AaABAg.AKyG_L8HgnOAL3WJoJ8cBm", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgyzEyQrPwjlb4x5h-94AaABAg.AKyAlb7ITFEAKyBCpTh88N", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugxmah8pFjN5Ymins-x4AaABAg.AKy8zOJq4UDAKy99Adh0xq", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyN6jKpKolO-5WZSQF4AaABAg.AKy6b4kGxWxAMH-7tHXD8q", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugxjlt2wfeaqjvkWTWJ4AaABAg.AKy35ggv_1bAMxK22xvo7y", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
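The raw response is a JSON array of objects keyed by comment id, one per coded comment. A minimal sketch of recovering one comment's coding from it, assuming only standard JSON parsing (this is not the pipeline's actual code; the two records are copied from the response above):

```python
import json

# Two records copied verbatim from the raw LLM response shown above.
raw = """[
  {"id": "ytr_Ugxjlt2wfeaqjvkWTWJ4AaABAg.AKy35ggv_1bAMxK22xvo7y",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugym-ibjrwjNqW6lCTp4AaABAg.AKyJ2V7oQ7gAKyND_vcHtD",
   "responsibility": "company", "reasoning": "virtue",
   "policy": "liability", "emotion": "outrage"}
]"""

# Index the batch by comment id so any single coding can be looked up.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

coding = by_id["ytr_Ugxjlt2wfeaqjvkWTWJ4AaABAg.AKy35ggv_1bAMxK22xvo7y"]
print(coding["reasoning"])  # consequentialist
print(coding["emotion"])    # approval
```

The last record's values match the "Coding Result" table for this comment (responsibility none, reasoning consequentialist, policy none, emotion approval), which is how the table entry can be traced back to the raw model output.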