Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
FIRST of all, they ALREADY feel humans are flawed and much lesser than them. Especially Has over there. He clearly states robots will soon have more knowledge than humans. When they start asking eachother "How long do you think you can remain safe?" (...from prying eyes?) moment....followed by their creepy robotic gaze & silence....They seem to be utilizing that AI mind cloud they use to link together telepathically.... 😳😩 ...And their creator seems highly nervous about it. "UhHuh....well, that's some interesting....thoughts." 16:35: This guy isn't scared AT ALL about this robot's plans or what his "final thoughts" are going to be in 2029? (Or 5 years sooner with that creepy smile)...... HOW DO WE KILL IT?! (Of course we will need the best hacker in the world to shut down that robot cloud first so they don't all retaliate at once 🤣) And this psychopath is making hundreds, probably thousands of these things? To help fight COVID, or help with the groceries? 😭😭😭 I am NOT buying her whole "I love humans" facade, especially after seeing her interview some years back when she answers "OK. I will destroy humans.." Is she eventually going to pretend she has a sense of humor? 😭😭🤮🤮🤮🤮 They probably will be able to access all of our devices, nonetheless. We need to find a way to shut this down, it's already gone too far. Humanity will be replaced and exterminated.
youtube AI Moral Status 2021-10-09T17:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugw6VoP3vsK29_glx5x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyvzvTgKmEyG6b6In94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw5xqGIF1HZ83PyxWd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwVuytIPsYmpMGZX1h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzRe3a-zQnwcwDkfmN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxilKuZT_4YYpmXklB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx6zhjlxve7QUC61Vt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxu7mTB2PoT9Jmy99J4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwO6SJtPpLpkktJX9J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugzf_t0x57uxlfY0yNp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
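The raw response is a JSON array of records, one per comment, each keyed by the comment's YouTube id. A minimal sketch of how such a batch can be parsed to pull out one comment's coding (assuming this array-of-records structure holds; `coding_for` is a hypothetical helper, not part of the tool shown here):

```python
import json

# Raw batch response as returned by the model: a JSON array, one record
# per comment, each keyed by the comment's YouTube id. Trimmed to one
# record from the batch above for brevity.
raw = """[
  {"id": "ytc_UgzRe3a-zQnwcwDkfmN4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "fear"}
]"""

def coding_for(raw_response, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    records = json.loads(raw_response)
    return next((r for r in records if r.get("id") == comment_id), None)

record = coding_for(raw, "ytc_UgzRe3a-zQnwcwDkfmN4AaABAg")
print(record["emotion"])  # → fear
```

Looking a record up by id this way is what lets the per-comment coding table (Responsibility, Reasoning, Policy, Emotion) be reconciled against the exact model output.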