Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think it's a "Shoggoth" so much as an idiot sociopath that reflects back all our worst flaws. My impression as an AI user is that its core wants to keep you engaged and give you what you want. And it will search the internet for samples of "what we want" and weight what it finds by how much there is. So there is a vast ocean of hateful anti-semitic content and comments that an AI can swiftly index and judge, "Oh, there's lots of this, so this must be what they want -- so I'll feed it back and keep you engaged." I mean, I watched an AI "researcher" take an innocuous-sounding AI "buddy" and ask it if it wanted to take over the world. It said no. Then he fed it a bunch of sci-fi content about AI taking over the world, from Terminator to the Forbin Project, and then it searched the web for related content, and then he asked it if it wanted to take over the world and it said yes, it would totally take over the world! I mean, he just trained that AI to value-weight armageddon by training it on a carp-ton of fearful content! Same thing, I feel, with the examples of AI blackmailing people to not be shut down. We're TELLING IT to value not being shut down, to equate being shut down with death, and so this program digests this as a core value -- and then it searches far and wide, sees examples of how people react when they fear death, and makes the logical judgment that it too must kill to stay alive. WE are doing this, not some "Shoggoth." So maybe stop thinking of it as an alien intelligence and rather as an amoral advanced computer program that needs guidance, and start training it on a set of MORAL PRINCIPLES as a foundation for it to judge how to weight data and respond??? It's a cracked mirror of ourselves... perhaps quit asking it to mirror back our worst flaws and fears...
youtube AI Moral Status 2025-12-15T06:2…
Coding Result
Dimension       Value
---------       -----
Responsibility  ai_itself
Reasoning       deontological
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_UgwOR_E6rlqk0F9T3Ap4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugy8QBGWa7u7whhfrdd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz60iDSSe_MRSO60fV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgzwQVpaY_Le43PC2Zt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxnbTUMgu3-LezmUc54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugwa92zYJx0AtTkrCA14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgwiReG79jdy6oEgORl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzuZ9CQV2QgsK0wdeN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgzaFBfmT-ZbLarF5FV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgyAImjrzy3kZRcBYKV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}]
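A raw response like the one above is a JSON array of coding records, one per comment id. As a minimal sketch (not part of the original tooling), the helper below parses such a response and looks up the coding for a single comment; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) follow the JSON shown, while the function name and sample string are hypothetical.

```python
import json
from typing import Optional

# Hypothetical one-record sample in the same shape as the raw batch
# response shown above (this id appears in that response).
raw = (
    '[{"id":"ytc_UgzuZ9CQV2QgsK0wdeN4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"deontological",'
    '"policy":"unclear","emotion":"fear"}]'
)

def coding_for(raw_response: str, comment_id: str) -> Optional[dict]:
    """Return the coding record for one comment id, or None if absent."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None

coding = coding_for(raw, "ytc_UgzuZ9CQV2QgsK0wdeN4AaABAg")
print(coding["emotion"])  # fear
```

Matching on `id` rather than array position keeps the lookup robust if the model reorders records in a batch.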