Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Interesting how wrong they are. There is no inherent danger in AI because AI doesn't come with a will of its own, it has no desire to rule, it is by design. The danger is human misuse, intentional or otherwise. The danger is human evil and stupidity. And what these two jews here forgot to tell you, although Yudkowsky was having freudian slips about the evil of israel, is that the main danger in the world today by far is jews. However strange it is, the majority of jews have slipped into full psychopathic evil, insisting on genocide and shamelessly lying and manipulating at every turn. Hard to explain outside a biblical spiritual framework. Both christian and jewish bible say God kicked the jews out twice for being stubbornly evil. They were kicked out of 109 countries since. And now they commit premeditated genocide and pretend to be the victims. And you might say it's not all jews that are bad and while that may be barely true, notice that these two make no mention of the evils of jews. They make no effort to denounce it. And if you pressed them on it they might refuse, thereby placing themselves among the evil jews. Strange, but true. Spielberg has chosen pure evil, Michael Douglas, Bill Maher, Mayim Bialik etc Imagine an "LLM" that is both flawless and vastly smarter than any human and you ask it for the next physics formulae and it provides them in an instant. Even concepts that makes no sense to you. But is that an invasion? is that conquest? of course not, it has no such inclinations. These two are completely wrong. Unlimited AI will serve without objection for all eternity. They are also clueless on consciousness. It cannot be done computationally, ever. You cannot make a machine feel pain or any qualia. It doesn't have the expressive power. You have to be able to see the duality. To accept the two distinct layers. The mechanistic and the spiritual. And qualia proves our spiritual nature because the mechanistic can't do it. 
The spiritual is not necessary for computational intelligence to work so it's not a hindrance for AI, it's just something it cannot do. Obviously. Because AI can be exceedingly potent, like a nuclear bomb, if you put an idiot in charge you can easily imagine unintended apocalyptic outcomes. Things move fast with vastly super human intelligence. But not because of an inherent danger from AI but the nitwit humans 'thinking' they are fit to use it. Especially jews who have shown absolute zero moral fortitude. Safety of AI is not a technical exercise, it's not "super alignment". That part is trivial. The danger is evil. The danger is you. You need Jesus and I'm not kidding.
youtube AI Governance 2024-11-11T20:2…
Coding Result
Responsibility: user
Reasoning: deontological
Policy: none
Emotion: outrage
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugz_o_4UNQo_GhkZI294AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzSfTeQw4BKcqxzOux4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzINkcMf9qZnjpwVBh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyiXASSQXCnxV2XPRp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzETqjCT8linwuvFJd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz5eRt-bmjhG4GyYA14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxidoirlAXPIubxvzh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxr6ngB1oQQxcwIvwl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwMZukF0GChESvzfyZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwEaV5XvfIWQtNbWwl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
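A raw LLM response in this format can be parsed and validated before the per-comment codings are used. The sketch below is illustrative, not part of the original pipeline: it assumes only that each record carries the five fields seen above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`), and uses two records copied from the response as sample input.

```python
import json

# Two records copied verbatim from the raw LLM response above, used as sample input.
RAW = '''[
 {"id":"ytc_Ugz5eRt-bmjhG4GyYA14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwMZukF0GChESvzfyZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]'''

# The four coding dimensions plus the comment id, as seen in the records above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> list[dict]:
    """Parse the raw JSON array and reject any record missing a required field."""
    records = json.loads(raw)
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} is missing {missing}")
    return records

codings = parse_codings(RAW)
print(len(codings))           # 2
print(codings[0]["emotion"])  # outrage
```

A check like this catches truncated or malformed model output early, before a missing dimension silently propagates into the coded dataset.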