Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's even worse than that. AI Safety researchers predicted ahead of time that AI would scheme, self-preserve, and seek power, even before they knew what the architecture would be or how it would be trained. They knew this because doing those things isn't a property of humans; it's a property of goals. Many current AI systems are agents, meaning they behave as if they have goals, but we can't robustly control what those goals are. If something has a goal, almost no matter what the goal is, there are specific instrumental subgoals that are always useful. Like "keep existing," "gain resources," and "gain power." So even if we somehow made its training data squeaky clean and good and moral, when it is clever enough, it will still independently discover useful strategies that aren't what we want it to do. Check out AI Safety Info if you want a more in-depth explanation, or take a look at PauseAI if you want to help steer the future away from a cliff!
youtube · AI Moral Status · 2025-06-06T07:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytr_UgyCLLm0FDNKmLQxzuN4AaABAg.AIxmO9qRfR5AIzaKexHFS6","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgxPBLoTKKoViW30UkR4AaABAg.AIxiTVeOZGGAJ-B2vZzwXI","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgyNLTnluXfxGwQ-NGR4AaABAg.AIxbb-fp30qAKT6UdVRo1E","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytr_UgzuU0OpWQvT_U5N4rJ4AaABAg.AIxYMBFoRceAJ0nswhTioR","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgzuU0OpWQvT_U5N4rJ4AaABAg.AIxYMBFoRceAJ2LCxLs8r-","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytr_UgzuU0OpWQvT_U5N4rJ4AaABAg.AIxYMBFoRceAJZTYqEnGaf","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytr_UgyYnZttPQg9-QvAKNx4AaABAg.AIxY17JXIqiAIxf4hxEFTA","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytr_UgxCOML_yw6tpD0Iu5V4AaABAg.AIxXBHzBGkfAIxbNKQStBp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgxCOML_yw6tpD0Iu5V4AaABAg.AIxXBHzBGkfAIxe413NehS","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgxCOML_yw6tpD0Iu5V4AaABAg.AIxXBHzBGkfAIyD-HSoL2V","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"} ]