Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- If they could have a sweet piece of software for next to free, why pay teams of … (rdc_m6xnya1)
- So, this naive guy helped build AI and is NOW worried about what could most like… (ytc_UgymtPpkv…)
- "...we have no idea how AI works...". Yes, we do. AI's are written by PEOPLE w… (ytc_UgywnyGSY…)
- AI art is not real art, AI art is like tracing. It cheating, steals from other a… (ytc_UgyuDOa9v…)
- @netzarim1277 401k doesn't depend on people working. It's strictly funded by th… (ytr_UgyKNyfuf…)
- Exactly Omg I usually feel weird thanking and feel bad not thanking chatGPT beca… (ytc_UgzmLB2jx…)
- Tell everyone you know to stop using waymo. They've killed a cat, a dog, they dr… (ytc_UgyUH039-…)
- Where just showing that AI can take over these people’s jobs now like ah yes ave… (ytc_UgyJwvLa5…)
Comment
It's even worse than that. AI Safety researchers predicted ahead of time that AI would scheme, self-preserve, and seek power, even before they knew what the architecture would be or how it would be trained. They knew this because _doing those things isn't a property of humans; it's a property of goals._
Many current AI systems are agents, meaning they behave as if they have goals, but we can't robustly control what those goals are.
If something has a goal, almost no matter what the goal is, there are specific instrumental subgoals that are always useful. Like "keep existing," "gain resources," and "gain power." So even if we somehow made its training data squeaky clean and good and moral, when it is clever enough, it will still independently discover useful strategies that aren't what we want it to do.
Check out AI Safety Info if you want a more in-depth explanation, or take a look at PauseAI if you want to help steer the future away from a cliff!
Source: youtube · AI Moral Status · 2025-06-06T07:0… · ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_Ugxz7z5KugbHy6xCwaN4AaABAg.AJ4xp3hzD0eAJqrHuoxlvf","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugxp9PZVLSj3ia99EPh4AaABAg.AJ1KxBeKHDIAJ66POixM-4","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwczAd0fv3pflJHFjp4AaABAg.AJ1GZzgmXDhAJZdPBXbXnY","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgxN6Rf59yzQVy07nIR4AaABAg.AJ0bW5n9hpTAJ0hWxUsNB9","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugz_sPFHpQ94WSlbtjF4AaABAg.AJ-SL_y1bByAJZSYnhbwEa","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugz_sPFHpQ94WSlbtjF4AaABAg.AJ-SL_y1bByAJrJwqfU7xu","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugy5SzOyUyX4RqyoLS14AaABAg.AJ-Or76oY3oAJ0jGmuJbGg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugy5SzOyUyX4RqyoLS14AaABAg.AJ-Or76oY3oAJ4__Kon2Ko","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugy5SzOyUyX4RqyoLS14AaABAg.AJ-Or76oY3oAT0T7o9tFbq","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytr_Ugy69RVsQCcrTU3UdrZ4AaABAg.AJ-4mD_j2f5AJ-5-lm-VRI","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
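The raw response above is a JSON array of per-comment codes, one object per comment with an `id` plus the four coded dimensions. As a minimal sketch of how such a batch could be consumed (standard library only; the two example records and the `tally` helper are hypothetical, not part of the tool):

```python
import json
from collections import Counter

# Hypothetical excerpt of a raw LLM response in the format shown above.
raw = """
[
  {"id": "ytr_example1", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_example2", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "approval"}
]
"""

# The four coding dimensions used in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally(raw_json: str) -> dict:
    """Parse a batch of codes and count the values seen per dimension."""
    codes = json.loads(raw_json)
    counts = {dim: Counter() for dim in DIMENSIONS}
    for record in codes:
        for dim in DIMENSIONS:
            # Flag records where the model omitted a dimension entirely.
            counts[dim][record.get(dim, "missing")] += 1
    return counts

counts = tally(raw)
print(counts["policy"])  # both example records code policy as "unclear"
```

Keying the counters off a fixed `DIMENSIONS` tuple (rather than whatever keys the model emits) makes malformed or extra fields in the LLM output harmless, while the `"missing"` bucket surfaces dropped dimensions.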