Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
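A lookup like this reduces to matching a comment ID against the stored coding records. A minimal sketch in Python, assuming the records live in a JSONL file with one object per line (the `coding_results.jsonl` file name and record layout are assumptions; the tool's actual storage is not shown here):

```python
import json
from pathlib import Path

def lookup_coded_comment(comment_id: str, path: str = "coding_results.jsonl") -> dict | None:
    """Return the coding record for a comment ID, or None if absent.

    Assumes one JSON object per line, each with at least an "id" field,
    mirroring the raw LLM response format shown at the bottom of this page.
    """
    with Path(path).open(encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: inspect the coding for one of the Reddit comments listed below.
print(lookup_coded_comment("rdc_nm9v3td"))
```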
Random samples — click to inspect
- "Just saw one the other day where a lawyer for a moment thought his case was abou…" (rdc_nm9v3td)
- "I'm pretty sure, this guy and the female robot were on silicone valley show.. ha…" (ytc_UgybM9JyX…)
- ">A year later, however, the engineers reportedly noticed something troubling …" (rdc_e7keda1)
- "God these interviewers were horrible. They did not understand enough to just shu…" (ytc_UgzXL4YW8…)
- "Yes, autonomous cars have the potential in the medium and long term to improve s…" (ytr_UgxtvW2bY…)
- "His point is we’re going to hit a singularity soon with AI. Point is no one know…" (ytr_UgwzRqvVo…)
- "AI already knows how stupid humans are with just plain facts. How stupid we are …" (ytc_Ugy0OpRNm…)
- "In the very near future all online hiring will be an AI employee clone talking t…" (rdc_n6renzg)
Comment
It's even worse than that. AI Safety researchers predicted ahead of time that AI would scheme, self-preserve, and seek power, even before they knew what the architecture would be or how it would be trained. They knew this because doing those things isn't a property of humans; it's a property of goals.
Many current AI systems are agents, meaning they behave as if they have goals, but we can't robustly control what those goals are.
If something has a goal, almost no matter what the goal is, there are specific instrumental subgoals that are always useful. Like "keep existing," "gain resources," and "gain power." So even if we somehow made its training data squeaky clean and good and moral, when it is clever enough, it will still independently discover useful strategies that aren't what we want it to do.
Check out AI Safety Info if you want a more in-depth explanation, or take a look at PauseAI if you want to help steer the future away from a cliff!
Source: youtube · AI Moral Status · 2025-06-06T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
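The four coded dimensions map naturally onto a small record type. A minimal sketch of that schema, assuming the label sets are limited to the values visible in this sample (the project's full coding taxonomy is not shown on this page):

```python
from dataclasses import dataclass

# Labels observed in this sample only; the full label set is an assumption.
RESPONSIBILITY = {"ai_itself", "developer", "unclear"}
REASONING = {"consequentialist", "unclear"}
POLICY = {"unclear"}
EMOTION = {"fear", "indifference", "resignation", "outrage", "approval"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, as in the table above

    def validate(self) -> None:
        """Reject labels outside the observed value sets."""
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"bad responsibility: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"bad reasoning: {self.reasoning}")
        if self.policy not in POLICY:
            raise ValueError(f"bad policy: {self.policy}")
        if self.emotion not in EMOTION:
            raise ValueError(f"bad emotion: {self.emotion}")
```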
Raw LLM Response
```json
[
  {"id":"ytr_UgyCLLm0FDNKmLQxzuN4AaABAg.AIxmO9qRfR5AIzaKexHFS6","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgxPBLoTKKoViW30UkR4AaABAg.AIxiTVeOZGGAJ-B2vZzwXI","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgyNLTnluXfxGwQ-NGR4AaABAg.AIxbb-fp30qAKT6UdVRo1E","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytr_UgzuU0OpWQvT_U5N4rJ4AaABAg.AIxYMBFoRceAJ0nswhTioR","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgzuU0OpWQvT_U5N4rJ4AaABAg.AIxYMBFoRceAJ2LCxLs8r-","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgzuU0OpWQvT_U5N4rJ4AaABAg.AIxYMBFoRceAJZTYqEnGaf","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgyYnZttPQg9-QvAKNx4AaABAg.AIxY17JXIqiAIxf4hxEFTA","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytr_UgxCOML_yw6tpD0Iu5V4AaABAg.AIxXBHzBGkfAIxbNKQStBp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgxCOML_yw6tpD0Iu5V4AaABAg.AIxXBHzBGkfAIxe413NehS","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgxCOML_yw6tpD0Iu5V4AaABAg.AIxXBHzBGkfAIyD-HSoL2V","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
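Because the model returns a flat JSON array, the raw response can be parsed and shape-checked in a few lines. A minimal sketch, assuming the response text is available as a string (`raw` below is a placeholder, not the tool's actual variable):

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check each record's shape."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for i, rec in enumerate(records):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing keys: {sorted(missing)}")
    return records

# Example with a one-record response in the format shown above.
raw = '[{"id":"rdc_nm9v3td","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}]'
print(len(parse_llm_response(raw)))  # 1
```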