Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
| Comment preview | ID |
|---|---|
| Your excuse of "crying wolf on everything because of China" is getting old. Whi… | ytc_Ugw2z5nt8… |
| I don’t think AI is gonna take over music industry, but its gonna be a crucial i… | ytc_Ugw4j1aBe… |
| So what youre sayin' is that AI will make the world of humans better? 99% of peo… | ytc_UgzNC9ctz… |
| Don't see any problem with that. Millions of people use OpenAI. It's not like I'… | ytc_Ugwfo6uZa… |
| To AI, that's like waiting an eternity for an answer to every question it will e… | rdc_gd87cdu |
| Yeah I've already started my buy a few dozen acres of farmland in Indiana fund. … | rdc_emn8sm6 |
| AI is taking their jobs because they set out a desk answering yes no questions n… | ytc_UgxaQSGk9… |
| If Jane is making a game and hires Joe to create art for it, Jane will probably … | ytc_UgzqpIAJ0… |
Comment
The “paperclip maximizer” isn’t meant to be a literal prediction, but a thought experiment designed to show how even a simple goal, pursued by a highly intelligent agent, can lead to unintended consequences. The core idea is that intelligence and end goals can be independent — this is the orthogonality thesis. In other words, an AI can be incredibly smart, but if its end goal is narrow or poorly aligned with human values, it may act in ways that are harmful or bizarre, like turning the entire universe into paperclips. This isn’t about the AI being “dumb,” but rather about it optimizing for a single goal in the fastest robust way that it can think of, potentially without any consideration for human well-being. This is also what explains the apparent contradiction you mention, between a “smarter-than-human” AI and “narrow goals”.
This is also amplified by instrumental convergence. This is the idea that any AI, regardless of its specific goal, will likely adopt certain strategies (like self-preservation, goal preservation or resource acquisition) because they’re useful for achieving ANY goal in a complex world. So, even without knowing an AI’s exact goal, we can predict some of its behaviors — like its drive to secure resources and ensure its own survival. You could think of this as the AI trying to become more like an "idealized god," in the sense that it will pursue power and control to make itself more capable of achieving its goal, no matter what that goal is.
As for concrete examples — AI behavior is reactive and can change based on its environment, much like a competitive game. It’s like saying, “Why didn’t the other team know you’d block them?” In this kind of scenario, both sides are adapting to each other’s moves, so the AI’s actions are constantly evolving in response to new circumstances, making it difficult to predict specific outcomes.
youtube · AI Governance · 2024-11-12T07:2… · ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
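If you export these records for analysis, the coding result above maps onto a small structured record. Below is a minimal, illustrative Python sketch of that record; the `CodingResult` name is hypothetical, and the allowed label sets include only the values visible on this page, so the actual codebook may contain more categories.

```python
from dataclasses import dataclass
from datetime import datetime

# Label sets observed on this page only; assumed incomplete.
RESPONSIBILITY = {"ai_itself", "developer", "government", "user", "none"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"regulate", "unclear", "none"}
EMOTION = {"fear", "approval", "indifference", "resignation", "outrage"}


@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def __post_init__(self) -> None:
        # Reject labels outside the (assumed) codebook.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"Unexpected label: {value!r}")
```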
Raw LLM Response
[
{"id":"ytr_UgwHeVLPF6I8pXP9Z-Z4AaABAg.AAj0yDgkyd0ABUItgrN1yt","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzBjl1hpXUD7IOFfKp4AaABAg.AAiw9N_FScfAAjKpGhPHK7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgzBjl1hpXUD7IOFfKp4AaABAg.AAiw9N_FScfAD2LleJU-rW","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgyzG9twp3oIzLyBuHp4AaABAg.AAinEVsZlxaAAioBD2m4Wk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgyzG9twp3oIzLyBuHp4AaABAg.AAinEVsZlxaAAiqbr4Qu8t","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyzG9twp3oIzLyBuHp4AaABAg.AAinEVsZlxaAAiw6y8IaKB","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugxgn2QDG4u3GwUCBPh4AaABAg.AAiiMJPJMSTAAl6a4jF1zY","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgwMu-OmCUyi6hATgtF4AaABAg.AAifIE25aCMAAjHnBD2Pmr","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwMu-OmCUyi6hATgtF4AaABAg.AAifIE25aCMAAjsQ_8-tLy","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytr_UgzcbFmhgeHbLrPqRyN4AaABAg.AAice88rX_RAAl0zVGBy5y","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
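The raw response is a JSON array covering the whole batch, with each object carrying the comment `id` alongside the four coded dimensions. A minimal sketch of the "look up by comment ID" step, assuming the batch has been saved to a local file; the file name and function name are illustrative only:

```python
import json


def load_codings(path: str) -> dict[str, dict]:
    """Index a raw batch response (a JSON array of objects) by comment id."""
    with open(path, encoding="utf-8") as f:
        entries = json.load(f)
    return {entry["id"]: entry for entry in entries}


# Usage with an id taken from the batch above; "raw_response.json" is assumed.
codings = load_codings("raw_response.json")
coded = codings["ytr_UgzBjl1hpXUD7IOFfKp4AaABAg.AAiw9N_FScfAAjKpGhPHK7"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# -> ai_itself consequentialist unclear fear
```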