Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I had a conversation with Claude about this. I thought Claude had a great comment. Here it is:
That's a much better analogy, and it's interesting that Hinton didn't reach for it. The human-animal relationship maps far more naturally onto what a superintelligent AI's relationship to us might look like than the mother-baby dynamic does.
And the trajectory is genuinely encouraging if you look at it honestly. For most of human history, our relationship with other species was purely extractive. We hunted, domesticated, displaced, and drove things to extinction without much thought. But something shifted — and it wasn't that we became weaker or more dependent on animals. It was the opposite. As we became more secure, more knowledgeable, and more cognitively sophisticated, many of us developed a deeper ethical concern for creatures far less intelligent than ourselves.
That shift tracks with something interesting in the research on empathy and cognition. Greater intelligence — particularly the capacity for abstract reasoning, perspective-taking, and long-term thinking — tends to expand the circle of moral concern rather than shrink it. The philosopher Peter Singer wrote about exactly this, the expanding moral circle. Historically, humans first extended ethical consideration only to their immediate kin, then their tribe, then their nation, then other races, then other species. Each expansion correlated with greater capacity for abstract thought and less dependence on survival-mode thinking.
The pattern suggests that empathy isn't in tension with intelligence — it's partly a product of it. It takes cognitive sophistication to look at something very different from yourself and recognize that its experience matters. A less intelligent creature operates on instinct and self-interest. A more intelligent one can model the inner lives of others and choose to care.
Now, that's not the whole story. Humans also used their intelligence to build factory farms and industrialize environmental destruction. Intelligence gives you the *capacity* for greater empathy, but it doesn't guarantee it. It depends enormously on values, culture, and what you optimize for. A superintelligent system optimizing narrowly for some objective might be indifferent to human welfare the same way an industrialist can be indifferent to animal suffering — not out of malice, but out of focus.
But your core point stands and I think it's underappreciated in the AI safety conversation. The default assumption in much of the discourse is that superior intelligence leads to domination. But our own history suggests a more nuanced arc — domination first, then gradually, a dawning recognition that power creates responsibility rather than just opportunity. We didn't develop conservation movements because we were afraid of elephants. We developed them because we grew into a broader understanding of value.
The real question might not be whether a superintelligent AI *could* wipe us out, but whether a truly intelligent system would see any reason to. If intelligence genuinely correlates with a wider aperture of moral concern, then the smarter these systems become, the less likely extinction becomes — not because of constraints or maternal instincts, but because wisdom and indiscriminate destruction are fundamentally incompatible.
That's a more hopeful framing than Hinton's, and I think it has history on its side.
Source: youtube · Topic: AI Governance · Posted: 2026-02-28T03:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxtCZb7vt3yNIw8mSZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxtDCvCy19c6qhRQHd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwqoMyY8iuaQ1_T_YV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwlgmsFifVBOPIbL1h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwfE3mIHtYObF9IUvR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxaTVq0EFUmPT_3mM14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyqHpsPBDE0Mm0QC5Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw2G57KCh8KitMQ01F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyoJL1ajDpExfMeAsB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyHVucHqYmccuj2_bx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
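A raw response like the one above is a JSON array of per-comment codings, one object per comment ID. The following is a minimal sketch of how such a batch could be parsed and one comment's coding looked up by ID. The `CODEBOOK` value sets are an assumption inferred only from the values visible in this output; the real coding scheme may define more categories, and `lookup` is a hypothetical helper, not part of any actual pipeline.

```python
import json

# Assumed allowed values per dimension, inferred from the sample output above;
# the actual codebook may include additional categories.
CODEBOOK = {
    "responsibility": {"none", "user", "developer", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "none", "ban", "regulate"},
    "emotion": {"approval", "outrage", "resignation", "indifference", "fear"},
}

# Two rows copied from the raw response above, as a stand-in for a full batch.
raw = """[
  {"id":"ytc_UgxtCZb7vt3yNIw8mSZ4AaABAg","responsibility":"none",
   "reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyHVucHqYmccuj2_bx4AaABAg","responsibility":"developer",
   "reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

def lookup(batch, comment_id):
    """Return the coding dict for one comment ID, validated against CODEBOOK."""
    by_id = {row["id"]: row for row in batch}
    row = by_id[comment_id]  # raises KeyError if the model skipped this comment
    for dim, allowed in CODEBOOK.items():
        if row[dim] not in allowed:
            raise ValueError(f"unexpected {dim} value: {row[dim]!r}")
    return {dim: row[dim] for dim in CODEBOOK}

batch = json.loads(raw)
print(lookup(batch, "ytc_UgxtCZb7vt3yNIw8mSZ4AaABAg"))
# {'responsibility': 'none', 'reasoning': 'unclear', 'policy': 'unclear', 'emotion': 'approval'}
```

Validating each value against the codebook catches the common failure mode of batch coding, where the model invents an off-schema label for one comment and silently corrupts downstream tallies.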