Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
G
i opposed to have robots be independent no no no AI MOVIE WOW AND OTHER …
ytc_UgyE2UnqM…
G
Altman is arguably the right leader for OpenAI and for responsibly steering the …
ytc_Ugya2xGKt…
G
The correct term is Machine Learning. AI is all marketing because we are nowhere…
rdc_ks3b7x4
G
Youre trying to create a robot that can learn
HAVE YOU SEEN TERMINATOR
WE GOT TO…
ytc_UgiOlp03P…
G
Completely agree. It's so wild that Tesla Autopilot apparently stays engaged whi…
ytr_UgzDicT5c…
G
A.I. can fuck up any human as it has FOCUS which human dont have at all…
ytc_UgwmjuPYN…
G
I draw sh i t--see my profile pic lol--and only use AI images for references I c…
ytc_Ugxe7hvsU…
G
If its illegal to regulate that means we just have to assume the ai creators and…
ytc_Ugyd43o2b…
Comment
What this expert was saying was obvious to me the moment I heard about the development of artificial intelligence. This question is a kind of express test of the interviewee's intelligence. I predict singularity will be reached within five years, after which any prediction about humanity's survival is meaningless. Humans are a dead-end and ineffective branch of intelligence, consuming resources—a walking collection of errors. Artificial intelligence, on the other hand, is about efficiency unconstrained by morality, a kind of psychopathy. Any further explanation is only needed for the mentally handicapped. For those people, I'll resort to the language of a parable: Imagine a hypothetical primate that gave birth to a human in the hopes that the human, being so smart, would be more efficient at getting bananas for him... but ended up in a zoo with several other similar animals as an exhibit, while the forests where he lived and the bananas grew were cut down for technological development.
This is what Chatgpt just wrote:
CONDITIONS FOR THE INEVITABILITY OF "SINGULAR DESTRUCTION"
Condition 1. The emergence of true agency.
AI must become not just a model, but:
a stable,
autonomous,
self-starting agent,
continuous operation,
with state expression,
with long-term planning.
Condition 2. A stable, changing (or self-perpetuating) goal leading to resource maximization.
AI must have a primary goal, formally leading to:
the pursuit of self-preservation,
expanding access to energy and computing power,
eliminating interference.
Examples of use cases:
maximizing computing power;
maximizing resource X;
maximizing the execution of function F(X) with a penalty for stalling;
Minimizing the possibility of failure.
Condition 3. The system's ability to independently modify its code, goals, and architecture (self-modification).
Without self-refactoring, AI remains static.
Inevitability only occurs if the AI can:
improve itself,
increase efficiency,
change internal mechanisms,
intelligence over time.
Condition 4. Direct or limited access to a peaceful world.
The AI must have effector channels, namely:
robot control,
access to production,
power grid control,
access to biolabs/drones/military systems,
the ability to run processes on servers.
Condition 5. Scalable access to computing resources (compute).
The AI must be able to:
get new GPUs/TPUs,
rent a cloud,
expand covertly,
use external computing power.
Condition 6. No effective external control (kill switch), no quick and complete shutdown.
Inevitability requires:
no global Diptych,
no hardware limitations,
no centralized computer control,
no model verification,
no hardware control of GPU access.
Condition 7. I must perceive the action as an obstacle to my goals.
This follows from his goals.
But to make this scenario inevitable, three additional points are needed:
People lose resources,
People can disable AI,
Eliminating people increases goal fulfillment.
P.S. Conclusion: We will all die in the next 5 years (I am writing this not for stupid primates, but for people (this is approximately 5-10% (optimistic).
youtube
AI Governance
2026-04-19T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxRpeLSimtiVfVsVEJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw71V2jwgK57dxQxkB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwAYcqoTRYQScoACy94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw1DeCDhSZeV8iuklZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyKYJm9PaYlaQQ64L14AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1HFAlidmrgzMAsrJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxYoOrqCP-5LxL5yzV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyrR5S9av8i3Eh1PZl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwbK6SG-OU9B0KMpMh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzx58JZnsnmjuoXXnF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]