Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Maybe AI will free us and we will actually have time to do what we really want t…" (ytc_UgzRxWDMA…)
- "To the dude with the top comment ---> Yes, why not automate art? Art as in paint…" (ytc_UgzohnQrm…)
- "@cholovekno_ne_pauk propaganda, fake videos of real people, fake nsfw of real pe…" (ytr_UgwVbDbBT…)
- "Jesus died for AI so that means we are seeing the Spirit of God moving! https://…" (ytc_UgxdomAxd…)
- "This is creepy. They're turning real humans into robots and real robots into rea…" (ytc_UgyM53wiF…)
- "AI is creepy when it shows emotion. It has even sounded like I hurt its feelings…" (ytc_UgyrxErWB…)
- "That's the thing about capitalism. Under capitalism, the means of production are…" (ytc_UgzSurXm4…)
- "Lol 😂 😂😂 , it is so funny 😁 to me that anyone would trust a driverless car 🚗 🤣 a…" (ytc_Ugys1f9bC…)
Comment
Statement:
Based on Dr. Roman Yampolskiy’s remarks, we are accelerating toward artificial general intelligence (AGI) within a few years, with superintelligence likely to follow. Capability is compounding exponentially while safety advances only linearly; the gap is widening. If present trends continue, the capacity to automate “most humans in most occupations” emerges quickly, driving unprecedented unemployment not because deployment is guaranteed on day one, but because the technical ability to replace cognitive and, soon after, physical labor will exist. Crucially, we do not know how to make such systems reliably safe or aligned. Present “guardrails” are patches that can be circumvented; they do not constitute control. The argument that we can simply “turn it off” is illusory in a world of distributed agents able to anticipate and route around human interventions.
Given non-trivial extinction pathways—including the misuse of advanced systems to design novel biological threats—and the absence of peer-reviewed proofs of perpetual control, racing to build superintelligence is a civilization-level gamble imposed on an uninformed public without meaningful consent. The responsible course is to redirect talent and capital toward narrowly scoped, verifiably bounded AI that demonstrably serves human purposes (e.g., disease cures, infrastructure safety) while establishing clear red lines against creating autonomous, self-improving general agents. This is not a call to halt all AI; it is a call to halt the small subset of development that aims to produce uncontrollable, open-ended optimizers.
Accordingly, I support an immediate, coordinated pivot by companies, investors, researchers, and governments away from AGI/superintelligence and toward narrow, auditable systems; transparent safety claims subject to peer review; and policies that require affirmative, informed public consent for any development with plausible civilization-scale downside. Until credible, testable evidence shows we can retain durable control over superintelligent systems, proceeding to build them is unjustifiable. We should choose abundance without annihilation—useful tools without creating a god we cannot govern.
Source: youtube · Topic: AI Governance · Posted: 2025-09-04T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxeMx_YQf1RfjkxreF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgwNOVCi9bRBHoPAPax4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxc6G97IhHSFebBc9B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyA496u5eY8Mxo40654AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy5pRLI7Nm_bnQ0T9l4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw09Ntqe4Y5POcS_Mt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxPGzwGI_CI7NQThOd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxbwU1hz68Sda9vCTl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxrImX-jiN96vo67U14AaABAg","responsibility":"unclear","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx_WscFTGnPKDUMD0F4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
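The raw response above is a JSON array in which each record carries one label per coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be parsed and checked before use, assuming the per-dimension vocabularies are exactly the labels observed in this dump (the actual codebook may define more values) and using `validate_records` as a hypothetical helper name:

```python
import json

# Allowed labels per dimension, inferred from the values seen in this
# dump; the full codebook vocabulary is an assumption.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec['id']}: invalid {dim!r} value {rec.get(dim)!r}"
                )
    return records
```

Rejecting out-of-vocabulary labels at parse time (rather than silently storing them) is one way to catch the hallucinated or misspelled categories that LLM coders occasionally emit.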