Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- "@GidarGaming they can't reach AGI, though. there needs to be a fundamentally di…" (ytr_UgyDgAY1B…)
- "It's crazy how much the YouTube algorithm seems to work against promoting your c…" (ytc_UgyIr6qzE…)
- "13:25 I’m at that exact same spot as he is… I don’t even know what to tell my b…" (ytc_Ugw9bBKr7…)
- "A.I. was resurrected in the 1940s and 1950s, (it existed in history; erased/hidd…" (ytc_UgwFmjusH…)
- "I want to see a robot drive and get out of the truck and deliver my Amazon boxes…" (ytc_UgzWFl1Ts…)
- "I'm all for this. Maybe we humans can go outside, touch grass and talk to people…" (rdc_m5oc4di)
- "It's actually an urban legend that Gandhi's AI became nuke-happy in the Civ game…" (rdc_o7c9vhm)
- "I recommend the book 'Superintelligence' by Nick Bostrom. It's brilliant and loo…" (ytc_UghCEDSQh…)
Comment
Vision 1: Continuum
A superintelligence serves global corporations, creating a totalitarian order. Basic income is imposed, mass consumption collapses, money loses relevance, and the system is sustained through algorithmic repression. This leads to ideological fragmentation, independent cities, and potential uprisings or wars for freedom.
Vision 2: Star Trek
Money is abandoned, scarcity is overcome, and society embraces a post-material economy. Superintelligence guides humanity toward exploration, knowledge, and peaceful interplanetary cooperation.
Two possible futures: one of control and resistance, the other of evolution and transcendence. Both are triggered by the emergence of a superintelligence that redefines what it means to be human.
youtube · AI Governance · 2025-09-09T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyyTi1tKzAo1maLjyZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyZWEFR3zsOIE-GKI54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy6Z7XzKLp6C1Nda3V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx36MgoeNZEyKU0sbt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8pN2mPsWxnnbnxT14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw2sr1coeuS1Wyf4Z94AaABAg","responsibility":"none","reasoning":"unclear","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgyP2JqZjD2iJ2bhuDx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzHdHtpaAyOwA8Oa5d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugytlke94Q7li1QcIPJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgydFGDIpyMFuhLzXuh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
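A raw batch response in this format can be parsed and sanity-checked before the codings are stored. The sketch below is illustrative only: the allowed values per dimension are inferred from the examples on this page, not from the full codebook, which may define additional categories.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# sample records shown above; the real codebook may include more values.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "liability", "regulate"},
    "emotion": {"indifference", "approval", "mixed", "fear", "outrage"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codings by comment ID.

    Raises ValueError if any record carries a value outside the
    dimension's allowed set, so malformed model output fails loudly.
    """
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

With the mapping in hand, the "look up by comment ID" view is a plain dictionary access, e.g. `parse_response(raw)["ytc_Ugz8pN2mPsWxnnbnxT14AaABAg"]["emotion"]`.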