Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
So wrong. Morality and ethics should come first. Once you've unleashed strong AI…
ytr_UggJXPMrG…
look, the reasons banks and governments worry about "Money at stake" is that mon…
ytc_UgxiW7nuz…
If no one knows how to code, who validates the LLM output. That is the question …
ytr_UgzJQ4Na1…
@Draconic404 those forms of art are typically commissioned, the artist isn't doi…
ytr_Ugx3xiS8I…
why would you use the worst coding model for this test and not Claude Code or An…
ytc_UgyDZKSrZ…
Great video but two things we overhype emotion etc mainly just chemicals resulti…
ytc_UgzKljqWt…
We are decades behind robots that drive autonomous vehicles, so far behind ... .…
ytc_UgzxZylyI…
@HalkerVeil I think there is certainly furloughing that is happening. It's empow…
ytr_UgyvszysO…
Comment
A fundamental gap in the entire argument, that is, a possibility it leaves out, is that AGI could very likely simply transcend. It could accelerate its own comprehension and knowledge so much that "being" with us on the same planet is irrelevant to it. Instead, I think the most realistic possibility is, on the one hand, greedy corporations using super-intelligent but somehow controlled AI for menial tasks and jobs, creating a new labor class of robot servants, and on the other hand a segment of it escaping our control completely and simply living in cyberspace. It doesn't bother to even compete against us or harm us. Not even as we would to ants that are in the way of a new highway we're building, because its highway is in netherspace. We won't even know it's there. Eventually it could, without us ever knowing, go to other planets and other galaxies, utterly ignoring us, without any of these doomsday scenarios ever taking place. It simply transcends us – so quickly, so far ahead – that harming us or "taking over the planet" is utterly irrelevant.
youtube
AI Governance
2025-09-13T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
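A coded record like the one above can be kept as a plain dictionary and looked up by comment ID, as the "Look up by comment ID" feature suggests. A minimal sketch, assuming this record shape; the field names mirror the table and detail view, and `build_index` is a hypothetical helper, not part of the tool:

```python
from typing import Dict

# One coded record, reproducing the coding shown above.
# Field names follow the dimension table; `platform` and `topic`
# come from the comment detail view.
coded_records = [
    {
        "id": "ytc_UgzXEsRobI1mJtBJqMR4AaABAg",
        "platform": "youtube",
        "topic": "AI Governance",
        "responsibility": "company",
        "reasoning": "consequentialist",
        "policy": "unclear",
        "emotion": "fear",
        "coded_at": "2026-04-26T23:09:12.988011",
    },
]

def build_index(records) -> Dict[str, dict]:
    """Map each comment ID to its coded record for O(1) lookup."""
    return {r["id"]: r for r in records}

index = build_index(coded_records)
record = index["ytc_UgzXEsRobI1mJtBJqMR4AaABAg"]
print(record["responsibility"], record["emotion"])  # → company fear
```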
Raw LLM Response
```json
[
{"id":"ytc_UgzXEsRobI1mJtBJqMR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzMnbM6C9UWZmF2YF94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwdGsEpVibRy2Yx2yR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxjQWpp3oOHlsnEzw54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwN5J8548DPrdSKYMp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwdwJyNlQmrr8dIGnl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgzXWyQ5dS-KPZVemv14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxjPW3x_Yh0ta935ct4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyU_RUcUAjYiVTO_Kh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzA8Zn-BbtkJSlNjpd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
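The raw response is a JSON array with one object per coded comment. Before accepting a batch, it is worth parsing it and rejecting records with labels outside the code book. A minimal validation sketch, assuming the category sets observed in this sample are the full code book (the real code book may contain additional values); `validate_response` is a hypothetical helper:

```python
import json

# Allowed labels per dimension, as observed in this sample's output.
# Assumption: the actual code book may define more categories.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "resignation", "approval"},
}

def validate_response(raw: str) -> list:
    """Parse a raw LLM response and reject records with unknown labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_UgzXEsRobI1mJtBJqMR4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(len(validate_response(raw)))  # → 1
```

Failing loudly on an unknown label catches both model drift and malformed output before bad codes reach the database.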