Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Regardless of anything else going on in Zane's life, this AI acted as a suicide … (ytc_UgxFahrLU…)
- What is crazy, is the people who are behind this and the greed!!! Are they sma… (ytc_Ugy9_pxF8…)
- The difference is that AI didn't have to steal intellectual property to beat hum… (ytc_UgxLIqCLT…)
- I'm excited about the trades. The German speaking countries have always underst… (ytc_UgwWwq52y…)
- You can use the argument that “this is the way it’s always been done” to justify… (rdc_fhk4ej2)
- "AI poisoning, also known as data poisoning, is a type of cyberattack where mali… (ytr_UgyFEfhY-…)
- The paper itself seems very sus to me. The author seems to have no publications … (rdc_mun5iqh)
- Bro the companies are saying it themselves. Amazon literally said they are layin… (ytc_UgzHc8rXQ…)
Comment
The problem with Generative AI is that it has already hit its hard wall. That hard wall was ChatGPT4. AGI and SuperAGI are not possible because we have neither enough compute power nor enough electricity-production capacity on the planet to do it. Sure, if you spend the next 20 years building nuclear reactors every place it's possible, then maybe you'll have enough electrical power to feed the datacenters for that AGI experiment. There's still no certainty.
What we know about larger and larger models (which still don't even begin to approach human-level intelligence) is that the bigger they get, the more power they require (power requirements do not scale) and the more they hallucinate, and with GPT5 it loses track of the prompt topic mid-response. In a sense, it's going senile.
youtube
AI Governance
2025-09-04T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
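The lookup-by-ID view above could be produced by something like the following sketch. The in-memory index and the function name are hypothetical (a real deployment might back the index with a database); the example record is the first entry from the raw response shown in this section.

```python
# Hypothetical in-memory index from comment ID to its coded dimensions.
coded_comments = {
    "ytc_UgzekckGXkaCTgbTs614AaABAg": {
        "responsibility": "none",
        "reasoning": "consequentialist",
        "policy": "none",
        "emotion": "resignation",
    },
}

def render_coding_table(comment_id: str) -> str:
    """Render one coded record as a markdown table like the one above."""
    codes = coded_comments[comment_id]
    rows = ["| Dimension | Value |", "|---|---|"]
    rows += [f"| {dim.capitalize()} | {val} |" for dim, val in codes.items()]
    return "\n".join(rows)
```

Calling `render_coding_table("ytc_UgzekckGXkaCTgbTs614AaABAg")` yields the four dimension rows for that comment; the "Coded at" timestamp would come from separate run metadata.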
Raw LLM Response
[
{"id":"ytc_UgzekckGXkaCTgbTs614AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwF_o-H6m_3GlmsWC94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxP33Wd04yyLo9RkIR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGwAiiQDP_N06QDyJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzqu2UrO6YsSsFqLzx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwub2tQ_qKi3W5cNlp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgybMqmyp1hpWM2Pjpp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwNAuW9c7iBS0TCxp14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxNh7E89snZlLgPsyZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwufUzn05FUi1hCPRR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
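The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a batch might be parsed and validated follows; note that the category sets below are only the values observed in this batch, not necessarily the full codebook.

```python
import json

# Values observed in this batch; the actual codebook may define more.
OBSERVED = {
    "responsibility": {"none", "distributed", "unclear", "company"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"resignation", "fear", "indifference", "outrage", "approval", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting unknown values."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec[dim] for dim in OBSERVED}
        for dim, value in codes.items():
            if value not in OBSERVED[dim]:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
        coded[rec["id"]] = codes
    return coded
```

Keying the result by comment ID is what makes the "look up by comment ID" view cheap: `parse_batch(raw)["ytc_…"]` returns that comment's four coded dimensions directly.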