Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The only robot that destroys human lives is the Mobile Phone. He governs our liv…" (ytc_UgzqbRv9-…)
- "How AI takes over, part 1. Do we really want to sacrifice all this land, energy,…" (ytc_UgwtjKG-n…)
- "@1:26 "Over 25% of our code was AI-generated".... we could tell.... every new up…" (ytc_UgyGQ8GMZ…)
- "You make it sound like this isn't coming, soon. Yes, maybe this specific example…" (ytc_UgyGNW-jC…)
- "Just as this question to chatgpt “in growing population, with all the automation…" (ytc_Ugydn0WHl…)
- "Well all I can said is that you already 50 some years old and I am going to see…" (ytc_Ugyprv4Ya…)
- "No the real problem is we're using AI for the scenarios just look at freaking mi…" (ytc_Ugw-v_5o5…)
- "1. These programs dont learn, and are not inspired. Inspiration is defined as th…" (ytr_Ugy2QUOLC…)
Comment
This is pure fiction, not serious research or prediction. It makes so many huge assumptions:
1) An AI that can improve itself can be achieved just by scaling the existing model. There's no evidence of this.
2) AI only needs to get better at AI research. Nope, AI research is not a monolithic narrow field. It literally includes everything that humans (and other organisms) do. So it literally needs to get better at everything.
3) The opposite of linear means exponential. Nope, non-linear can also mean an S curve (and more), where the first half looks like an exponential curve, but the second half leads to a plateau. Every tech or biological advancement follows an S curve so far.
Just to name a few in the first section of the AI 2027 publication. There are way more hand-waving assumptions later on too.
Platform: youtube
Topic: AI Governance
Posted: 2025-08-29T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwrlhcuN2oR-XvmDIx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx-PWV_49ciTwzosi54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyLM8qKHIRajkzDr6V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzCaHy5HoBDhlwKA3R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw0yNhXMsYZx7RQlb94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzlPNd7JlBx993oMeJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx79D_xrmWh9czScYl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyKB2Pez5y32DOnjCt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzJebW_hnJ8tFnG2ot4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzavKsZ8ozn5F5iqOl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
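A raw response like the one above is a JSON array of coded rows, one per comment ID. A minimal sketch of how such a response might be parsed and sanity-checked follows; the `SCHEMA` dictionary here is inferred from the dimension values visible in this dump, not an official codebook, and the example ID is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the visible
# outputs in this dump (assumed, not an exhaustive codebook).
SCHEMA = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed"},
}

def validate_coded_rows(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values
    all fall inside the expected codebook."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Hypothetical example response with one valid and one invalid row.
raw = json.dumps([
    {"id": "ytc_example1", "responsibility": "none",
     "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
    {"id": "ytc_example2", "responsibility": "alien",
     "reasoning": "unclear", "policy": "none", "emotion": "fear"},
])
print(len(validate_coded_rows(raw)))  # 1
```

Dropping (or flagging) rows that fall outside the codebook keeps malformed or hallucinated labels from silently entering the coded dataset.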