Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Computation does not equal cogitation. No matter how quickly AI does math and ho…" (ytc_Ugyi82_I6…)
- "It's sharing this so it gets more views f**k AI I might try to redraw that just …" (ytc_Ugwl5qHeB…)
- "Analyse the black box recordings for god's sake, raw sensor data, actuator comma…" (ytc_Ugwo3xZi5…)
- "Wheres the little robot middle finger its supposed to flip when it gets cut off?…" (ytc_UgyZG75TO…)
- "AI could literally make it so they have unlimited power... Nicola Tesla found ou…" (ytr_UgxitxKRO…)
- "Why did Goggle abruptly shut down its most advanced quantum computing project (S…" (ytc_UgxFoSnaA…)
- "To me AI is just a cool trick it's not magic it's not the future just a product …" (ytc_UgzFePHab…)
- "What you said, does not make sense. I can explain how AI works, it is not comple…" (ytc_Ugzq9EgFs…)
Comment
This is a slightly modified version of the Battlestar Galactica stories, which is effectively Mormonism.
Realistically, we're not even close to General AI; it’s likely to take at least 50 years, but almost certainly decades longer than that.
In the 1970s, Dr. Christopher Evans made a load of predictions for the year 2000. None of them came true at the time, BUT they have become true now. He was 15–25 years too early. If we consider 2025 as the year the last of his predictions came true, it shows his timing was simply off.
It is unlikely that General AI is possible, but personally, I wouldn't bet money on that. Processors can only get so small, so there's a real chance that we hit a brick wall and technological progression effectively stops, or at least slows to a snail's pace. If, on the other hand, stable quantum computers do become a reality, at that point there will be an unimaginable amount of positive progression, making the future completely unpredictable.
I say "completely unpredictable" if quantum computers do become truly stable and reliable, but that isn't strictly true. We know there would be great progressions in medical science, for example.
youtube
AI Governance
2026-02-27T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy_BxWIzW48C8tOHlB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxlIwijgiYmoUYe0VF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwU9brjyXaQQB8chgp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwGCn0Yy-d3VhSR-mB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw7Rq7fChMg0dtZZFV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxFYhSLIVkY6Dlu3oh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw_PFy4UVHHuKdFu5l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxPylUV2bS3_0LUCg14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugws-9lw50vSnMNX15t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw4pNHPNMzXrF4T7Wp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
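For reference, the "Look up by comment ID" step above amounts to parsing the raw LLM response, a JSON array of coding records, and indexing it by the `id` field. A minimal sketch, assuming only the schema visible in the response above (the `index_by_id` helper and the two sample records are illustrative, not part of any tool shown here):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment,
# using the four dimensions visible in the Coding Result table above.
raw_response = """
[
  {"id": "ytc_UgwGCn0Yy-d3VhSR-mB4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugw_PFy4UVHHuKdFu5l4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
row = codes["ytc_UgwGCn0Yy-d3VhSR-mB4AaABAg"]
print(row["reasoning"], row["emotion"])  # consequentialist resignation
```

In practice the LLM output may contain malformed JSON or records with missing fields, so a production version would wrap `json.loads` in error handling and use `rec.get(...)` with a default; the sketch keeps only the happy path.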