Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgyHyDhbr…` · Bill Gates in 2007ish said AI would be at the same level of humans by 2020 and t…
- `ytc_UgwIkG9Es…` · So I’m guessing this will affect those of us who use ai art for non offensive fu…
- `ytr_Ugw0HHoay…` · That's all large language models do. They weigh up the inputs and predict the ou…
- `ytc_UgxIpF7cp…` · I’d rather have my Amazon package come one day later than have a robot truck tak…
- `ytc_UgwhvklWk…` · I'm still scared of the "ai is inevitable" comment, it just feels like i keep ac…
- `ytc_Ugw4vyFFE…` · Nobody mentioned how the department of war demanded dirty access to Claude ai!!!…
- `ytc_UgxJ9qQM7…` · If we are in simulation as he said, then it means AI already took over so it is …
- `ytr_Ugw5uHc1b…` · Well said—I was feeling very tweaked that they kept kicking the “philosophy” con…
Comment
I am currently pursuing a PhD in Mathematics and Intelligent Systems, and this is my opinion.
For decades, AI was dismissed as fundamentally incapable of real intelligence. Neural networks were called untrainable, scaling was said to be impossible, and language was believed to require human-crafted rules. These claims collapsed not because of new philosophical insights, but because compute, data, and optimization finally reached critical mass.
Real breakthroughs removed every major objection:
- AlexNet (2012) proved deep learning works at scale.
- AlphaGo (2016) defeated human intuition in Go.
- AlphaZero (2017) learned superhuman strategies from scratch, without human knowledge.
- AlphaFold (2020) solved a 50-year scientific problem in biology.
Large Language Models developed reasoning, coding, and abstraction without being explicitly taught.
Each milestone broke a claim of “impossibility.”
AGI enters the picture naturally, not magically.
AGI was never blocked by a missing theory of mind; it was blocked by insufficient scale. Modern models already show:
- Transfer across domains
- Tool use and planning
- Self-improvement via feedback
- Emergent reasoning abilities
These are proto-AGI traits, appearing gradually rather than as a sudden leap.
The historical pattern is clear:
Every time AI reached human-exclusive territory, skeptics said it didn’t count.
Then AI moved further.
AGI is not a switch—it is a continuum of increasing generality. The question is no longer if systems will become broadly intelligent, but how fast, how controlled, and who benefits.
Ironically, many who once claimed “AGI is impossible” now argue “AGI is dangerous.”
The contradiction is telling.
The future of AI will not be decided by denial, but by engineering, alignment, and governance. History already showed that disbelief is not a defense against progress.
Source: youtube · 2025-12-31T13:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugzr_6LHBE5r7K7NylB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw7wrgt_U4OXaGbO354AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
{"id":"ytc_Ugxjsx-XySkhMCvQi5V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxHU4-ZXuCfQf--2CF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw7ePNyfIC5HTbOAER4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzfUyOSRao2JXJfWQF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxJ8stMvrfIdl-ow_J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzoFJWYK_Xp538Rs9d4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwy-VqpO0EVPwIIZ9l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy2OoeXEw7wTEtSMrh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
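The raw response above is a plain JSON array, one record per comment, with four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and sanity-checked downstream; the field names come from the JSON above, but the allowed-value sets are inferred only from the values that appear in this one response and may well be incomplete:

```python
import json

# Two records copied from the raw LLM response above.
raw = '''[
  {"id":"ytc_Ugzr_6LHBE5r7K7NylB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw7wrgt_U4OXaGbO354AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"indifference"}
]'''

# Value sets inferred from this sample output alone -- likely incomplete.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

records = json.loads(raw)
for rec in records:
    # IDs in the dump use ytc_ (comments) and ytr_ (replies) prefixes.
    assert rec["id"].startswith(("ytc_", "ytr_")), rec["id"]
    for dim, allowed in ALLOWED.items():
        assert rec[dim] in allowed, (rec["id"], dim, rec[dim])

print(f"{len(records)} records validated")
```

A check like this catches the most common failure mode of coding with an LLM: the model inventing an off-schema label that would silently skew downstream counts.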