Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am currently pursuing a PhD in Mathematics and Intelligent Systems, and this is my opinion. For decades, AI was dismissed as fundamentally incapable of real intelligence. Neural networks were called untrainable, scaling was said to be impossible, and language was believed to require human-crafted rules. These claims collapsed not because of new philosophical insights, but because compute, data, and optimization finally reached critical mass. Real breakthroughs removed every major objection:

- AlexNet (2012) proved deep learning works at scale.
- AlphaGo (2016) defeated human intuition in Go.
- AlphaZero (2017) learned superhuman strategies from scratch, without human knowledge.
- AlphaFold (2020) solved a 50-year scientific problem in biology.

Large Language Models developed reasoning, coding, and abstraction without being explicitly taught. Each milestone broke a claim of “impossibility.” AGI enters the picture naturally, not magically. AGI was never blocked by a missing theory of mind; it was blocked by insufficient scale. Modern models already show:

- Transfer across domains
- Tool use and planning
- Self-improvement via feedback
- Emergent reasoning abilities

These are proto-AGI traits, appearing gradually rather than as a sudden leap. The historical pattern is clear: every time AI reached human-exclusive territory, skeptics said it didn’t count. Then AI moved further. AGI is not a switch; it is a continuum of increasing generality. The question is no longer if systems will become broadly intelligent, but how fast, how controlled, and who benefits. Ironically, many who once claimed “AGI is impossible” now argue “AGI is dangerous.” The contradiction is telling. The future of AI will not be decided by denial, but by engineering, alignment, and governance. History already showed that disbelief is not a defense against progress.
youtube 2025-12-31T13:0… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugzr_6LHBE5r7K7NylB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw7wrgt_U4OXaGbO354AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
  {"id":"ytc_Ugxjsx-XySkhMCvQi5V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxHU4-ZXuCfQf--2CF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw7ePNyfIC5HTbOAER4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzfUyOSRao2JXJfWQF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxJ8stMvrfIdl-ow_J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzoFJWYK_Xp538Rs9d4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwy-VqpO0EVPwIIZ9l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy2OoeXEw7wTEtSMrh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
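The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a batch could be parsed and tallied, using only the Python standard library; the two sample records are copied from the response above, and the `required` field set is simply the four coding dimensions plus `id` as displayed here, not a documented schema:

```python
import json
from collections import Counter

# Raw LLM response: a JSON array of per-comment codes (two sample
# records copied from the batch shown above).
raw = '''[
  {"id": "ytc_Ugzr_6LHBE5r7K7NylB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzfUyOSRao2JXJfWQF4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

codes = json.loads(raw)

# Sanity check: every record carries an id plus the four coding
# dimensions (responsibility, reasoning, policy, emotion).
required = {"id", "responsibility", "reasoning", "policy", "emotion"}
assert all(required <= set(record) for record in codes)

# Tally one dimension across the batch, e.g. emotion.
emotion_counts = Counter(record["emotion"] for record in codes)
print(emotion_counts["fear"])  # 1
```

Validating the field set before tallying catches malformed or truncated model output early, which matters when the coded values feed an aggregate table like the one above.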