Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
On the one hand, there are almost four billion years of biological evolution, a chaotic, messy, yet unbelievably robust process that has survived five mass extinctions. On the other, there are roughly seventy years of silicon-based computation, a sterile, optimized, and extremely fragile process. Roman Yampolskiy assumes that the purity and speed of silicon are an advantage that will give rise to a new god in the form of artificial general intelligence (AGI). I claim the opposite. The inaccuracy and slowness of biology are not flaws; they are adaptive strategies for survival in an unpredictable world. Biological intelligence is the embodiment of antifragility. Why should we believe that evolution would have ignored a more efficient path for billions of years, only for us to discover it within a single century? Yampolskiy is simply wrong, because his reasoning ignores billions of years of evolution, reduces the problem of consciousness to computation, and attributes childish, human-like motives to AGI. Moreover, his own stances reveal ideological biases and internal contradictions that weaken his catastrophic vision.

────

PS: The video, of course, did not lack calling people NPCs, which is, by the way, a very fitting analogy to Nietzsche's "die letzten Menschen" ("the last men"). Given the context of such people, this is hardly surprising.

PS2: Although Yampolskiy practically tells us to start putting dirt on our chests because a divine artificial intelligence will come and destroy us, he does not hesitate to recommend buying Bitcoin, because supposedly even AGI won't be able to deal with that.

PS3: The fact that he practically listed all the key points of the TESCREAL movement I won't even bother to comment on.
youtube AI Governance 2025-09-04T20:3…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxmfwpSUgxNfq0nA3J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxQAJqDDo1uRx1N2SV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwVRUvNb5DfnGEw6eJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzfsLtgWtI_-huR8zZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw1pBgmU0XVaAmv1Gh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy1f1SuBf2WgKuvVm94AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwva8nt8ggm2DYGsOJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxdLxoubdatxy1RAet4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyBLSvEL961E5HzyE14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyZw9ZsQIZ-vnlJAh54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
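The raw response is a JSON array with one object per comment, each carrying the four coding dimensions. A minimal sketch of how such a batch response can be parsed and indexed by comment id is below; the `index_codes` helper and the `DIMENSIONS` tuple are illustrative assumptions, not part of the original pipeline, and only two entries from the record above are included for brevity.

```python
import json

# Two entries copied verbatim from the raw LLM response above;
# a real run would pass the model's full JSON array.
raw = (
    '[{"id":"ytc_UgxmfwpSUgxNfq0nA3J4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"approval"},'
    '{"id":"ytc_UgyZw9ZsQIZ-vnlJAh54AaABAg","responsibility":"unclear",'
    '"reasoning":"mixed","policy":"unclear","emotion":"mixed"}]'
)

# The four coding dimensions shown in the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_response: str) -> dict:
    """Parse the model's JSON array and index the codings by comment id.

    Missing dimensions fall back to "unclear" (an assumed default).
    """
    entries = json.loads(raw_response)
    return {
        e["id"]: {d: e.get(d, "unclear") for d in DIMENSIONS}
        for e in entries
    }

codes = index_codes(raw)
print(codes["ytc_UgyZw9ZsQIZ-vnlJAh54AaABAg"])
# {'responsibility': 'unclear', 'reasoning': 'mixed', 'policy': 'unclear', 'emotion': 'mixed'}
```

Note that the coding shown in the table above ("unclear" / "mixed" / "unclear" / "mixed") corresponds to the last entry of the array, whose id can be looked up directly once the response is indexed this way.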