Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
How interesting... Clearly, something important is happening, akin to the Industrial Revolution. And it was obvious that it would begin. AI is young, and many of your arguments against it are merely arguments for the worthlessness of modern AI relative to its future versions. I worked with many now-primitive models, before any good ones even emerged. And back then, there were those who said this would never happen. And what about now? I can practically sense the junction of eras. We need to be prepared for it to become better than humans in most skills. The only question is when, in 20 years or 200. Modern AI learns from human skills, having no understanding of the real world or the purpose of buttons... The overall goal is to teach it to learn like humans. Not just from other people's skills, but from everything else that exists in this world. And even if it works... it won't be possible to say that "AI takes away the idea from art without adding anything of its own." People are accustomed to thinking in absolute terms, either "this" or "that." The reality is a gradient. The debate about whether using artists' images to train AI is theft is nothing more than an attempt to put a full stop on this gradient. And given our biases... obviously, everyone will put a full stop where they personally want, based on their feelings on the matter. And I'm just as trapped in this thinking... knowing this, I'm left putting my full stops in the center. It's funny to watch AI opponents screaming about the harm of data poisoning, just as it's funny to watch artists screaming that their art is being stolen. Both only spur progress simply by discussing the topic. That's wonderful.
Source: YouTube — "Viral AI Reaction" — 2025-11-03T22:3… — ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_Ugyf58Q9gjUJ-dAdIn14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_UgyGp1o_z91XO5YWNUJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgzEYW_fnjcTFLZDCAV4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
 {"id":"ytc_UgzdJTHedddw1Iq53KV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgxLl-WmHRUIKAso__F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwFxJrx77lk3w9woV94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxImMONisZqh7CBzbx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
 {"id":"ytc_UgzcVVSOFhzOPNjimGp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_Ugx-zcW2BvBIWVdVNVp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwJZHJ4DTei1KN1VJB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}]
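The "unclear" values in the coding-result table suggest the pipeline could not match this comment's codes in the raw response (for example, when the array is malformed or the comment's id is absent). A minimal sketch of a tolerant extraction step, assuming a per-id lookup with an "unclear" fallback — the function name and fallback behavior are illustrative, not the pipeline's actual implementation:

```python
import json

# Dimensions coded for each comment, mirroring the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def extract_codes(raw: str, comment_id: str) -> dict:
    """Parse a raw LLM JSON-array response and return the codes for one comment.

    Falls back to 'unclear' on every dimension when the response cannot be
    parsed or the id is not present (hypothetical fallback, matching the
    'unclear' rows in the coding-result table).
    """
    fallback = {dim: "unclear" for dim in DIMENSIONS}
    cleaned = raw.strip()
    # Tolerate a stray trailing ')' where the closing ']' belongs,
    # a defect that appears in some raw model outputs.
    if cleaned.startswith("[") and cleaned.endswith(")"):
        cleaned = cleaned[:-1] + "]"
    try:
        records = json.loads(cleaned)
    except json.JSONDecodeError:
        return fallback
    for record in records:
        if isinstance(record, dict) and record.get("id") == comment_id:
            return {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
    return fallback
```

With the id of the comment shown on this page, a matched record yields its four codes; any parse failure or missing id yields all-"unclear", which is what the table above records.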