Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the problem with AI is half-solved if we make models ethical. The best gen AI would be one created by the artist themselves: when the programming of that machine becomes part of the artistic process, the bot becomes an artwork in and of itself. That would be the best AI art. It would be, perhaps, even more admirable than "pick up a pencil" art, because it would require not just an understanding of art but an understanding of programming. Under that tier would be art made with ethically trained AI. If that TikTok series had been made with an AI that was ethically sourced, I don't see much problem with it. It's not far different from using royalty-free music for a project: you cut costs somewhere you don't want to waste time, so you can focus your efforts elsewhere without letting the quality drop too low. And the good thing about that second tier is that if a person cares enough about what they are making, they will always prefer a human artist, because they too will want the weight and value that come with human-made work. They will want something specific that cannot quite be replicated by a fairly simplistic algorithm. The same way a game-jam demo, when it turns into a full game, replaces royalty-free music with original compositions, a piece of art would seek to replace most of those AI-made placeholders with something human-made. You, as an artist, feel pride when you take the long, difficult road with your project. But you do that only with projects you truly care about. With other projects, you don't really mind some sloppiness around the edges.
youtube Viral AI Reaction 2025-08-30T01:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzuYBjvCXns3HgepSB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyzV52a4RO43OkMZ_F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy-Ot3S56XCsYQCXN54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwmhhHJfRixsdqxXX94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyo-oESmZakJfotIah4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgwNO4il1El9iDSHy_Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxCzWXxqt77Eo0jy8p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgziwfRF8F0UH1JuWiJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzKN8aDNrwpRQ4D0rB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzxeG1vMjHykSfyPa94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
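The coding result shown above is a single record pulled out of this batch response by its comment id. As a minimal sketch of that lookup, the following parses a raw response of this shape and indexes the coded dimensions by id; the field names match the JSON in this dump, while `index_codes` and the truncated `RAW` sample are hypothetical names introduced here for illustration:

```python
import json

# A one-record sample in the same shape as the raw LLM response above
# (assumed shape: a JSON array of objects with "id" plus four coding dimensions).
RAW = (
    '[{"id":"ytc_UgzxeG1vMjHykSfyPa94AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]'
)

def index_codes(raw: str) -> dict:
    """Map each comment id to its coded dimensions (id stripped from the value)."""
    records = json.loads(raw)
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"} for rec in records}

codes = index_codes(RAW)
print(codes["ytc_UgzxeG1vMjHykSfyPa94AaABAg"]["policy"])  # regulate
```

Looking up the last record's id this way reproduces the Coding Result table for this comment (responsibility=developer, reasoning=consequentialist, policy=regulate, emotion=approval).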