Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One argument I don't see made very often is "and then what?". Let's say you replace artists with AI and that it somehow does a perfect job. Then what? It needed the entirety of human art, writing, animation, whatever to train itself and you've now created a situation where no new art is created that is not the result of that. No new art, just rearrangements of the old art. Follow this for long enough and the AI model kills itself eventually. In theory it should be able to create infinite combinations, sure, but in practice I'd give it two Marvel phases before people get tired of any particular thing. If your purpose as a developer of these tools is to replace the resource that makes your tool possible to begin with, I am sorry to say, you are an absolute moron. You're shooting yourself in the foot. And sure, long term viability never killed short term plans that were too profitable, but I don't understand what the plan is here other than "try to make money now, hope the problem doesn't become obvious too soon for us to cash in on it". Other than that I think you guys nailed it when you said it's the process, not the end product, that is important. That's what's valuable and the cynicism that leads us to believe only the end result is the point needs to be stopped. Journey before destination, right?
youtube 2025-06-28T21:2… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwPTP2jk9VHuZ9MwuZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "sadness"},
  {"id": "ytc_Ugzpw9dNAXUBoaHeHf54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyycDf76w3K5i1kkxN4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwUmdo7MLr5pOpKHSh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzQeKl0ZWbBJ23MjV14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwiu7V3E1ofhO0QajB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwgsD5rWCD11M819tl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxAeW3_iOFM7sjxlPh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz6AZ_ozRnMll8UrWR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwNLDPgHN1WHIowZzB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
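A raw response like the array above can be turned back into per-comment codes with a short script. The following is a minimal sketch in Python, not the pipeline's actual code: it assumes the model returns exactly this JSON shape, and it abbreviates the array to two of the records shown (the ids and dimension values are copied from the response above).

```python
import json

# Raw model output: a JSON array of per-comment codes, abbreviated here
# to two of the records from the response above.
raw_response = """
[
  {"id": "ytc_Ugzpw9dNAXUBoaHeHf54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzQeKl0ZWbBJ23MjV14AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

# Index the codes by comment id so one comment's coding can be looked up
# directly, as in the "Coding Result" block above.
codes = {row["id"]: row for row in json.loads(raw_response)}

code = codes["ytc_Ugzpw9dNAXUBoaHeHf54AaABAg"]
print(code["reasoning"], code["emotion"])  # consequentialist fear
```

In practice the lookup would be wrapped in validation (checking that each dimension takes one of the expected labels) before the codes are written back to the record.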