Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
because this is relatively new, I want to put my perspective on this, since I don't see people yapping about it often enough. these people making ai, and pushing ai, are trying to end in what I'm going to call 'digital inbreeding', whether intentional or not. AI get all of their sources from human minds, human intentions, human creativity. AI and 'AI culture' as I will call if for the sake of simplicity, is self destructive, in that it consumes, creates, indoctrinates, and then loses things to consume, until everything people makes is horrible, fake slop from an AI that cannot learn without human intervention. And you can't say that it can be prevented, because humans are flawed, and something is guaranteed to get screwed up, and the whole advertisement of AI is to REDUCE productivity, and by circumstance prevention and on my previous topic (an AI that cannot learn without human intervention) there have been studies/experiments done that showcase that AI learning from AI reduces the quality of the original material until it is unrecognizable. It's bad. everyone who sees this, if I worded something poorly, please correct me, and if I said something that's been proven wrong (this is a comment, not a professional document, I'm not about to spend the next hour researching) please correct me. this is all, of course, assuming anyone will see this
Source: YouTube · "Viral AI Reaction" · 2024-10-23T23:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugzxj_8fiym0RKduOHJ4AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgyU0CfNRxKa7moV6gV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",           "emotion": "disapproval"},
  {"id": "ytc_UgyprSaaaXriJqPZIVx4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgxKffCCAJnxAajFqSF4AaABAg", "responsibility": "developer", "reasoning": "mixed",            "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgyrD4M-Hsz98YF6ryN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgwDyzUmPImuYISDuUZ4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_Ugz-RY5NVyA3qdIwNKJ4AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgwK5uebkeTJA_eVjFt4AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgykIryUzTSebtlQi4V4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_UgwVm9BWirju30i6FC94AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",          "emotion": "approval"}
]
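The coding table above (developer / mixed / unclear / fear) corresponds to the entry with id ytc_UgxKffCCAJnxAajFqSF4AaABAg in the raw response. A minimal sketch of how such an entry can be extracted and sanity-checked against the coding scheme is shown below; the SCHEMA sets and the extract_coding helper are hypothetical, and the allowed values are inferred only from what appears on this page (the real codebook may define more categories).

```python
import json

# Allowed values per coding dimension -- assumed from the values visible on
# this page, not from an authoritative codebook.
SCHEMA = {
    "responsibility": {"user", "developer", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "ban", "industry_self", "liability", "unclear"},
    "emotion": {"outrage", "disapproval", "indifference", "fear",
                "resignation", "mixed", "approval"},
}

def extract_coding(raw_response: str, comment_id: str) -> dict:
    """Parse a raw LLM response (a JSON array of codings) and return the
    entry for one comment id, validating each dimension against SCHEMA."""
    for entry in json.loads(raw_response):
        if entry.get("id") != comment_id:
            continue
        for dim, allowed in SCHEMA.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim}={entry.get(dim)!r}")
        return entry
    raise ValueError(f"{comment_id}: not found in response")

# One entry from the raw response above, used as a self-contained example.
raw = ('[{"id":"ytc_UgxKffCCAJnxAajFqSF4AaABAg","responsibility":"developer",'
       '"reasoning":"mixed","policy":"unclear","emotion":"fear"}]')
coding = extract_coding(raw, "ytc_UgxKffCCAJnxAajFqSF4AaABAg")
print(coding["responsibility"], coding["emotion"])  # prints: developer fear
```

Validating against a fixed schema at parse time is what makes a "Coded Result" table trustworthy: any value the model invents outside the codebook fails loudly instead of silently entering the dataset.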