Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think to call it classism or ableism is kind of nuts, but I feel like for people using AI in an editorial sense (not to generate content, to just point out errors or inconsistencies) and are maybe not at the point where they can feel comfortable showing their work to other humans yet, it shouldn't be a huge deal. That's certainly where I am, though I'm not even sure I'm going to submit what I'm writing; I just want to get it to a point where I can finish something and be happy with it on my own. I have really bad anxiety and I'm terrified that even constructive and valid criticism from a real live person would completely take the wind out of my sails and I'll wind up with yet another unfinished project. In that sense having something that will let me know when I'm re-using certain words a lot or I have a continuity error (things I struggle with) is helpful with keeping motivation. I don't see what that has to do with accessibility though. So why they didn't just take that track instead of saying people are discriminating against poor or disabled (!) people is insane. Especially when a lot of these AI tools, while cheaper than hiring a real live editor, are not exactly cheap either. Point is, while they're right that maybe some people might have issues getting picked up in a traditional sense, all of that crap comes AFTER submitting a book; it has NOTHING to do with writing it, which is what NaNoWriMo is supposed to have been about. So yeah, it's kind of nonsensical.
youtube 2024-09-05T11:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          industry_self
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzVXKZpl1Y8ETO6Mtt4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgzAl3Xognlou-zFQCF4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_Ugyrjk5k4twvnLj5O0h4AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgwM1YQB0s01Rctxarh4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_UgzOyrzkUno2YYGvpXd4AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgwAD95GpS1dZKpd7Il4AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "liability",     "emotion": "disappointment"},
  {"id": "ytc_Ugw4CxdFGUTiX5zyR2Z4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgwfoHgiFbK5KgbrP654AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_Ugzp6rLslCUpjp5b_k94AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgwOsU5r0xomYauV95R4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"}
]
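The raw response is a JSON array of per-comment codes, so the coding result shown above can be recovered by parsing the array and looking up the comment's id. A minimal sketch (the variable names and the lookup helper are illustrative, not part of the tool; the array below is abbreviated to the one entry that matches the comment on this page):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, abbreviated here
# to the entry for the comment shown above.
raw_response = """
[
  {"id": "ytc_Ugyrjk5k4twvnLj5O0h4AaABAg",
   "responsibility": "user", "reasoning": "virtue",
   "policy": "industry_self", "emotion": "resignation"}
]
"""

records = json.loads(raw_response)

# Index the records by comment id so any coded comment can be looked up.
by_id = {r["id"]: r for r in records}

code = by_id["ytc_Ugyrjk5k4twvnLj5O0h4AaABAg"]
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
# → user virtue industry_self resignation
```

This is also where a validation step would naturally go: checking that every id in the batch appears exactly once in the response, and that each dimension's value is drawn from the expected code set, before the result is stored.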