Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Several years ago, a friend of mine showed me what ChatGPT can do with writing, because I was not familiar with it. For his example, he asked it to write a short story about Santa going on vacation. It seemed pretty decent, actually. But then. He asked me to give him a name of an author whose style I would recognize. Of course I said Brandon, and when prompted the story was rewritten in his voice. I was both impressed and terrified at how much it actually DID sound like Sanderson. I decided right there and then that I whole-heartedly disagree with AI use in any kind of writing. I don't want it writing my emails for me, I don't want it to autocorrect my sentences, and I definitely don't want it writing my stories! To be fair, I didn't read the whole story. If I had, it probably would've given itself away as an AI story. As a writer, I honestly don't get the point of AI stories. I mean, I write because *I* love to write. I love the brainstorming and the storytelling and the process in general! Why would I want a computer to take that away from me? Maybe if I needed a ghostwriter it would make sense. But the ethics of it are still too gray and I stay away from the writing aspect of it. I use it on very rare occasion for concept art. More out of curiosity than any real need, without intention of using it other than to continue sparking my imagination. AI may have its place, but I personally don't feel it belongs in the world of writing.
youtube 2025-09-08T23:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugwk6vNf5Hw4WGB_dPh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzjIRXDWb_x9QBK3Qd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz5SdrJ8aIZmPbyEOV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyZY9saTmVytQJ0cXZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_Ugyx5F399x0lQtcu9xB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyN7HX-s1-_NMvvpKx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwB_2xnNO4_RFJ4zVh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwWdegnod86O_D4ZXB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyORmfxo2qi49xDMrR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugytv9a5xSC_QzjzhJt4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"ban","emotion":"outrage"}
]
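The raw response is plain JSON, so the coded values in the table above can be cross-checked against it programmatically. A minimal Python sketch (the record excerpt is copied from the response above; the matching logic is illustrative, not part of the coding tool):

```python
import json

# Excerpt of the raw LLM response (two of the ten records, copied verbatim)
raw = '''[
  {"id":"ytc_Ugwk6vNf5Hw4WGB_dPh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyN7HX-s1-_NMvvpKx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''

records = json.loads(raw)

# Find the record whose codes match the Coding Result table
# (Responsibility: none, Reasoning: unclear, Policy: none, Emotion: approval)
match = [r for r in records
         if (r["responsibility"], r["reasoning"], r["policy"], r["emotion"])
         == ("none", "unclear", "none", "approval")]

print(match[0]["id"])  # the comment id carrying those codes
```

Within this excerpt the code combination is unique, so the lookup returns exactly one id; on the full response the same filter would surface every comment coded with those four values.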