Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I had an issue lately where I was being frustrated by my inability to write more than a page of fiction. I want to write a novel so badly, but I never seem to get from point A to point B, I just get stuck in between. I saw there were tools that can "help" authors by writing some or all of the novels, and the "author" goes through and cleans it up. Curious, I threw in an idea for a story that I knew I'd never write. What came back was okay, I guess. It wasn't something I'd be interested in but hey, someone would read it, right? As I thought about it more, I realized that the story in my head and the story the AI made were very different. The "AHA!" moment come when I asked myself, "Who's story do I want to tell? If I'm not telling my stories, why am I trying to write novels?" I wish I had a nice way to end this by saying "and so I spent my next week writing non-stop until my first draft was completed! Then, I submitted it to my editor, they loved it, no changes, and it was published right away!" Sadly, life doesn't work like that. I'm still struggling to find my voice, but I'm watching lectures, listening to interviews of other authors, and just talking with the few who give me time out of their schedule. Things that are easy are rarely worth the time to do them.
Source: youtube · Viral AI Reaction · 2024-11-03T16:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzUx3kBnFqtmtEdyf54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzaBeJ3mQ2e6WFXx2Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxZgsWz7tKM9u5-Jv94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgylzlVxJDvC0b3P7Dt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwG5Eydj6HCbkcb8x94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzRW6Ast5xOgNqc-9h4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugxc_k4IjCoo_F_qRu54AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugyui0pheb1KMNojwZJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyJoQMy57CRoTws0fB4AaABAg", "responsibility": "user", "reasoning": "contractualist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzp7XrNacbAkEG_WZx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
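The raw response is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of parsing and validating such a response (the allowed vocabularies below are inferred only from the values visible in this record; the full codebook may define more categories, and the `parse_codes` helper is hypothetical, not part of any pipeline shown here):

```python
import json

# Allowed values per dimension, inferred from the codes visible above
# (assumption: the real codebook may contain additional categories).
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "approval", "outrage", "resignation", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only entries whose
    value on every dimension is in the allowed vocabulary."""
    entries = json.loads(raw)
    return [
        entry for entry in entries
        if all(entry.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Hypothetical example: one well-formed entry and one with an invalid value.
raw = (
    '[{"id":"ytc_abc","responsibility":"user","reasoning":"mixed",'
    '"policy":"none","emotion":"mixed"},'
    '{"id":"ytc_bad","responsibility":"robot","reasoning":"unclear",'
    '"policy":"none","emotion":"approval"}]'
)
codes = parse_codes(raw)
# only the first entry survives validation
```

Validating against a closed vocabulary like this catches the common failure mode where the model invents a category label outside the codebook.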