Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Disabled Writer here. I've done plenty of experimental work with regards to using AI as a tool to potentially help me keep track of my own world building so I can stay consistent. However, I have to say that AI is really, really bad at this. His eyes are green, but suddenly several chapters later they're blue, and oh, look over here, now they're amber. It gets the names of my characters wrong, it can't remember the name of the literal BBEG, it acts as if certain characters know things about my world that they logically wouldn't, it forgets about certain characters' disabilities, it's a disaster. An absolute mess. And pretty much every single one of my characters end up sounding the exact same. Same mannerisms, same voice, same everything. Characters get locked into philosophical discussions when they should know there are time constraints. The protagonist has to explain the plan fifteen times and every single other character still gets confused. It's like, I would have to spend a large amount of time editing and refining the garbage that AI has spit out before I would finally be marginally happy with it. The only good thing that's come of it is that now I have a pretty good idea of how to spot AI writing. So it's not a complete and total waste of time, but it definitely doesn't make it easier to make good quality stories.
Source: youtube, "Viral AI Reaction", 2025-04-23T15:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwSkV7hI-cuAxv1PUZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz-4-s-2m9u38I2E3p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgygAxAl46FZXO8m_nF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwMIGoX_mypO4-YfK94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzufvCc_wjaw_1BbP54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxIfmJ7D_UBgbJYHx14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx3bbGGbg6m7i9FeAh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwuyapYmHSSEHkOKQp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxj-F2iOzFstJwbxpt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwkqqhOhU4r8QxF-g14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
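The raw response above codes a batch of comments at once; the coding result shown for this comment is just the array entry whose id matches. A minimal sketch of that lookup, assuming standard JSON parsing (the function name code_for_comment and the two-entry sample are illustrative, not the pipeline's actual API):

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array with one
# coding object per comment, keyed by the comment's id.
RAW_RESPONSE = """[
  {"id": "ytc_UgwSkV7hI-cuAxv1PUZ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxj-F2iOzFstJwbxpt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]"""

def code_for_comment(raw: str, comment_id: str) -> dict:
    """Return the coding dict for one comment id, or raise KeyError."""
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            return entry
    raise KeyError(comment_id)

result = code_for_comment(RAW_RESPONSE, "ytc_Ugxj-F2iOzFstJwbxpt4AaABAg")
print(result["responsibility"], result["emotion"])  # ai_itself resignation
```

The entry for ytc_Ugxj-F2iOzFstJwbxpt4AaABAg is what populates the Coding Result table above (responsibility ai_itself, reasoning consequentialist, policy none, emotion resignation).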