Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I tried using AI to write a story with just a small plot line. It was bad. To be effective I have to write an overall plot, then a chapter-wise plot, then a scene-wise breakdown, at which point I could just go one more level and write it myself. Then I ask it to generate scene by scene, and for every generation I have to upload the files to keep it in memory and context for generating what's next. What I found useful is checking grammatical errors, suggestions for alternate word constructs, and so on. Even then, to make some parts come out as I wanted, I have to suggest a change for it to implement. A simple thousand-word chapter took 4 hours. If you want users to read very, very substandard stories you can use AI, but for a good story you still need to write it yourself. I did this whole experiment for a fanfiction where the original story was already fleshed out. If you are a discovery writer you are going to self-delete yourself with AI, as it will give you only slop. AI works if you use it for some language-level editing. But even then, if you want professional-grade editing you have to approach a person. I don't think AI will ever develop to that level with the current level of technology we have. Same for other fields like coding or anything else.
youtube 2025-06-25T19:2… ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgzbHIdTD6-KI-NniYN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwYJTDUSDzphhbwdph4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugw3RUB16jefQPgbDVd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgzJqDgokwdDZnzEwtB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugxn0Y9tppUd0M3iyGx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugw7nQ-6xlqWze2eGm54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzcpSpbVJpPXKxBVfZ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_UgyaSUiPjYIPigOe3o54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"indifference"},
 {"id":"ytc_Ugz7QSGSuaDq7ArotC14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugym0_-zy8IihgbmzAN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}]
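A minimal sketch of how a raw response like the one above could be parsed and checked before the codes are applied to comments. The allowed values per dimension are inferred only from codes visible in raw responses on this page, so treat them as assumptions rather than the actual codebook:

```python
import json

# Allowed values per dimension, inferred from codes seen in raw LLM
# responses above; the real codebook may include values not listed here.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company",
                       "government", "developer", "distributed"},
    "reasoning": {"mixed", "consequentialist", "deontological",
                  "contractualist", "virtue", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "resignation", "fear", "outrage",
                "mixed", "approval"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response into coded records, rejecting any record
    with a missing or out-of-vocabulary value."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {value!r}")
    return records

raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"}]')
records = parse_raw_response(raw)
print(records[0]["emotion"])  # indifference
```

A check like this would catch the kind of malformed output a model can produce (e.g. a stray closing parenthesis instead of `]` breaks `json.loads` outright), which is one reason to inspect the exact raw response per comment.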