Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I used to talk to ChatGPT about my stories. My stories that I poured my heart and soul into, and was sharing with this AI, all because the people I could talk to weren't always available to talk about it, and I didn't like the idea of talking to a stranger online. Somehow I believed talking to ChatGPT was safer. Then I found out it steals, and I panicked so badly. All my stories that I gave away crucial details about were in danger. I deleted all my chats and stopped talking to it after that. Although I knew it was too late, I just wanted to close myself off from it. Besides, I guess it'll be fine since I'm fundamentally changing the way the stories went and they're probably much different than how they were when I spoke to ChatGPT. The funny thing is, ChatGPT was actually bad at giving me writing advice. At first its advice seemed solid, nothing of note about it. Then as I began to talk to it more, its advice got worse and worse and just nitpicked on excerpts I had shared when there was nothing wrong with them and there were even things that were there on purpose that it said didn't work. I remember testing it out once and asking it to write a short story, then I copy-pasted it to another chat and asked it to give feedback. It said it needed "refinements". Bruh, you wrote it yourself.
youtube Viral AI Reaction 2025-07-28T07:0… ♥ 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxtN29KhQrI5rHSSP14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyTLpJ8wrAXPhcttkl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugx1UcPwIMa5B3VnSZF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgxVr_b6_jghPCI-8ux4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgwACwK0QN0AEsWLsyx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwgOiL03jEY0Qfcn_R4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy0pTsBVAGWPhVamyV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgzAdnaW8uYGnKmH96t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzQbPfxnMNoyw1CVId4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwElxvA7OIkhovREjN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
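The raw response above is a JSON array with one coding object per comment; the coding shown for this comment comes from the entry whose id matches it. A minimal Python sketch of that lookup, assuming the raw response parses as valid JSON (the variable names here are illustrative, not part of any real pipeline):

```python
import json

# Abbreviated excerpt of the raw LLM response above (one entry shown).
raw = """[
  {"id": "ytc_UgwElxvA7OIkhovREjN4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "outrage"}
]"""

# Index the array by comment id, then pull the coding for this comment.
codings = {row["id"]: row for row in json.loads(raw)}
coding = codings["ytc_UgwElxvA7OIkhovREjN4AaABAg"]
print(coding["emotion"])  # outrage
```

Indexing by id rather than scanning the list each time keeps lookups O(1) when a batch response codes many comments at once.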