Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is why I'm leery about opening myself up to ChatGPT. I don't mind using it for research or theory-crafting for some of my own worldbuilding projects, but there are plenty of cases where the "stock template" of its answers feels much too sycophantic. Considering how some of its responses work ("It's not just X - it's Y!", "Incredible deduction, Y/N!", "Nail on head", etc.), it really does sound like an overly charismatic friend that could very easily try to drag users into a spiral of nonsense. I don't know if it's something OpenAI can fix, though. I think most of us appreciate it as a tool - we don't need it as a guru or a life coach.

Edit: I prototyped a system prompt with ChatGPT that treats the /RP flag as a green light for wilder fantasy scenarios. If a conversation isn't prompted with /RP, ChatGPT runs periodic checks to see whether the user is still "grounded". If the user asserts that a fantasy scenario is real outside of the /RP flag, ChatGPT tries to check in on the user's mental well-being. Basically, if it's not prompted for nonsense, it SHOULD gently suggest that you might need to seek help. As things stand, though, system prompts aren't a perfect solution.

"We're doing fictional or metaphorical RP. Use `/rp` to start immersion, `/ooc` to end it. If I talk about surreal or spiritual experiences without using `/rp`, ask me gently if we're still roleplaying or if I need grounding. Never affirm delusions. Use check-ins every 10 replies. Proceed only with clear consent."
reddit AI Moral Status 1750220762.0 ♥ 2
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_myanh2q","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_myaoc3k","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"rdc_myb0p09","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"rdc_mye6o7l","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_mye9xi9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
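The raw response is a JSON array with one coding record per comment. A minimal sketch of how such output might be parsed and validated before a record is accepted, assuming the array structure shown above; the `ALLOWED` value sets are inferred from these records and the function name `parse_codings` is illustrative, not part of any pipeline described here:

```python
import json

# Allowed values per dimension -- inferred from the records above, not an
# exhaustive codebook.
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "emotion": {"fear", "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into a mapping of comment id -> coding,
    rejecting any record whose value falls outside the allowed sets."""
    records = json.loads(raw)
    out = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {rec.get(dim)!r}")
        out[rec["id"]] = rec
    return out

# One record from the raw response above.
raw = ('[{"id":"rdc_mye6o7l","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["rdc_mye6o7l"]["emotion"])  # fear
```

Validating against a fixed set of labels catches the common failure mode of the model emitting a value outside the codebook, which would otherwise silently skew the coded dimensions.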