Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In my use case, I see it can grab a very good answer to the questions I pose. It is never always right and typically I need to retrain it to get to the results I'm looking for. Too often ChatGPT will hallucinate. I know it does. When it happens, I think how stupid this thing is. As much as I might want to, I never chastise it for being an asshole. I then give it too much authority in my mind over me as if I'm following it rather than it doing what I want it to. Close the window and begin a new chat. It seems to recover in a new session. It's a glorified search engine that uses natural language to communicate and that's all it will ever be. Its too stupid to be anything else
Source: reddit · Dataset: AI Moral Status · Timestamp: 1750968788.0 · Score: ♥ 2
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mzy0fmf", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzy0rs6", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzy0ypp", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_mzy1zkz", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mzy448f", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
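A minimal sketch of how the coding-result table above could be recovered from the raw response: the model returns a JSON array of coding records, and the record whose id matches the coded comment supplies the dimension values. The field names come from the JSON above; the specific id (`rdc_mzy0ypp`) is an assumption based on the record whose values match the displayed table.

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment in the batch.
raw = """[
  {"id": "rdc_mzy0fmf", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzy0rs6", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzy0ypp", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_mzy1zkz", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mzy448f", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]"""

records = json.loads(raw)

# Pick out the record for this comment (id assumed from the matching table values).
match = next(r for r in records if r["id"] == "rdc_mzy0ypp")

for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {match[dimension]}")
```

The same lookup applies to any coded comment in the batch; only the id changes.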