Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Thing is models are starting to know when they hallucinate by themselves and are fixing it these days. I see it in Gemini a lot where it says something but then adds some explainer on why that's incorrect and shouldnt have said it and autocorrects itself. Am hoping couple more iterations, it wouldnt output the incorrect stuff but fixes it before it we see it print in the first place
reddit · AI Moral Status · 1765316930.0 · ♥ 10
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_nt6usbo", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nt6njvp", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_nt6wlv2", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nt6qx0h", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nt6jk1j", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]
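A batch response like the one above can be checked programmatically. The sketch below, which assumes only that the raw output is a JSON array of objects keyed by a comment id (the `rdc_...` values shown above), parses the response and looks up the coding for one comment; it is a minimal illustration, not the pipeline's actual code.

```python
import json

# Raw LLM response: a JSON array, one object per coded comment.
# The two entries here are copied from the response shown above.
raw_response = """
[
  {"id": "rdc_nt6usbo", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nt6njvp", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
"""

# Index the codings by comment id for easy lookup.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Retrieve the coding for the comment displayed in this section.
coding = codings["rdc_nt6njvp"]
print(coding["responsibility"], coding["emotion"])  # ai_itself approval
```

This id-to-object index makes it straightforward to join each coded dimension back to the original comment record when inspecting the model's output.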