Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
This is the part they need to work on most. I don't care if new models are smarter; they're already so smart that as a layman I'm not limited by how "smart" it is. I just want them to reduce hallucinations. I've been using Gemini to plan my weekly running routine to get faster; I fed it some data after a run today and it basically said "That's a good job considering you were on tired legs after your run yesterday" and I had to remind it "Huh? I haven't run since Saturday" at which point it admitted it was thinking "yesterday" was actually last Wednesday. I've actually had *two* runs since then (not including today) and I fed it the data on both, so it was aware of them. It makes me wonder how many times it's hallucinating things that I'm not catching. That said, it's not like if I hired a human coach it couldn't have made a similar mistake, so I'm not *super* concerned, but it is something I wish they would focus on.
reddit · AI Moral Status · 1765319692.0 · ♥ 7
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_nt6usbo","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_nt6njvp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_nt6wlv2","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_nt6qx0h","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_nt6jk1j","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"} ]