Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The issue is that the quality of the response varies wildly over time. I had Claude build an import template with 17k formulas and a few weeks ago it one-shotted most of it. Yesterday I needed it to make a small correction and it literally deleted a whole sheet, said the file was "corrupted" (it wasn't), and dropped most of the lookups and formulas. Used up my whole session. The next session I tried to have it resolve the issue and it just used up the whole session debugging and didn't even produce any output. So I got ChatGPT to do it and while it's not perfect, at least I can iterate until it's fine. Claude is just so inconsistent, and charging for this kind of usage is a scam IMO.
Source: reddit · Viral AI Reaction · 1777071978.0 · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_oi3pd0a","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_oi373hm","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_oi303kd","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_oi3f56p","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_oi1806t","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
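A raw response like the one above can be turned into per-comment codes with a few lines of Python. This is a minimal sketch, assuming only the field names visible in the JSON (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the single record shown is abbreviated from the array above.

```python
import json

# Raw model output: a JSON array of coded comments.
# One record from the response above, for illustration.
raw = (
    '[{"id":"rdc_oi3pd0a","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]'
)

records = json.loads(raw)

# Index the codes by comment id so a given comment's coding
# (as displayed in the table above) can be looked up directly.
codes = {rec["id"]: rec for rec in records}

print(codes["rdc_oi3pd0a"]["emotion"])  # outrage
```

In practice the raw string would come straight from the model response field rather than being inlined, and malformed output would need a `json.JSONDecodeError` guard before indexing.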