Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Same but worse. Atleast Humans can explain/justify their assumptions. Also humans can correct their wrong assumptions - "Well I thought this was fine but now I see the error in my ways". AI kind of self corrects but not in a sticky sense - just like an RNN (which is what chain of thought uses). For all that GPT does so well, it still exhibits the same shortcomings of classic ML.
reddit AI Jobs 1754662560.0 ♥ 2
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n7ls82o", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "rdc_n7hk0i4", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "frustration"},
  {"id": "rdc_n7i0nqt", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_n7ie6q9", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_n7huqt9", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "frustration"}
]
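The raw response is a JSON array of per-comment records, so recovering the coded dimensions for any comment is a parse-and-index step. A minimal sketch, assuming the response string is valid JSON exactly as shown above (the variable and function names here are illustrative, not part of any pipeline):

```python
import json

# The raw model response, verbatim from the log above.
raw = ('[ {"id":"rdc_n7ls82o","responsibility":"ai_itself","reasoning":"consequentialist",'
      '"policy":"unclear","emotion":"resignation"},'
      ' {"id":"rdc_n7hk0i4","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"frustration"},'
      ' {"id":"rdc_n7i0nqt","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},'
      ' {"id":"rdc_n7ie6q9","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},'
      ' {"id":"rdc_n7huqt9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"frustration"} ]')

# Parse the array and index records by comment id for O(1) lookup.
records = json.loads(raw)
coded = {r["id"]: r for r in records}

# Look up the record for the comment displayed above.
print(coded["rdc_n7ls82o"]["emotion"])         # resignation
print(coded["rdc_n7ls82o"]["responsibility"])  # ai_itself
```

Indexing by `id` also makes it easy to spot responses where the model dropped or duplicated a comment: compare `coded.keys()` against the batch of ids that was sent.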