Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you want shitty code on 3.5 all you have to do is ask for any kind of code. And shitty could mean malicious so it might have had that in its background context as it tried to answer a vague prompt. I am also a dev and when I ask for code I am not vague. I am very specific, half the time something is still off but usually it's corrected in the next prompt (on gpt4) I don't even f with 3.5 personally it's a waste of time for me.
reddit · AI Responsibility · 1690573356.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          industry_self
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jtsxyha", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "rdc_jtrhqcg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "unclear",       "emotion": "outrage"},
  {"id": "rdc_jtqzeze", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",       "emotion": "mixed"},
  {"id": "rdc_jtuidy1", "responsibility": "user",      "reasoning": "virtue",           "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_jtv2jes", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "outrage"}
]
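Because the raw response batches codes for several comments in one JSON array, the coding result shown above has to be pulled out by its record id. A minimal sketch of that extraction (Python is an assumption here, not necessarily what the tool uses; the id `rdc_jtuidy1` is taken from the raw response above):

```python
import json

# Raw model output as returned above: a JSON array of per-comment codes.
raw = """[
  {"id":"rdc_jtsxyha","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_jtrhqcg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"rdc_jtqzeze","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_jtuidy1","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"rdc_jtv2jes","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]"""

# Index the batch by record id, then look up the record for this comment.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}
code = by_id["rdc_jtuidy1"]

# The extracted record should agree with the coding-result table above.
assert code["responsibility"] == "user"
assert code["reasoning"] == "virtue"
assert code["policy"] == "industry_self"
assert code["emotion"] == "approval"
```

Checking the extracted record against the displayed table is a cheap guard against id mix-ups when one model call codes a whole batch of comments.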