Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
>Then, I opened a new chat, summed up the whole characteristics of the app we came up with in the previous chat and asked it to write the code again ... it refused! gpt3-5-turbo does that from time to time. I had it write simple unity or blender script, sometimes it simply refused. Changed the wording and it gave it to me. I think they introduced some kind of "cheating in school assignment" or similar type of detector that might be causing this. GPT-4 on the other hand never failed to deliver what I asked it. It might have delivered wrong code or wrong answers, but at least it tried. Idk if that's intended difference or omission (and a thing that will be limited in gpt4 with time as well).
reddit · AI Responsibility · 1682272711.0 · ♥ 27
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jhdo4jz", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jhdpeds", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "rdc_jhdn0ce", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_jhdnmpt", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_jhf3sh6", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
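The raw response is a JSON array covering a batch of comments, so retrieving the codes for one comment means parsing the array and looking up its id. The sketch below shows this with Python's standard `json` module; the assumption that `rdc_jhf3sh6` is the id of the comment displayed above is inferred from its values matching the coded result table, not stated in the source.

```python
import json

# Raw LLM response as shown above: a JSON array of coded items.
raw = """
[ {"id":"rdc_jhdo4jz","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_jhdpeds","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"rdc_jhdn0ce","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_jhdnmpt","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"rdc_jhf3sh6","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]
"""

records = json.loads(raw)
# Index the batch by item id for O(1) lookup of any coded comment.
by_id = {r["id"]: r for r in records}

# Hypothetical lookup: the id matching the coded result shown above.
rec = by_id["rdc_jhf3sh6"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
```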