Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The latter, but it is not because it is "smart" enough; it is not smart at all, since there is no actual mind. It's a machine-learned language model, which means it has learned from extremely large amounts of human-generated text. Humans usually use caps to make a message clearer and seem more important, so it makes sense that this also applies to an LLM like ChatGPT. ChatGPT is not "programmed" in the traditional way, which is why its output is somewhat unpredictable and harder to control. That's also why "jailbreaks", sweet talk, blackmail and manipulation are possible with it. It does not "know" anything; it just produces a good imitation of knowing, thanks to a complex neural network with ungodly amounts of relations between "words" (concepts). That's why mistakes happen and people often say "it's not Google". It is also why restricting it is somewhat difficult.
reddit AI Responsibility 1706959013.0 ♥ 10
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_koprkwa","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},{"id":"rdc_koq2h6s","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_kovwbct","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},{"id":"rdc_kopvmwd","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_kor6fde","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}]
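The raw response above is a JSON array with one object per coded comment, keyed by a comment id. A minimal sketch of how such a batch response could be parsed back into a per-comment coding table (the id values and dimension names are taken from the response above; the lookup helper itself is illustrative, not part of any particular tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, as shown above.
raw = (
    '[{"id":"rdc_koprkwa","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"resignation"},'
    '{"id":"rdc_koq2h6s","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"}]'
)

codes = json.loads(raw)

# Index the batch by comment id so a single comment's row can be looked up.
by_id = {entry["id"]: entry for entry in codes}

# Print the coded dimensions for one comment, mirroring the result table.
row = by_id["rdc_koq2h6s"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {row[dimension]}")
```

Note that a batch like this can disagree across runs (here, the same comment drew "resignation" in one entry and "indifference" in another), which is one reason to inspect the exact model output rather than only the final coded value.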