Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It works better in 4 because the "context" size goes from 4k to 8k to 32k (depending on the gpt4 model). And bigger context means the input + output can be longer before it loses its mind and starts generating random stuff.
Source: reddit · AI Harm Incident · 1682973806.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jife42v", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_jifcjt2", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jifmdd4", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jih8m3y", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jifcakx", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
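A minimal sketch of how such a raw response can be checked against the coded result, assuming Python and the standard json module (the record ids and field names come from the response above; the lookup-by-id helper is illustrative, not part of any stated pipeline):

```python
import json

# Raw LLM response from the section above, as a single JSON array.
raw = (
    '[{"id":"rdc_jife42v","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"resignation"},'
    '{"id":"rdc_jifcjt2","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"},'
    '{"id":"rdc_jifmdd4","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"},'
    '{"id":"rdc_jih8m3y","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_jifcakx","responsibility":"user","reasoning":"deontological",'
    '"policy":"none","emotion":"outrage"}]'
)

records = json.loads(raw)

# Index records by comment id so one coded comment can be inspected directly.
by_id = {r["id"]: r for r in records}

# The record whose dimensions match the Coding Result table above.
rec = by_id["rdc_jih8m3y"]
print(rec["emotion"])  # → indifference
```

Indexing by id makes it easy to cross-check a single comment's coding without scanning the whole array.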