Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
While their method is apparently valid, their sample size is close to zero, and they obviously can't code. I would not consider this work as evidence for such a broad claim as is made. I always wonder why they do not publish their "sensitive" questions. I'd bet on that they're retreating to the very fact of "sensitivity" if challenged. This is *secret research*, and as such not acceptable. Not only must results be published, the experimental setup must be described in detail. Otherwise, nobody will be able to repeat the experiment. This is a real mistake that should lead to this work getting rejected by "authorities" that be, like universities. There are enough challenging questions, for example about compulsive schooling, that can easily lead these LLM's astray. They'll always answer politely and alignedly. In other words: these models cannot "think critically". Also, they obviously don't ask questions. These are key differences to human behaviour, so the developers should now focus on the question what "alignment" is, at all.
reddit · AI Harm Incident · 1689770181.0 · ♥ 5
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jska8os", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jskhqz2", "responsibility": "company",   "reasoning": "deontological",    "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jskyuig", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jsmzikt", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "approval"},
  {"id": "rdc_jsl7rw2", "responsibility": "company",   "reasoning": "deontological",    "policy": "none", "emotion": "outrage"}
]
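
The raw response is a JSON array of per-comment codings, one object per comment id. A minimal sketch of how such a response can be parsed and a single coding looked up by id (the id `rdc_jskyuig` is taken from the response above and assumed to correspond to the comment shown on this page; the dimension names match the JSON keys):

```python
import json

# Raw LLM response, copied verbatim from the output above.
raw = """[
  {"id":"rdc_jska8os","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_jskhqz2","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_jskyuig","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_jsmzikt","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_jsl7rw2","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]"""

# Index the array by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Pull the coding for one comment and print its dimensions.
coding = codings["rdc_jskyuig"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {coding[dim]}")
```

Indexing by `id` first (rather than scanning the list per lookup) is convenient when one batched LLM response covers many comments, as it does here.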