Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think they also need an algorithm the understand basic human philosophy as well. Things aren't as simple as good or bad, but I'll bet the program was written with classifications of certain words. Let's take bad words for example. I don't believe there is such a thing as a word that should not be used. I say "fuck" and "cunt" a lot, but that doesn't make them bad words, just culturally inappropriate to some. Those examples would skew the leanings of the bot.
Source: reddit · AI Harm Incident · 1503164820.0 · ♥ 4
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_dlullrk", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_dluejq2", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_dlvgbup", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_dlun7f7", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_dlucz7i", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
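Since the raw LLM response is a JSON array of coded records, a small validation pass can catch malformed or out-of-vocabulary codes before the results are stored. The sketch below is a minimal example, not part of the original pipeline; the allowed-value sets are assumptions inferred only from the codes visible on this page, not a complete codebook.

```python
import json

# Allowed values per coding dimension. NOTE: these sets are inferred
# from the values observed in this response and are likely incomplete.
SCHEMA = {
    "responsibility": {"none", "developer"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "mixed", "fear", "resignation", "outrage"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coded records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Each record must be an object with an "id" field ...
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # ... and every dimension must carry a known code.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"rdc_dlullrk","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"indifference"}]')
print([r["id"] for r in validate_records(raw)])
```

A record with a code outside the schema (for example an unseen emotion label) is silently dropped here; a production version would more likely log or flag it for manual review.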