Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It apparently ate a lot of Reddit and Twitter, and was designed to be very "agreeable". To produce sensible results, it needs feedback from humans, and of course the algorithm then evolves to produce results that the group of "trainers" want to see. It is stupid to think that such a system could in any way be "neutral" or "objective". You could, of course, try to make the input perfectly representative, but I guess that would just mean that the model would not produce any interesting output on even remotely controversial issues, rendering it useless. There will be bots with all sorts of political leanings, so we have that to look forward to / expect with horror.
reddit AI Harm Incident 1681410377.0 ♥ 54
Coding Result
Responsibility: developer
Reasoning: deontological
Policy: none
Emotion: outrage
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_jg4arj0","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_jg7rbs0","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_jg51mip","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_jg697g7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_jg4kb7k","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
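A raw response like the one above can be parsed and sanity-checked before its codes are stored. The sketch below is a minimal illustration in Python: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself, but the full sets of allowed values are assumptions extrapolated only from the values observed here, and the function name is hypothetical.

```python
import json

# Allowed values per coding dimension -- an assumption, extrapolated
# from the codes observed in the raw response above.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"none"},
    "emotion": {"approval", "mixed", "outrage"},
}

def validate_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {value!r}")
    return records

# Example: the last record from the raw response above.
raw = ('[{"id":"rdc_jg4kb7k","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
records = validate_raw_response(raw)
print(records[0]["emotion"])  # -> outrage
```

Rejecting unknown codes at parse time catches the common failure mode where the model invents a label outside the codebook.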