Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As someone who understands how these models work I feel the need to interject and say that moralizing it is misleading - these ChatBots aren't explicitly programmed to do anything in particular, they just mould themselves to the training data (which in this case will be a vast amount of info) and then pseudo-randomly generate responses. This "AI" doesn't have intentions, manipulate, have malicious feelings, etc, it's just a kind of mimic.

The proper charge for the creators if anything is negligence, since this is obviously still horrible. I'm not sure how one might completely avoid these kinds of outcomes though, since the generated responses are so inherently stochastic - brute force approaches, like just saying "never respond to anything with these keywords", or some basic second guessing ("is the thing you just said horrible") would help but would probably not be foolproof. So as long as they are to be used at all this kind of thing will probably always be a risk.

Otherwise educating the public better would probably be useful - if people understand that these ChatBots aren't actually HAL or whatever and more like a roulette wheel they'll be a lot less likely to act on its advice.
reddit · AI Governance · posted 1762499010.0 (Unix timestamp) · ♥ 76
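The post time in the line above is stored as a raw Unix timestamp. A minimal sketch of the conversion, assuming the field is seconds since the epoch (the page leaves it unlabeled):

    from datetime import datetime, timezone

    # Post-time field from the score line above, read as seconds since
    # the Unix epoch (an assumption; the page does not label the field).
    posted = datetime.fromtimestamp(1762499010.0, tz=timezone.utc)
    print(posted.isoformat())  # 2025-11-07T07:03:30+00:00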
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_nnk1oai","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"rdc_nnk4gnk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_nnkc1t1","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}, {"id":"rdc_nnl4xt4","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_nnkxqku","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]