Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
I work for a company that produces this kind of thing. The tech is progressing extremely fast but that’s not my main concern. My main concern is quality. It’s insanely difficult as it is, if not impossible, to produce software that has no bugs. Even the best software companies cannot do that. There’s a reason we have scheduled releases and a huge backlog of bugs to fix. Now what happens when you introduce a highly complex piece of software that is designed to iterate millions of times to learn and the code in that learning algorithm isn’t perfect? Isn’t as well understood as it should be? What happens if that learning loop isn’t *quite* right and then you use it to track people for law enforcement purposes? One of the huge issues with AI is everyone and their grandma is jumping onto the bandwagon and trying to get *something* onto the market. It’s the next big thing and it’s a scramble to make sure your company name is the first on the block *at all costs*. That always equals rushed code. That’s what scares me, not AI itself. It’s the flawed humans building a flawed AI that’ll be dangerous.
Source: reddit · Topic: AI Governance · Timestamp: 1682967078.0 · ♥ 23
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          industry_self
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_jiipt6i","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_jignqqs","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_jigz7vy","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_jigreih","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"rdc_jife4cs","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
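The raw response above is a batch: one JSON object per coded comment, and the coding result shown for this comment corresponds to the object whose id matches it. A minimal sketch of how such a record could be pulled out of a raw batch response, assuming the response is a JSON array of objects each carrying an "id" plus one key per coding dimension (the function name `extract_coding` is hypothetical, not part of the pipeline):

```python
import json

def extract_coding(raw_response: str, comment_id: str) -> dict:
    """Return the coding record for one comment from a raw batch response.

    Assumes (hypothetically) the response is a JSON array of objects,
    each with an "id" field plus the coding dimensions
    (responsibility, reasoning, policy, emotion).
    """
    records = json.loads(raw_response)
    matches = [r for r in records if r.get("id") == comment_id]
    if len(matches) != 1:
        raise ValueError(
            f"expected exactly one record for {comment_id}, got {len(matches)}"
        )
    return matches[0]

# Two records from the raw response above, used as sample input.
raw = '''[
  {"id":"rdc_jigreih","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"rdc_jife4cs","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]'''

coding = extract_coding(raw, "rdc_jigreih")
print(coding["emotion"])  # → fear
```

The strict single-match check guards against duplicate or missing ids in a malformed batch, which would otherwise silently return the wrong coding.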