Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I agree with you in many ways, but my take is that the opposite is happening in terms of reactions... I see a huge number of people downplaying and dismissing what chatgpt can do because of the incorrect (BS, more precisely) responses it gives.

They are reacting to it's output at is it were supposed to be correct, as if there was any expectation that it was looking up information to give to you. It isn't a search engine; it's a language generation tool. All it is trying to do is predict what language would come next in a given context. And it isn't just parroting or cribbing existing content; it's generating new language, based on the sum total of what it's been exposed to, which is essentially the same thing that humans do when they are "creative". It's basically a much better version of the suggested words above the keyboard on an iPhone.

The fact that it can do as much as it can just as a byproduct of being trained on so much written material is remarkable. As far as I understand it, it hasn't been explicitly trained to solve physics problems, write computer code, or translate beteeen English and Chinese, and yet it can do all of that things shockingly well (but also imperfectly). It is already remarkably useful if you don't expect it to do things well that it wasn't designed to do.

Once this kind of language model gets combined with actual search capability, information databases, explicit instruction on actual skills, it is going to be much much much more useful, even if it doesn't have is own intentionally.

Most of what you say about garbage in garbage out is correct. But it's even more true of humans, and I see a lot more potential for improving algorithms than improving people unfortunately.

Edit: fixed auto”correct” errors.
Source: reddit · AI Governance · 2023-02-13 (1676251540) · ♥ 60
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_j8bc1ta", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j8awh01", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_j8b0aw1", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_j8ce8ur", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_j8b6oti", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
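The raw response is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of how such a batch response could be parsed and looked up per comment (the `index_codings` helper is hypothetical, not part of the coding pipeline; the raw string is truncated to two entries for brevity):

```python
import json

# Raw LLM response, as shown above (truncated to two entries).
raw = (
    '[{"id":"rdc_j8bc1ta","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_j8b0aw1","responsibility":"user","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"}]'
)

def index_codings(raw_response: str) -> dict:
    """Parse the model's JSON array and index the entries by comment id."""
    return {entry["id"]: entry for entry in json.loads(raw_response)}

codings = index_codings(raw)
print(codings["rdc_j8bc1ta"]["emotion"])  # -> indifference
```

The dimension/value table shown above is just the entry for this comment's id pulled out of that mapping.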