Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They will get better, but these flaws are fundamental to the methods used by LLMs. The process they use is "regenerative," which makes fixing these problems entirely impossible. What they have been doing is increasing the amount of data used for training while adding more back-end processing power to the systems so they can have more parameters and read larger contexts. This makes the data regeneration *more* accurate, but it does not change the underlying structure of how the models work. So they have to add in all sorts of error checking, but that process is regenerative too, and so it is also a point of failure. So they will get better and faster (with a scaling increase in the cost to run them), but they will never get to the point where they can be used without human intervention unless they rework the underlying framework of how LLMs work. Obviously that is probably possible; I personally do not think there are any physical laws preventing us from developing better ways of doing machine intelligence, but until it actually happens it is all speculation.
reddit AI Governance 1757777438.0 ♥ 8
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ne0hh8u", "responsibility": "none", "reasoning": "unclear",          "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_ne0arzv", "responsibility": "none", "reasoning": "consequentialist", "policy": "none",    "emotion": "resignation"},
  {"id": "rdc_ndzmkld", "responsibility": "none", "reasoning": "consequentialist", "policy": "none",    "emotion": "resignation"},
  {"id": "rdc_ndz3o1p", "responsibility": "none", "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "rdc_ne1a0sk", "responsibility": "none", "reasoning": "unclear",          "policy": "unclear", "emotion": "mixed"}
]
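The raw response is a JSON array with one object per coded comment, keyed by `id`. A minimal sketch of how the coded dimensions for a single comment could be recovered from such a response (the helper name `coding_for` is hypothetical; the ids and values are taken from the response above):

```python
import json

# A subset of the raw LLM response shown above.
raw_response = '''[
  {"id":"rdc_ne0arzv","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_ne1a0sk","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]'''

def coding_for(raw: str, comment_id: str) -> dict:
    # Parse the array and index the records by comment id.
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

coding = coding_for(raw_response, "rdc_ne0arzv")
print(coding["emotion"])  # prints: resignation
```

Looking up by `id` rather than by position keeps the result stable even if the model returns the coded comments in a different order.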