Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My bet is on small, specialized LLMs that can run locally. The big cloud models have left the worst fantasies behind (GPT-3 suggested buying an Audi A3 for towing a big trailer, LOL), but they still have problems in many areas. My go-to test is to have ChatGPT generate an image of a sailing catamaran with DALL-E, and although the result looks nice at first glance, the generated image still shows a badly functioning yacht.

Things are different when you start using local LLMs. Due to hardware restrictions, the best model I can currently run is mistral:7b, a downsized model with usable coding knowledge, language support, and overall world knowledge. I can use it to improve code and to get help when I am looking for what I need to change to achieve a specific behavior (take THAT, CSS!). It is not perfect, but so far it has been useful in my work with Angular and NestJS. This is also supported by the availability of systems with a lot of shared memory. Although it is slower than real VRAM, it makes it possible to run larger models on a MacBook or something powered by AMD's Ryzen AI series. A medium-sized LLM (20-30 billion parameters) can have very good world knowledge, and if it is optimized for a specific purpose (reading documents, helping to code, looking for abnormalities in log files), it has the potential to save a lot of time and work.

All that said, the use of LLMs can lead to catastrophic misinterpretations of the data they are given, so the models have to be tuned to give info on their thought process so you can manage and fine-tune them. Google's approach of dumbing down search and trying to shove in Gemini as the next big thing is an example of what not to do - the results are sometimes funny at best, but can be deadly. (Maybe we, as humans, have the wrong perspective of what AI should replace?)

There are a lot of areas where AI hasn't even begun to get traction, although it has the potential. Just today I was taking measurements of my trailer and scribbling it down - a well-trained LLM could
reddit AI Jobs 1754774681.0 ♥ 9
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n7s3olj", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n7s0w22", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n7vooie", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_n7u8i1p", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_n7s01vw", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
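The raw response is a JSON array with one object of codes per comment id. A minimal sketch of parsing it and matching one entry back to the coded dimensions above (the id rdc_n7u8i1p and all field values come from the response itself; the variable names are illustrative):

```python
import json

# Raw model response copied verbatim from the record above: a JSON array
# with one object of codes per comment id.
raw = (
    '[{"id":"rdc_n7s3olj","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_n7s0w22","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"},'
    '{"id":"rdc_n7vooie","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"mixed"},'
    '{"id":"rdc_n7u8i1p","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"none","emotion":"resignation"},'
    '{"id":"rdc_n7s01vw","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"resignation"}]'
)

codes = json.loads(raw)
# Index the entries by comment id so a single comment's codes can be looked up.
by_id = {entry["id"]: entry for entry in codes}

# The entry for rdc_n7u8i1p carries the same dimension values shown in the
# Coding Result table for this comment.
entry = by_id["rdc_n7u8i1p"]
print(entry["responsibility"])  # company
print(entry["emotion"])         # resignation
```

Indexing by id rather than position keeps the lookup robust if the model returns the entries in a different order than the comments were submitted.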