Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
My bet is on small, specialized LLMs that can run locally.
The big cloud stuff has left the worst fantasies behind (GPT-3 once suggested buying an Audi A3 for towing a big trailer, LOL), but it still has problems in many areas. My go-to test is to let ChatGPT generate an image of a sailing catamaran using DALL-E, and although the result looks nice at first glance, the generated image still shows a badly functioning yacht.
Things are different when you start using local LLMs. Due to hardware restrictions, the best-running LLM for me currently is mistral:7b, a downsized model with usable coding knowledge, language support and overall world knowledge. I can use it to improve code and to get help when I am looking for what to change to achieve a specific behavior (take THAT, CSS!). It is not perfect, but so far it has been useful in my work with Angular and NestJS.
This is also supported by the availability of systems with a lot of shared memory. Although it is slower than real VRAM, it makes it possible to run larger models on a MacBook or on something powered by AMD's Ryzen AI series. A medium-sized LLM (20-30 billion parameters) can have very good world knowledge, and if it is optimized for a specific purpose (reading documents, helping to code, looking for abnormalities in log files), it has the potential to save a lot of time and work.
All that said, the use of LLMs can lead to catastrophic misinterpretations of the data they are given, so the models have to be tuned to give info on their thought process so you can manage and fine-tune them. Google's move to dumb down search and shove in Gemini as the next big thing is an example of what not to do - the results are sometimes funny at best, but they can be deadly. (Maybe we, as humans, have the wrong perspective on what AI should replace?)
There are a lot of areas where AI hasn't even begun to gain traction, although it has the potential. Just today I was taking measurements of my trailer and scribbling them down - a well-trained LLM could
Source: reddit · AI Jobs · timestamp 1754774681.0 · ♥ 9
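The timestamp in the metadata above is stored as a Unix epoch (seconds since 1970, UTC). As a minimal sketch, it can be made human-readable with the standard library:

```python
from datetime import datetime, timezone

# The metadata stores the posting time as a Unix epoch in seconds.
posted = datetime.fromtimestamp(1754774681.0, tz=timezone.utc)
print(posted.isoformat())
```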
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_n7s3olj", "responsibility": "none",    "reasoning": "mixed",            "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n7s0w22", "responsibility": "none",    "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n7vooie", "responsibility": "none",    "reasoning": "unclear",          "policy": "none", "emotion": "mixed"},
  {"id": "rdc_n7u8i1p", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_n7s01vw", "responsibility": "none",    "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
```
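As a sketch of how the coding table above can be recovered from this raw response, the JSON array can be parsed and indexed by comment ID. The `lookup` helper below is hypothetical (not part of the actual pipeline), and `raw_response` is an excerpt of the output shown above:

```python
import json

# Excerpt of the raw model output shown above, kept verbatim.
raw_response = (
    '[{"id":"rdc_n7s3olj","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_n7u8i1p","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"none","emotion":"resignation"}]'
)

def lookup(raw: str, comment_id: str) -> dict:
    """Parse the raw JSON array and return the coding record for one comment."""
    records = json.loads(raw)
    by_id = {record["id"]: record for record in records}
    return by_id[comment_id]

record = lookup(raw_response, "rdc_n7u8i1p")
print(record["responsibility"], record["emotion"])  # -> company resignation
```

Indexing by `id` also makes it easy to check that every coded comment actually appears in the model output before writing results to the table.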