Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One problem with present LLMs is that they are only adaptive during training. Once trained, they are permanently fixed in that configuration. You can’t continue to train your AI yourself; you can’t change it or teach it new things.

A second is that they are single-iteration. You can tell them that your protagonist is a cross between Aragorn and Naruto and they will come up with something. But you can’t continue to build on what you got; you have to start from scratch and generate a new blend the next time. And every iteration uses a similar amount of compute, because it starts from scratch.

The third is RNG. They aren’t just statistical predictors of likely next words; they’re probabilistic random word generators. So they will come up with different answers every time. Some will be reasonable, some will be inferior, and some will be catastrophically bad. You will never get consistency or reliability.

Four, they interpolate. They deliver answers that resemble what most people on the internet might say on average. That is only useful if you really want output that is bland, cliché and blathering. (Which you might. I’ve heard that grant applications benefit from this style. Or did, until everyone started writing them with ChatGPT.) But they cannot deliver something original, because if it were original, people wouldn’t have been saying it on the internet.

They are also unbelievably, absurdly, comically inefficient. A human brain runs on about 20 W, and our language processing uses a fraction of that. Meanwhile, the GPU chips that LLMs run on burn thousands of watts per card. (Three chips per card, 72 cards per rack, dozens or hundreds of racks per chip farm. Compute is measured in MW these days.) Until language models are small and efficient enough to run on your personal device, they won’t play a useful part in AI development.

Oh, and the business models of current AI developers and providers are utterly borked, while the investments are the biggest in financial history. Best case it’s just a bubble; otherwise it’s the biggest Ponzi scheme ever done.
youtube 2025-12-27T21:2… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          regulate
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyrzCYQ_xfGOZjdETh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwNR0pu_6IxR3-PeXt4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwoSI3nf3NnnEhrfo54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx9V45DbAOfZ6mTYQB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyZJZ177NJNbJEJ23V4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxEuUPMgBbxGnYBQBB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw1Hh7h3dme67ZJcst4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzs8Dq5lZcO2ecfpFF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyECy1JrJdYgzogrut4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwYFNNcFnPBkw08JHx4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "regulate", "emotion": "indifference"}
]
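A raw response like the one above can be turned into per-comment coding records with a small parser. The sketch below is a minimal illustration, not the tool's actual code: it assumes the model always returns a JSON array of objects with exactly these five keys, and the allowed label sets are inferred from the values visible on this page, so the real codebook may permit more.

```python
import json

# Label sets inferred from the values visible on this page (assumption:
# the real codebook may allow additional labels).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist"},
    "policy": {"unclear", "none", "ban", "liability", "regulate"},
    "emotion": {"indifference", "outrage", "approval", "fear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Raises ValueError on a label outside the (assumed) codebook,
    KeyError on a missing dimension, json.JSONDecodeError on bad JSON.
    """
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
        coded[rec["id"]] = codes
    return coded

# One record from the raw response shown above.
raw = ('[{"id":"ytc_UgwYFNNcFnPBkw08JHx4AaABAg","responsibility":"developer",'
       '"reasoning":"mixed","policy":"regulate","emotion":"indifference"}]')
coded = parse_coding_response(raw)
print(coded["ytc_UgwYFNNcFnPBkw08JHx4AaABAg"]["policy"])  # regulate
```

Validating against an explicit label set catches the failure mode the comment itself complains about: a probabilistic generator occasionally emitting an off-codebook value, which should be rejected rather than silently stored.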