Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You're really an optimist. That's nice. Unfortunately, probably not a realist. (An optimist is someone who doesn't see things as tragically as they really are 😆)

AI will in the future take over the work of many people and cause unemployment to grow enormously. The fact that this isn't the case yet is no argument whatsoever that it can't be the case in the coming years.

You mention AI's tendency to give wrong answers rather than no answer at all. Yes, it is a real effect at the moment. But this has already improved dramatically, see Anthropic. The reason for it is largely understood and lies in flawed reward strategies during the training phases. That will soon be a thing of the past.

You say that AI makes too many mistakes to be trusted, and that this is why it couldn't replace people. Seriously? Then why are there so many employees who make mistakes and still don't get fired? Because making mistakes is human? 😆😆😆 Because humans make mistakes, we have tests, development processes, bureaucratic approval processes, elaborate reviews, test engineering, unit tests, integration tests, system tests, tracing, logging, quality monitoring, and so on. The fact that the coding process is error-prone has never stopped investors throughout the history of software development from hiring coders. Why should that be the case with AI coders? Sure, you need human oversight and control, but AI makes coding so much easier that you need fewer people to do the same work.

And a brief look back at history helps too: of course automated looms require human operators and technicians, but right after the invention of the power loom, 70% of weavers lost their jobs. There aren't many weavers today either; do you even know one? The profession has been virtually wiped out. Something similar applies to agriculture. Two hundred years ago, 90% of the population worked in farming. Then came agricultural machinery.
youtube AI Jobs 2026-03-22T08:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwaolCSCrbAmzmXPy14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzE5vuj-DLOECBsDzV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz_OZ-y_IxebN0dWOl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw3cdAjIfc7VJckYAJ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwX7qjzo1jevbrvHc14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz1DARepnEMK3u2lPB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgytrV3NOnPLyOb7RQ94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzgFuK7fSuKvYnKJAh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxRrFppSFBbTWXiKO94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxlVaW17VPUUpRsX4V4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
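The raw response above is a batch: one JSON object per coded comment, keyed by comment id. A minimal sketch of how such a response could be parsed and indexed to recover the dimension values shown in the table for a single comment (the function name `index_codings` and the key-validation step are illustrative assumptions, not part of the original pipeline; the sample string is truncated to two records):

```python
import json

# Raw batch response as returned by the model (shortened to two of the
# ten records for illustration).
raw = '''[
  {"id": "ytc_UgwaolCSCrbAmzmXPy14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzE5vuj-DLOECBsDzV4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

# The four coding dimensions plus the comment id.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_json: str) -> dict:
    """Parse the batch response and index records by comment id,
    rejecting records whose keys do not match the coding scheme."""
    records = json.loads(raw_json)
    for rec in records:
        if set(rec) != EXPECTED_KEYS:
            raise ValueError(f"unexpected keys in record: {sorted(rec)}")
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw)
print(codings["ytc_UgwaolCSCrbAmzmXPy14AaABAg"]["emotion"])  # fear
```

Indexing by id makes the lookup for any displayed comment a constant-time dictionary access, and the key check catches malformed model output before it reaches the coding table.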