Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Rich people will have no reason to care about you once you’ve been replaced by r…" (ytc_Ugxe6B26T…)
- "All the doomerism regarding AI is so ... I'm not sure what the appropriate word …" (rdc_je40ihi)
- "I'm not against automation. Truck driving is dangerous and it takes a health tol…" (ytc_UgwHdMENc…)
- "I'm not totally against the idea of a truck that can drive itself, I just hate t…" (ytc_UgyJ2CQSn…)
- "It would create a new system known as Technocracy. UBI, Social Stratification an…" (ytr_Ugxco6ogP…)
- "Machine learning is a mathematical algorithm where a computing device learns the…" (ytc_UgyKyw8gb…)
- "he's a fucking idiot in both regards / LLM won't kill humanity, and we won't b…" (rdc_n0gyvp0)
- "I don't understand the sudden mass reliance on AI, it's like people suddenly for…" (ytc_Ugxaqyu7E…)
Comment
You're really an optimist. That's nice. Unfortunately, probably not a realist. (An optimist is someone who doesn't see things as tragically as they really are 😆)
AI will in the future take over the work of many people and cause unemployment to grow enormously. The fact that this isn't the case yet is no argument whatsoever that it can't be the case in the coming years.
You say that AI tends to give wrong answers rather than no answer at all. Yes, that is a real effect, at the moment. But it has already improved dramatically; see Anthropic. The reason is largely understood by now and lies in flawed reward strategies during the training phases. That will soon be a thing of the past.
You say that AI makes too many mistakes to be trusted. That's why it couldn't replace people. Seriously? Then why are there so many employees who make mistakes and still don't get fired? Because making mistakes is human? 😆😆😆
Because humans make mistakes, we have tests, development processes, bureaucratic approval processes, elaborate reviews, test engineering, unit tests, integration tests, system tests, tracing, logging, quality monitoring, and so on. The fact that the coding process is error-prone has never stopped investors throughout the history of software development from hiring coders. Why should that be the case with AI coders?
Sure, you need human oversight and control, but AI makes coding so much easier that you need fewer people to do the same work. And a brief look back at history helps too: of course automated looms require human operators and technicians, but right after the invention of the power loom, 70% of weavers lost their jobs. There aren't many weavers today either — do you even know one? The profession has been virtually wiped out. Something similar applies to agriculture. Two hundred years ago, 90% of the population worked in farming. Then came agricultural machinery.
Platform: youtube
Video: AI Jobs
Published: 2026-03-22T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgwaolCSCrbAmzmXPy14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzE5vuj-DLOECBsDzV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz_OZ-y_IxebN0dWOl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw3cdAjIfc7VJckYAJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwX7qjzo1jevbrvHc14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz1DARepnEMK3u2lPB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgytrV3NOnPLyOb7RQ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzgFuK7fSuKvYnKJAh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxRrFppSFBbTWXiKO94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxlVaW17VPUUpRsX4V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}]
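The lookup-by-comment-ID step above can be sketched in a few lines: parse the raw model output (a JSON array of coded comments) and index it by `id`. This is a minimal sketch, assuming the field names shown in the raw response (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); `index_by_id` is a hypothetical helper, not part of the tool itself.

```python
import json

# Raw LLM response: a JSON array with one coded object per comment ID.
# Two rows from the response above, reproduced here for illustration.
raw_response = '''[
  {"id": "ytc_UgwaolCSCrbAmzmXPy14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzE5vuj-DLOECBsDzV4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse the raw model output and index each coded comment by its ID."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_by_id(raw_response)
row = codes["ytc_UgwaolCSCrbAmzmXPy14AaABAg"]
print(row["emotion"])  # fear
```

The indexed form makes the table shown under "Coding Result" a straight dictionary lookup on the comment's ID.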