Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Get enough self driving cars on the road (as replacements to human driven cars) …" (`rdc_cpnipbi`)
- "There is exactly one form of using AI to get unstuck when writing that I can acc…" (`ytc_Ugx2b6ZFB…`)
- "Who says autonomous vehicles are inherently safer? Only the people pushing and s…" (`ytc_UgwYk0QqM…`)
- "Yeah, I always was for working WITH AI. But if you kick me out and there is no w…" (`ytc_Ugwj-TN2S…`)
- "This guy has some valid points, sounds like. I've watched Sophia demos, and not…" (`ytc_Ugwsfz58M…`)
- "kids wearing a cast but can hold himself up with the hurt hand. They didnt thin…" (`ytc_Ugyp17MGE…`)
- "IF GOVERNMENT USES AI ROBOTS FOR WAR, YES THAT WILL BE THE END OF US I BELIEVE.…" (`ytr_Ugw7fWb9A…`)
- "Also… every single advancement Scott references is true. But those took decades.…" (`ytr_UgyhR3E5r…`)
Comment
One thing that current systems lack, and that might be needed to truly automate white-collar work, is continual learning: learning new things that weren't there in the pre-training or reinforcement-learning phase. The models are static.
Sure, you can have some kind of knowledge base and update it via your agentic scaffolding, then have the LLM read parts of it, but this is much less like learning and more like reading through notes that someone else has written down, which is more limited.
But why do we need continual learning? Well, the world can have randomness that you can't account for at training time. For example, when Claude played Pokémon, it struggled to find its way through relatively simple caves because it couldn't learn the layout (the way a human makes a mental map of it), so it would keep going in circles. I think a lot of jobs are like that in a more abstract way. A large codebase, for example, is a kind of abstract space that you build a mental map of over time, and the whole process of how things are handled in a company is like that too.
So while AI will of course make us do sub-tasks faster, I think it will still require a more flexible human mind in the loop.
youtube · AI Jobs · 2026-02-24T15:5… · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxlAyaeyzywBfGW78R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgysL2W6nOgnDlCxQwR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugybr59IUXJC-KwimG94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw-XLDZjc6mQtMwk7h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw7-W4sgH64d0bno354AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxgMXrCqcIvF_6iBFN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzLa69DYQQGj8yLkDR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwDxGuUMqYesUx3tk54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzEVmaYxOIo5Gpip3V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyAJVJSypPpyAinoBJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
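The raw response above is a JSON array of per-comment codings. A minimal sketch of how such a response could be parsed and validated follows; the field names come from the JSON above, but the sets of allowed values are inferred only from the values visible in this response (the actual codebook may define more), and the example ID is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the values observed in the
# raw response above. This is an assumption: the real codebook may allow more.
ALLOWED = {
    "responsibility": {"none", "user", "company"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"approval", "indifference", "outrage", "fear", "mixed", "resignation"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: dimensions},
    keeping only rows whose every dimension value is in the allowed set."""
    coded = {}
    for row in json.loads(raw):
        dims = {dim: row.get(dim) for dim in ALLOWED}
        if all(dims[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[row["id"]] = dims
    return coded

# Hypothetical single-row response in the same shape as the one above.
raw = ('[{"id":"ytc_example1","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"indifference"}]')
coded = parse_coding_response(raw)
```

Filtering rather than raising on unknown values keeps one malformed row from discarding the whole batch, which matters when a single response codes ten comments at once.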