Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Would this be legal in public can my robot carry open and concealed since it’s a…" (ytc_UgzpLCrAc…)
- "So - Youtube is dead now? Surely I am not the only one looking for an AI free vi…" (ytc_Ugx1lUV-A…)
- "What is machine learning - \"when machine starts learning\" ..the kind of answer I…" (ytc_UgzBdqrHc…)
- "I guess it's a very good thing there's so much hypocrisy in this world then? Tha…" (ytc_Ugx93d9w8…)
- "@russell-gt1dy You are indeed, since pretty much every other first-world nation…" (ytr_Ugy8EOgAO…)
- "Luckily this is just fantasy. In reality most companies who on-boarded AI early …" (ytc_UgwvlDl2J…)
- "Not clear. Who takes the blame ? Who pays fir the losses in a autonomous vehicle…" (ytc_Ugz5zBzRG…)
- "So the AI act like selfish humans. I wonder why the hell they do that......... S…" (ytc_UgxfUObXz…)
Comment
Current AI models have been trained on all the code available. There is no additional corpus of untapped code. They’ve hoovered up the entirety of GitHub, StackOverflow, and whatever proprietary code bases can be accessed. From here on out improvements will be refinements of the tooling. The underlying models are about as good as they’re going to get.
One possibility is that companies like anthropic will attempt to purchase fresh training code from large companies like IBM, Oracle, and so on. And then it could turn out that they’ll compete on who has the best training data.
Or perhaps they’ll compete on specialization. “We have the best model for aerospace.” “We have the best model for telecommunications.”
But in general, the limiting factor is training data. And we’ve used up all the data that’s easy to get to.
youtube
2025-03-17T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxPmUHal5u-tlFKibx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyAKHIWQtG8k6IwV-l4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxXPrY4NXzN9pjoxXB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxjoXASi04KB6sG3nJ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzzrWG-b_xOvYCKM2d4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwkjqtKCpDEgUiF2G14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxBzljUULJM5IDE9qN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyTGJOa8bC3H-pYkFl4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyoCOnY9I6Hpdn9Exx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwOwdeL8AlGZ3mY8EV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
```
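The raw LLM response is a JSON array with one object per coded comment, using the same dimension names as the table above. A minimal sketch of how such a response could be indexed to support the "Look up by comment ID" view — `index_by_id` is an illustrative helper, not part of the actual tool:

```python
import json

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coding records)
    and key each record by its comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

# Two records excerpted verbatim from the response shown above.
raw = (
    '[{"id":"ytc_UgxPmUHal5u-tlFKibx4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"none","emotion":"resignation"},'
    '{"id":"ytc_UgyAKHIWQtG8k6IwV-l4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
)
coded = index_by_id(raw)
print(coded["ytc_UgxPmUHal5u-tlFKibx4AaABAg"]["emotion"])  # resignation
```

Looking up a record by its ID then yields exactly the dimension values displayed in the "Coding Result" table.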