Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Current AI models have been trained on all the code available. There is no additional corpus of untapped code. They’ve hoovered up the entirety of GitHub, StackOverflow, and whatever proprietary code bases can be accessed. From here on out improvements will be refinements of the tooling. The underlying models are about as good as they’re going to get. One possibility is that companies like anthropic will attempt to purchase fresh training code from large companies like IBM, Oracle, and so on. And then it could turn out that they’ll compete on who has the best training data. Or perhaps they’ll compete on specialization. “We have the best model for aerospace.” “We have the best model for telecommunications.” But in general, the limiting factor is training data. And we’ve used up all the data that’s easy to get to.
youtube 2025-03-17T19:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxPmUHal5u-tlFKibx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyAKHIWQtG8k6IwV-l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxXPrY4NXzN9pjoxXB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxjoXASi04KB6sG3nJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzzrWG-b_xOvYCKM2d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwkjqtKCpDEgUiF2G14AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxBzljUULJM5IDE9qN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyTGJOa8bC3H-pYkFl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyoCOnY9I6Hpdn9Exx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwOwdeL8AlGZ3mY8EV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
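The raw response is a JSON array with one coding record per comment id, each carrying the same four dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of parsing such a response and tallying emotions, assuming the output is valid JSON; the `raw` string here holds only the first two records excerpted from the response above, not the full stored output:

```python
import json
from collections import Counter

# Two records excerpted from the raw LLM response above; in practice `raw`
# would hold the full stored model output for the batch of comments.
raw = (
    '[{"id":"ytc_UgxPmUHal5u-tlFKibx4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"none","emotion":"resignation"},'
    '{"id":"ytc_UgyAKHIWQtG8k6IwV-l4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
)

records = json.loads(raw)

# Sanity-check that every record carries the four coding dimensions plus an id.
for r in records:
    assert {"id", "responsibility", "reasoning", "policy", "emotion"} <= r.keys()

# Tally one dimension across the batch, e.g. emotion.
emotion_counts = Counter(r["emotion"] for r in records)
print(emotion_counts)
```

Keeping the raw string alongside the parsed records makes it possible to re-audit any coding decision against the exact model output, which is the point of this view.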