Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is not how LLMs work. It's more that they understand the grammar of a language: the output is always grammatically correct, and it puts together code where the grammar makes perfect sense. But it does not understand any of its meaning, so it makes errors because of that, not because it was trained on shitty code. I think you get the wrong idea of how LLMs work if you simplify it like this, and with all due respect, it is wrong.
youtube AI Jobs 2024-09-29T18:2…
Coding Result
Responsibility: none
Reasoning: mixed
Policy: none
Emotion: mixed
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
 {"id":"ytc_UgyDNWESLyGVBGa8J8B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxUFms8pSH7kwM4Ocx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwjA0WZ-ot5dzuHQpl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugys2ctb8jRMeP4kzUx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwmaR-u3NQJROfxuDl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugx4FSCyhO5RnVhYjch4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgznVU3YGjFvkdzxOUd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgyVer4j0AXTVl66U554AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugw6M1cdw522sGL-jgV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwxAaFpOjK8GO_XrHJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
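A minimal sketch of how a raw response like the one above could be parsed and validated before use. The allowed code sets below are inferred only from the values visible in this report, not from the project's actual codebook, and the single-record JSON string is a trimmed stand-in for the full response.

```python
import json

# Trimmed stand-in for the raw LLM response: a JSON array of per-comment codes.
raw = '[{"id":"ytc_UgxUFms8pSH7kwM4Ocx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}]'

# Allowed values per dimension (assumed from the codes seen in this report).
DIMENSIONS = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "outrage", "mixed"},
}

def parse_codes(raw_json: str) -> dict:
    """Index coded records by comment id, dropping any record whose
    values fall outside the known code sets."""
    records = {}
    for rec in json.loads(raw_json):
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            records[rec["id"]] = rec
    return records

codes = parse_codes(raw)
print(codes["ytc_UgxUFms8pSH7kwM4Ocx4AaABAg"]["reasoning"])  # mixed
```

Validating against a fixed code set catches the common failure mode where the model invents a label outside the coding scheme.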