Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The biggest problem is not even comparing to human education level. The problem is that humans pass those tests at a university as a proxy for how they would approach a real problem out in the world. People don't work at a place where they have to solve test problems. Tests are part of a process of going from not knowing stuff to somewhat knowing stuff (and later, in real life, to actually starting to know that stuff when thinking about how to solve a non-test kind of problem at work). If you train the model on tests so that it passes the tests, it doesn't learn how to solve non-test problems. There may be applications of these models where, just due to the sheer massiveness of the datasets, they can pull out things you couldn't guess on your own, but a PhD is supposed to be able to solve a problem that didn't come up before (well, ideally). That's not what these models do. Saying that the tool can answer some tests is sort of like saying that a high schooler with a good database of test problems and a search engine can answer a lot of those problems just by finding a matching one and copy-pasting it. Well, sort of (not exactly), because it can actually try to guess reworded problems; but on the other hand, if a high schooler did that, he'd have a chance to learn from those problems. An LLM would not learn from these problems.
youtube 2026-03-05T14:5… ♥ 14
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyMBdqKmnPcyuWaxoh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxKXCyAjZM9_izFrYV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxC_IpbY5uxN2LF67l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz5Mh-IFFvXRXQy4NN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyZi-XN-VpqNA3OhDZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyyqSqB1yI9d85v2DN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwCvjXcfn7OEQSsil14AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwKNgosfTsIXiCirxh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzOtWH8qL51ZVJbsCJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx8Wx-iOte8HhmOdj14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
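A minimal sketch of how the coded dimensions shown above could be derived from the raw model response, assuming the batch JSON parses cleanly and each comment's codes are looked up by its `id` field. The function name `codes_for` is hypothetical, and the single-entry response below is a trimmed stand-in for the full batch, using one id that actually appears in the response:

```python
import json

# Trimmed stand-in for the raw LLM response (one entry from the batch above).
raw_response = '''
[
  {"id":"ytc_Ugz5Mh-IFFvXRXQy4NN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
'''

def codes_for(comment_id, raw):
    # Parse the batch of coded comments returned by the model.
    batch = json.loads(raw)
    # Index entries by comment id so one comment's codes can be looked up.
    by_id = {entry["id"]: entry for entry in batch}
    entry = by_id[comment_id]
    # Keep only the coding dimensions, dropping the id itself.
    return {k: v for k, v in entry.items() if k != "id"}

print(codes_for("ytc_Ugz5Mh-IFFvXRXQy4NN4AaABAg", raw_response))
```

Under these assumptions, the lookup yields the same four dimension/value pairs displayed in the Coding Result table for this comment.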