Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There's been interesting work replacing MLPs with KANs and getting the same performance with 10x+ less parameters. So you can run 10 agents in parallel for the same computation as older models. Or run one agent with a fraction of the latency. If American companies can figure out how Chinese models fixed context window scaling, those two enhancements will make today's AI look foolishly bad.
youtube AI Jobs 2026-03-22T16:4…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgygH_cpjGesOEnXVcJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzKo6ZlOPjDYnxS7b14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxIo1whsSrYrb_shUx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwT6IQheQxqLbX2e3B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy17BdUhqayUxql2-94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyAKZUr0rx193pIDah4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwT2o_zWGBAC4Av0b54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyKhHo9wC5CQV3dg9p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzUDfozkEAjmNbOVvx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyW8hy3YXvMsJZW9ER4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
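A minimal sketch of how a batch response like the one above can be turned into per-comment coding records. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw output itself; the parsing code is an assumption for illustration, not the tool's actual implementation, and only a two-record subset of the response is used here.

```python
import json

# Subset of the raw LLM response shown above: a JSON array where each
# object codes one YouTube comment on four dimensions.
raw = """[
  {"id":"ytc_UgygH_cpjGesOEnXVcJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyW8hy3YXvMsJZW9ER4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]"""

# Build a lookup from comment id to its coded dimensions, so a single
# comment's coding (as in the table above) can be retrieved by id.
codings = {row["id"]: row for row in json.loads(raw)}

record = codings["ytc_UgyW8hy3YXvMsJZW9ER4AaABAg"]
print(record["emotion"])  # approval
```

The id-keyed dictionary makes it easy to join the model's codings back to the original comments for display, as this page does for one comment at a time.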