Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Local LLMs are in a weird place. Because you can run a smallish one on an ordinary computer, or a medium sized one on a very expensive machine (10s of thousands). But to run something like Kimi-K2-Thinking at a realistic speed you need a DGX B200 which costs around half a million. So you're not going to save any money by running an LLM locally. You only need it if you want to be totally private. And for most people the only use-case there is doing illegal stuff.
youtube AI Jobs 2025-11-07T15:5…
Coding Result
Dimension      Value
Responsibility none
Reasoning      consequentialist
Policy         none
Emotion        resignation
Coded at       2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzbBgl66s1DaQB-3el4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyLk3Q6wYUeO6sA8MJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxmYl1n53PDbFKEQGt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyit2PDXD-4N9XALh54AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyCtDtYBK6slxaLMp14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwnSt7hbkECg_W_T5Z4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxlgdluK5pGYWs8cYB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyOO9W0K3N2yEvYLCF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzGfiMUfgtQms4lVqN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxZzo4NAjwWU-r576d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
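The raw response is a JSON array with one object per comment id, and the Coding Result table above is just the entry whose id matches this comment (ytc_UgyOO9W0K3N2yEvYLCF4AaABAg). A minimal sketch of that lookup, assuming only the JSON shape shown above (the excerpt below embeds two of the ten entries verbatim):

```python
import json

# Excerpt of the raw LLM response above: two of the ten coded entries.
raw = """[
  {"id":"ytc_UgzbBgl66s1DaQB-3el4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyOO9W0K3N2yEvYLCF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""

records = json.loads(raw)

# Index the array by comment id so a single comment's codes can be looked up.
by_id = {r["id"]: r for r in records}

# The entry for this page's comment reproduces the Coding Result table.
codes = by_id["ytc_UgyOO9W0K3N2yEvYLCF4AaABAg"]
print(codes["responsibility"], codes["reasoning"], codes["policy"], codes["emotion"])
# → none consequentialist none resignation
```

The same dict-by-id pattern works for the full ten-entry array; any id present in the response but absent from the comment set (or vice versa) would surface as a `KeyError`, which is a cheap consistency check when batch-coding.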