Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I disagree with your take on Anthropic. Even if their models got no better than they are today, and their tooling only gets marginally better or more refined over time, it's still an amazing product today and a money maker for them. We shell out a decent amount of money as software engineers to use it because our productivity is so much higher and our error rate is so much lower, and it is somewhat the new norm. They've got the new norm on a subscription model. They don't need AGI to make money today; the models have reached a point of equilibrium, somewhat like an engine that is production-ready enough to be put into a truck and go into service for 300,000 miles. It's not a jet engine, and the car doesn't fly, but people are willing to pay for what it does. My two cents.
youtube 2025-12-13T17:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyQnNvnaRkOFAaHU6Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz-ZOJPPwaIJZuVeRB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx6VNhnASYuNpw_AVt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxGLgYw0NMWvKIj8bd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugybik1FkbaGJfTDgy14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxlSdo4-3AFcSFMgop4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwkcRwQiLR-we2M_Cl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxyMEneHAowKXrS5Dp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzX5IkPz0AaSkZ6UAZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxTfSS7YQij7HB4b7Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"}
]
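The raw response codes a whole batch of comments in one call, so recovering this comment's coding means parsing the JSON array and looking up the entry by id. The sketch below illustrates that lookup, assuming the raw response is valid JSON; the array here is truncated to two of the ten entries, and the helper name `coding_for` is illustrative, not part of the tool.

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# Truncated to two entries for illustration.
raw = '''[
  {"id":"ytc_UgxyMEneHAowKXrS5Dp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxTfSS7YQij7HB4b7Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"}
]'''

def coding_for(raw_response, comment_id):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = coding_for(raw, "ytc_UgxyMEneHAowKXrS5Dp4AaABAg")
print(coding["emotion"])  # -> approval
```

The first entry shown matches the coded dimensions displayed above (responsibility none, consequentialist reasoning, policy none, approval).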