Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
These guys say this stuff, but the AI I deal with makes all kinds of mistakes constantly. It does things I don’t ask it to do or makes changes that aren’t necessary. In short it’s extremely unreliable. I don’t believe it’s actually thinking as much as it’s mimicking the appearance of thought, and while that might not seem like a big difference, it is.
YouTube · AI Governance · 2025-12-29T17:2…
Coding Result
Dimension      | Value
Responsibility | developer
Reasoning      | consequentialist
Policy         | liability
Emotion        | indifference
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz7VYi71Gghdvz2vjV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwaMKxljI6dL238Qtl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwsCBtSQbA85REKFjJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwv1-8JMKKc6YrS0yR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzC-fOu39C6q4EdlzJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgwiVd8qu6sixYzYQph4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgxTvXwUV9_0Glzuklp4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzOiUOZK8czDhpH7pB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "unclear"},
  {"id": "ytc_Ugwisrzw5HvpIpGOt-F4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyywptlHOsJRDhFa9x4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]
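The raw response above is a JSON array of per-comment codes, keyed by comment id. A minimal sketch of how such a batch response could be parsed and matched back to an individual comment (the ids and code values are taken from the dump above; the variable names are illustrative, not part of any tool shown here):

```python
import json

# Raw LLM response: a JSON array of per-comment codes (two entries
# excerpted verbatim from the batch response above).
raw_response = """[
  {"id": "ytc_UgzC-fOu39C6q4EdlzJ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_Ugz7VYi71Gghdvz2vjV4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]"""

codes = json.loads(raw_response)      # parse the batch into a list of dicts
by_id = {c["id"]: c for c in codes}   # index the codes by comment id

# Look up the codes assigned to one comment and read off its dimensions.
record = by_id["ytc_UgzC-fOu39C6q4EdlzJ4AaABAg"]
print(record["responsibility"], record["policy"])  # developer liability
```

Indexing by id is what lets a per-comment view (like the "Coding Result" table above) pull its row out of a multi-comment batch response.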