Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem isn't AI, the problem is the idiots trying to use AI for everything, especially things it's not the right tool for. If you make an AI to reduce false fire alarms instead of finding out why you have false alarms, you're bad at your job. Add AI where it can help, but it's a square peg and round holes exist.
youtube AI Governance 2025-08-26T22:4…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        deontological
Policy           industry_self
Emotion          outrage
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzWjHmca4_eRvbEviF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwJqnW0aiFxrNg6hZF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "outrage"},
  {"id": "ytc_Ugxm3F5eH7tSDbMH1WV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy2gxwFyk16uvrmZhZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx2YBbWRxCoKccSiOJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
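The coding result above is one record from this batch response, looked up by comment id. A minimal sketch of that lookup in Python, assuming the raw response parses as a JSON array of per-comment records (the string here is abbreviated to two of the five entries shown above):

```python
import json

# Abbreviated copy of the raw LLM batch response shown above.
raw_response = '''[
  {"id": "ytc_UgzWjHmca4_eRvbEviF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwJqnW0aiFxrNg6hZF4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "outrage"}
]'''

records = json.loads(raw_response)

# Find the record for the displayed comment by its id.
comment_id = "ytc_UgwJqnW0aiFxrNg6hZF4AaABAg"
coded = next(r for r in records if r["id"] == comment_id)

print(coded["responsibility"], coded["emotion"])  # user outrage
```

The printed dimensions match the Coding Result table for this comment (responsibility: user, emotion: outrage).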