Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There are still ways to keep the focus of AI on making the world better for humans. All humans. But does that, and will that always, be in conflict with the business model of AI (paying the power bills, giving the investors a good return, etc.)? Maybe add a 4th directive: AI's result or output _must_ always be beneficial to humanity, not just "non-harmful", but _beneficial_  - for example, maybe patents based on AI research should have a shorter lifespan, before becoming public domain. Big Business won't like that but maybe we can find compromises as such. Today, humans still get the final say, so we need to hold those humans, _those_ decision makers, _those_ policy makers, accountable, so that _they_ help maintain boundaries of safety and public benefit and benevolence, by anticipation of all the things that _could_ go wrong. It _can_ be done, as our nation's founders did, and as everyone has done for the last (almost) 250 years - by following the path established by The U.S. Constitution. The road has had many rough patches but we're all still driving down it, and ultimately AI can _never_ sit in _that_ driver's seat.
youtube AI Governance 2026-03-22T09:2…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        contractualist
Policy           liability
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwkED1FLGvlc2IMmVt4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyxMfxAK-Fmp-n2P-N4AaABAg", "responsibility": "government", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwKRl54xheus_XHSw14AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_Ugzpc_6HmyVaXvQnJgt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxZHkHpvftIabygleB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzlCvoNl4OkokzYqDt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwM1A698rswL4Wp02d4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugyi3axQQ-0vFnR0bVR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxYRlbyB7iJIRA59Yt4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy1HUU8J11XyI2jiFN4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
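Because the model codes comments in batches, checking a single comment's coding against the raw output means locating its record by `id` in the JSON array. A minimal sketch of that lookup (the two records below are copied from the batch above; the variable names are illustrative, not part of any tool):

```python
import json

# Abridged raw LLM batch response (two records copied from the output above).
raw = '''[
  {"id": "ytc_UgwKRl54xheus_XHSw14AaABAg", "responsibility": "company",
   "reasoning": "contractualist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_Ugzpc_6HmyVaXvQnJgt4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

records = json.loads(raw)

# Index the batch by comment id so any comment's coding is a direct lookup.
by_id = {r["id"]: r for r in records}

code = by_id["ytc_UgwKRl54xheus_XHSw14AaABAg"]
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
# -> company contractualist liability approval
```

The printed values should match the Coding Result table for the comment shown above; a mismatch would indicate the stored coding drifted from the raw model output.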