Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A concept that stands out to me about AI safety is that triad, and deliberately limiting AI. There should be many models good at one specific task, and none good at everything. Think of Waymo's autonomous driving, and Gemini's LLM, and Veo's for video generation. Keep the AI from being general AI. Decentralization and compartmentalization is an already tested security concept in other areas for years in Department of Defense top secret programs. If one person is compromised only that portion of the secret is leaked and the rest needed is not.

Plus, the corporate motivation is that makes for more models to which customers would need to subscribe, instead of bundling in essentially a lower revenue stream. It should be easy to pitch the concept to AI companies as beneficial to them, even though it goes a long way to safety alignment.

Even more importantly, companies need to consider the unintended but inevitable consequences - displacing employees in pursuit of ever more profits is only going to lead to bankruptcy and a total collapse of the economy. Every corporation needs people to spend money, but if just about no one can earn money then no one can spend money. Now the corporations killed themselves because they start to see severe and steady decline in revenue as no one buys from them any more. There needs to be a massive and significant shift in investor attitudes and corporate culture if our economy is to survive. Ever increasing share prices are the old way to make money but it also chops down all of the forest until there are no more resources to exploit. We are rapidly approaching the exhaustion of capital resources - the earning and spending of money by the masses.
Source: youtube · Viral AI Reaction · 2025-12-11T09:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
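
The four dimensions in this table are the fields the model fills in for every comment. A minimal sketch of the record shape as Python types follows; the value lists in the comments are only the codes observed in the raw response below, not necessarily the full codebook.

```python
from typing import TypedDict

class CodedComment(TypedDict):
    """One coding record, as returned by the LLM for a single comment."""
    id: str              # comment id, e.g. "ytc_..."
    responsibility: str  # observed: government, developer, company, ai_itself, user, none
    reasoning: str       # observed: consequentialist, deontological, contractualist, unclear
    policy: str          # observed: regulate, liability, industry_self, none, unclear
    emotion: str         # observed: approval, fear, outrage, indifference, mixed, resignation
```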
Raw LLM Response
[ {"id":"ytc_UgxbC9SQ9hkHnkIGDnR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgzJcyiWM_KKF76Hn5d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwF2o-LfOjgT9LKrvl4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugwer9R_TOWll-YpA394AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwVdIESTD_EfD9EXEt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugww3hIRp7Qk5s7L8zV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxWcnuRm9cilZZJmaN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx5QrITrHf-7Dq3Xap4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwViHJkCk1eSRc8RHp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugy_GfEz-prTuWnbVr94AaABAg","responsibility":"user","reasoning":"contractualist","policy":"industry_self","emotion":"approval"} ]