Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Effective regulation of AI and AGI can only take place at a Global or World level. That much is simple logic. However, today, in our world, there is no Global or World Governance! The US led and Western based “International Rules Based Order” is simply not a World Governance System, is totally antidemocratic, it is totally against the “Rule of Law” and was designed by its creators specifically to PREVENT an effective, just and democratic Global Governance System. A new, just and democratic Global Governance System MUST have the democratic participation of all Nations of the world. Since we cannot effectively regulate AI and AGI, along with all other real existential threats to humanity, without effective, just and democratic Global Governance, then logic dictates that the first goal for humans, in controlling AI or any of the other serious existential threats facing humanity, must be: creating a just and democratic Global Governance System. In short, Global Governance is the Sine Qua Non of solving any of the serious Global problems facing humanity.
youtube AI Responsibility 2026-01-15T18:2…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        contractualist
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugx2MWVJgiLmu3TbsIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwWMSlcdWpK8H8ndp54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy6P3FMsCkkkNQXBuF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz7e5eK_nUj4xVDrBJ4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy0xXqLTVuv0R-fxiR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyeICJabngzF8RCF214AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzc6Hv9a4yY6pzafJd4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwgxTuxr5GshzUOltp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyZqbe2lHcEq8QElIZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy0425joqmE211nPSZ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
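A raw response like the above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal, hypothetical helper (not part of this tool's actual pipeline); the allowed value sets are inferred only from the labels visible in this output, not from an official codebook.

```python
import json

# Value sets inferred from the codings shown above (assumption, not a codebook).
ALLOWED = {
    "responsibility": {"government", "company", "developer", "user",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM coding response (JSON array) into {comment_id: coding},
    rejecting any value outside the expected label sets."""
    codings = {}
    for item in json.loads(raw):
        cid = item["id"]
        for dim, allowed in ALLOWED.items():
            if item.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected value for {dim}: {item.get(dim)!r}")
        codings[cid] = {dim: item[dim] for dim in ALLOWED}
    return codings

# Example: the last coding from the response above.
raw = ('[{"id":"ytc_Ugy0425joqmE211nPSZ4AaABAg","responsibility":"government",'
       '"reasoning":"contractualist","policy":"regulate","emotion":"approval"}]')
result = parse_codings(raw)
```

Looking up `result["ytc_Ugy0425joqmE211nPSZ4AaABAg"]` then recovers the coding shown in the table for this comment.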