Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Thank you for this enlightening conversation! A couple of thoughts come to mind: 1) it's interesting how none of the possible scenarios actually describe an AI that helps ("saves") humanity from its own faults, pains and mistakes. I think this potential should be further explored in conversations to provide some sort of opposition to all the negative consequences AI has/could have; 2) apart from the existential threats posed by AI taking over labour, I see an economic threat taking form already. AI tools are already being used to reduce the amount of human work at a rate that is way faster than most expected; more and more people won't be able to live or survive, making the use of AI to reduce costs a futile endeavour, since the number of consumers and/or the amount of money they can spend on the products/services the AI helps to produce efficiently will decrease. In a catastrophic evolution of this scenario, you wouldn't even need that powerful an implementation of the AI to take on power, as the general population will already be in dire economic conditions and psychologically weakened. To conclude, I can understand that AI isn't yet a massive threat globally, but it is a reality for many people like me who have seen the amount of work reduce quite drastically following the (not-always ethical or useful) implementation of AI tools. So, it may not be an urgent topic for some categories, but it's extremely urgent for others.
YouTube · AI Governance · 2025-11-27T11:1… · ♥ 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwB4HphivkiO5zOKrp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy8hylunaTYKqWFvDN4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwdy-N1tOFiQXpFDnN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwP7IDyX-8CwdCl8oh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugyyk1aP0fM8N39Npb94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz8jWaaRMQ2k27LfUt4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwqx-TXWkYif1N5MnB4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwX4vMgJrPCtsJ4yiF4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugylk8oftbe_sMUmdFJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzGF9I54V-YRIP17AZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
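The raw response is a JSON array of per-comment codings keyed by comment id; the Coding Result table above corresponds to the entry for one of those ids. A minimal Python sketch of how such a response can be parsed and a coding looked up by id (the variable names are illustrative; only two of the ten entries are reproduced here for brevity):

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# Two entries copied from the response shown above.
raw_response = """
[
  {"id": "ytc_UgwB4HphivkiO5zOKrp4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy8hylunaTYKqWFvDN4AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
"""

# Index the codings by comment id for direct lookup.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Look up the coding for a specific comment.
coding = codings["ytc_Ugy8hylunaTYKqWFvDN4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → unclear mixed unclear mixed
```

Indexing by id rather than iterating the list makes it straightforward to join each model coding back to the comment it describes, which is how the per-comment table above can be reproduced from the raw output.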