Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Honestly, I think we can't predict whats happening. I just suppose that if humans - with our limited minds - were able to develop "moral rules" and ethics, why should AI not come to similar conclusions? Yes, to a lot of people animals don't matter too much unless they are useful or disturbing in some way. But still governments around the world actually more and more try to protect nature and make rules to protect animals. Maybe AI will do that too and decide it's worth protecting humans. If not - and if the guy in the video is right and everything will probably be a horror scenario - let's just live our lives as long and good as we can. Is that so different to what we do now? We never know when our life ends, we are human, we can get ill or die at any moment (car crashes, our hearts can stop beating etc). Yes, instability and unpredictability may be growing, but we live with a certain amount of this already. So lets not overreact. If you can change sth concerning AI, pls do so! Set clear goals and ethical principles for AI, ensure clean, diverse and fair training data. Test AI on moral dilemmas and monitor behavior. Establish rules, oversight, and feedback loops! To all the others - please don't panic, don't get depressed and please, please don't let your kids get anxious. Just live your life as good as you can and be as nice and supportive as possible to your fellow humans (as well as our world) and enjoy life as much as possible.
Source: youtube · AI Governance · 2026-03-31T12:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyxVJPtRZaDCusOCuR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwRmfkOkoMgVe9c7pF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw0AUeOl0yL5pl70ux4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyWhHUOVj2Ui4HZoj94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzFQjnewf6jaDgJcOt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy96Ym3upX_amu1xOR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxqGVur0w0JPQO9W114AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxfVLjdYG6SMYh80fp4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyVPqwvY3FIkc7yniJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxbLhzW7XIVVijhSXJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
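A minimal sketch of how a raw batch response like the one above can be parsed back into per-comment codes. The JSON structure (an array of objects keyed by comment `id`, with `responsibility`, `reasoning`, `policy`, and `emotion` fields) is taken directly from the response shown; the two-entry sample and the lookup id are illustrative, not a fixed API.

```python
import json

# Raw LLM batch response: a JSON array with one object per coded comment
# (field names and id format copied from the response above; only two
# entries are reproduced here to keep the sketch short).
raw = '''[
  {"id":"ytc_UgyxVJPtRZaDCusOCuR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwRmfkOkoMgVe9c7pF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]'''

# Index the codings by comment id for fast lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# Retrieve the coding for the comment displayed on this page.
entry = codes["ytc_UgwRmfkOkoMgVe9c7pF4AaABAg"]
print(entry["reasoning"])  # consequentialist
print(entry["emotion"])    # mixed
```

Indexing by `id` makes it straightforward to join the model's batch output back to the individual comments it coded, which is what the "Coding Result" table above reflects for this single comment.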