Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@marcfruchtman9473 You are absolutely right, these laws are far from perfect. If I understood Asimov correctly, it was also more about the immediate consequences of action or inaction.

1. A robot must not (knowingly) injure a human being or, by inaction, (knowingly) allow harm to come to a human being.
2. A robot must obey orders given to it by a human being, unless such an order would conflict with rule one.
3. A robot must protect its own existence as long as such protection does not conflict with rule one or two.

Later still, the Zeroth Law was introduced, which stands even above the First Law:

0. A robot must not harm humanity or, through inaction, allow humanity to come to harm.

It is only one approach, because consistently enforcing these laws would cause all AIs to shut down over the contradiction between, for example, the Zeroth and First Laws when it comes to taking action against climate change. Overall, this shows how urgent it is to have a broad societal discussion about this issue, a global discussion. Just a year ago, this was all an abstract thought experiment, sci-fi stuff. Today, this discussion is virtually vital for the survival of mankind, or at least for the shaping of the future world. A moratorium of six months does not help at all, because the topic is much too complex. And we must not forget that we as mankind still have several other "construction sites", be it wars, be it environmental destruction, be it hunger, be it social and economic injustice, be it a million other issues. And we have to think about the children as well. Nobody thinks about the children... I will now lie down in my bed and quietly cry myself to sleep. Somehow it is exciting to experience the end of humanity as we know it and to be part of this event - but it is also kind of sad. Somehow.
YouTube · AI Governance · 2023-03-30T09:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytr_UgwvtUjccGFfPIV6nwZ4AaABAg.9nrnZpaNGkR9nsY-cBEF5A","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgwvtUjccGFfPIV6nwZ4AaABAg.9nrnZpaNGkR9nsbW4K7MBw","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgzcE7kEUWSfQQYndD94AaABAg.9nrn4m0trjv9nruHw0PaSq","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgzcE7kEUWSfQQYndD94AaABAg.9nrlIfZL3aU9nrle7iFSFV","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytr_UgzcE7kEUWSfQQYndD94AaABAg.9nrlIfZL3aU9ns6RZtoKDO","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_UgzcE7kEUWSfQQYndD94AaABAg.9nrlIfZL3aU9nt0NShWS_s","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgzcE7kEUWSfQQYndD94AaABAg.9nrlIfZL3aU9ntJ7Qv2sCu","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgzcE7kEUWSfQQYndD94AaABAg.9nrlIfZL3aU9ntlXtisU_-","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytr_Ugw6EtfFGqbU3EKNXFx4AaABAg.8ebBLFhnP-u9TQaU28JdPc","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgxyULC5OslX0G74cJx4AaABAg.8eZkIXf7xt38e_xmX9IADA","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]