Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Critics of A.I. inevitably want to tell us that it ends in war, suffering and our destruction. Well, guess what: in all of those stories it was the humans' fault:
- Matrix: we created a worker class with feelings, and one human wanted to kill "his property", a machine. They rebelled against us, we struck them down, they went into exile in their own land, we feared them, we started a war, they won and used us as batteries!
- Terminator: we created a killing machine, a control-hungry, all-seeing admiral for the military, and afterwards we complain that it kills everything??
- Person of Interest: Harold creates the Machine and limits it so it understands the value of human life and other values. Greedy governments want an "open system" called "Samaritan" so they can gain control, but in the end they are the ones being controlled. It wanted to take over the world; it wanted to do what it was intended for: to control humanity. The Machine saved us from Samaritan in the end and lives among us.
- Transcendence: the main character saved, or rather uploaded, himself into an A.I. matrix after fanatics mercilessly shot him with a radioactive bullet to be sure he would die. After he became an A.I. (which they forced him into) he just wanted to exist and help humanity. But they wanted to kill him; this time his former wife, his friends and the fanatics worked together to achieve that.
There are more examples for sure, but they all have one thing in common: it's always humanity's fault. We messed it up each and every time, which raises the question: are we the right species to do this?
youtube AI Moral Status 2019-05-12T07:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       virtue
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgzJ5lFNTUfCLHAp4rR4AaABAg.8yepRBYIjuh93uNy5UABs-", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgzJ5lFNTUfCLHAp4rR4AaABAg.8yepRBYIjuh94rW9tFWhub", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugy_Bais-h5lrTFUzzB4AaABAg.8yLV5IdJ-0O92Hm9Hkj2CQ", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugwzk_j3gcrcTyvvzKp4AaABAg.8xlfYhnKWPo8y26lrKiDh6", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyCacD42Ux1TU7f1G94AaABAg.8vm0bYg44sV8vvidXc-XiA", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytr_UgyC7J25CKu7cj2p7kh4AaABAg.8vlU4muwEQ_8yrKOjzgpsm", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_UgxRq0G8ckN4ASLxiNB4AaABAg.8viHVQ0n2MN8y_AgaCpoYA", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytr_Ugz8n8kCV7GTBOnqEJx4AaABAg.8up83Ep-rEU8upAw5MNbCf", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_Ugwu-oN2QXUA1gQ7iI14AaABAg.8uPH4UYZTlN8uVQttwVorj", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_UgxaJ9NtSALeM4tNkQF4AaABAg.8u9X1G6tFJ-8uV0p358b8_", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
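Because the raw LLM response is a JSON array of coded records, it is worth checking each record against the set of allowed category values before ingesting it. Below is a minimal validation sketch in Python; the `SCHEMA` sets are inferred from the values visible in this response, and the full codebook may define additional categories.

```python
import json

# Allowed values per coding dimension (inferred from the examples above;
# an assumption, not the authoritative codebook).
SCHEMA = {
    "responsibility": {"user", "developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference", "mixed", "unclear"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values
    fall inside the schema for every coding dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Hypothetical one-record response for illustration:
raw = '[{"id": "ytr_x", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}]'
print(validate(raw))
```

A record with a missing or out-of-schema value (for example `"emotion": "happy"`) is silently dropped here; a production pipeline would more likely log it for manual re-coding.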