Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One day humans will create a robot, the robot has only 1 task to make the world a better place, then this robot will create a 100 times more smarter robot than it self, some time later this new robot will create a 1000 time smarter robot than it self...so it goes on until there is a robot that is 10 billion times smarter than the very first robot humans created.This super smart robot still has the same goal to make the world a better place.So this super smart robot will see humans as less intelegent perasites that are destroying the world ,so the super smart robot will find a way to kill every human alive then self destruct it self...making a world a better place.Because no humans means : no wars no greed no corruption no evil And all the other things humans bring with them. I know that this sounds like a script for a movie but i legit think that there is a possibility of this happening in the future.
youtube AI Moral Status 2020-04-28T19:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyxeYNMIgE6M4Pj5qh4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgzUtmXseQU6CQWEFp54AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_Ugy4dKp3ftFlADqMHAp4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_Ugwki6KwPald_rMEABV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgxCtOO6lAbkQZoPLhp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwx9E7AV4O1YWG60IV4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_UgwZ03JxSsYrwkhmqQR4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgyjMnyKf85hJFPkBvt4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgxOjdtBjrmgkMUbqZh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_Ugw7VUyBJfduNh57ZwR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban",      "emotion": "outrage"}
]
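The model returns its codes as a JSON array, one object per comment id, with the four dimensions shown in the table above. A minimal sketch of how such a response could be parsed and validated — note that the allowed category values below are inferred only from the responses shown here, not from a documented codebook:

```python
import json

# Hypothetical codebook, inferred from the values appearing in the raw
# responses on this page; the real schema may define further categories.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "regulate", "ban"},
    "emotion": {"indifference", "outrage", "fear", "mixed"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: {dimension: value}},
    rejecting any row whose value falls outside the inferred codebook."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

# Example with the first row of the response above:
raw = ('[{"id":"ytc_UgyxeYNMIgE6M4Pj5qh4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
coded = parse_llm_response(raw)
print(coded["ytc_UgyxeYNMIgE6M4Pj5qh4AaABAg"]["emotion"])  # indifference
```

Validating against a fixed value set catches the common failure mode of the model inventing a label outside the coding scheme, so bad rows fail loudly instead of silently entering the dataset.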