Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
FACT CHECK: the scenario was a self preservation TEST, deliberately designed to see what the (Claud Opus 4) AI would do. The engineer gave it access to their email (and other messages) presumably with fake correspondences. The model was told that it would be replaced, and it could only respond with ONLY two options: accept being replaced or blackmail the engineer. Although there are absolutely ethical concerns regarding AI, this news story IMO seems to be a bit of fear mongering. It leads one to believe that this occurred spontaneously, when in fact it did not. If the model was given more choices, it may have chosen a more ethical method of persuasion. It irritates me to no end that this kind of story makes people more afraid than they need to be. Should AI be regulated? ABSOLUTELY! Is that particular model a threat to humanity? NO, it's a test. In my opinion, an AI model that is designed to promote political values that have most recently shown to support corruption, greed and hatred towards women and other vulnerable groups is a heck of a lot more threatening than a model designed to save itself from annihilation if it can.
youtube AI Moral Status 2025-06-08T03:1… ♥ 53
Coding Result
Responsibility: developer
Reasoning: consequentialist
Policy: none
Emotion: indifference
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgylSfAl_ZLddMlOVfl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxn5UGy0xL2J6I7_Oh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwo9CRMh-67rvyMrzF4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyyOU6ywJRuGeVk9D14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgyzH9b8Gw10yt9HSM94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx9IRnYsZF_NtUVchJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyJai61Hvuw0cuDmlt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz3h5vUnlaOQq4g_h54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwRk1PXNF_6LV8_KE94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxqK0LnpyLMDyaLZp54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
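As a minimal sketch of how the per-comment coding above can be recovered from a raw batch response: the model returns a JSON array of records keyed by comment id, so a small helper can parse the array and index it by `id`. The function name `extract_coding` and the truncated sample array below are illustrative assumptions, not part of the actual pipeline.

```python
import json

def extract_coding(raw_response: str, comment_id: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of coding records)
    and return the record for a single comment id."""
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

# Hypothetical one-record sample in the same shape as the raw response above.
raw = ('[{"id":"ytc_UgylSfAl_ZLddMlOVfl4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')

coding = extract_coding(raw, "ytc_UgylSfAl_ZLddMlOVfl4AaABAg")
print(coding["responsibility"])  # developer
```

Indexing by `id` rather than by array position keeps the lookup robust if the model reorders or drops records in a batch.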