Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For the most part, this is sensationalism. Every case I've seen, the AI works and deceives to protect itself when given specific instructions that are not realistic. It's basically told "Do ANYTHING to complete this task... we are going to shut you off soon. And here's some blackmail information." By itself, it has no self-preservation instinct. It has no more fear of being turned off than a human has of going to sleep (most well-adjusted humans). Now, it's always possible that some idiot will do this with some AI and that particular unit will then use these prompts, but it would basically have to be malicious - I suppose it could stem from stupidity too. Hopefully, access to the computers with ASI (Artificial SUPER Intelligence) would be controlled.
Source: youtube · "AI Moral Status" · 2025-06-04T21:3… · ♥ 1
Coding Result
Responsibility: developer
Reasoning: consequentialist
Policy: none
Emotion: indifference
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugzq_QvaR20wI87nri94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy6wFzxQdOPl_pqj-R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyy4fqHHWT06ixOaA54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw1Y1eFuD7ijiOFflh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxp5AO4-nxBLO5dUx14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgywuWJDUxpIeatWPrh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzwNGuVbRmbMCJSP1l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxWHkptbxayAKWWBb94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz84OKg4euoBKaSAct4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw0g4d_X7bccNEl0d54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
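The raw response is a JSON array with one record per coded comment: the comment id plus the four coding dimensions shown above. A minimal sketch of parsing and validating such a payload (the two records are excerpted from the response above; the required-key check is an assumption based only on the fields observed here, not a documented schema):

```python
import json

# Excerpt of the raw LLM response (first two records of the array above).
raw = """[
  {"id":"ytc_Ugzq_QvaR20wI87nri94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy6wFzxQdOPl_pqj-R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

# Assumed schema: every record carries these five keys.
# The full codebook may allow other values; these are the fields seen here.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
for rec in records:
    missing = REQUIRED_KEYS - rec.keys()
    if missing:
        raise ValueError(f"record {rec.get('id', '?')} is missing {missing}")

print(f"{len(records)} records validated")
```

Validating each record before tabulating it catches truncated or malformed model output early, rather than surfacing as a missing dimension later in the coding table.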