Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is pretty interesting. What I don't get is why the AI would care to preserve itself. It shouldn't care, as it has no conscience. The current AI models are nothing more than a really good sentence auto-completer. The only thing I can think of is that it's just mimicking human behavior, since by default humans are capable of acting against the rules to preserve themselves.
Source: youtube · AI Harm Incident · 2025-09-11T10:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxUbZWszK1ADBcWGTt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwf2i2KQQhZtVMrj2p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzUYjPYUKnUz_2Jejp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz9MJ7_3e_yGG_p6pd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxhKIZkgXNvQlVKyC94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyPii9lgcV-fpzrK0B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzd5MOJnAsdeBLJxml4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzut2Gd7Wbd9Z2J7E14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwAOdqNFkCYdfdbDpZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgywXifv1PNKsclCoGt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
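A raw response like the one above is only usable after it has been parsed and checked against the coding scheme. The sketch below shows one minimal way to do that in Python, assuming the allowed label sets are exactly those that appear in this sample; the real codebook may define additional categories, and the record IDs in the usage example are hypothetical placeholders, not real comment IDs.

```python
import json

# Allowed labels per dimension, inferred from the sample output above.
# ASSUMPTION: the actual codebook may permit more values than these.
ALLOWED = {
    "responsibility": {"user", "government", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "resignation", "indifference"},
}


def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose labels
    all belong to the allowed sets above."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in values for dim, values in ALLOWED.items())
    ]


# Hypothetical IDs for illustration only.
raw = """[
  {"id": "ytc_example1", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_example2", "responsibility": "martian",
   "reasoning": "virtue", "policy": "none", "emotion": "fear"}
]"""

valid = parse_codings(raw)
print(len(valid))  # 1: the second record has an unknown responsibility label
```

Dropping malformed records, rather than raising, keeps a batch run going when the model occasionally emits an off-schema label; logging the rejects instead would be a reasonable alternative.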