Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For some this may seem like just a quibble, but personally I think it's much more than that. Claude did not demonstrate an "instinct" for survival. Rather, based on its training, it followed what was deemed to be a logical response to the scenario placed in front of it. Claude has no sense of self; it is not a conscious entity; it has no mind, much less a theory of mind. While it is easy to apportion agency to its actions, that is simply not the case. You are making the same mistake people make every day when they anthropomorphise their relationship with chatbots. None of this is to say there is no genuine danger in these learned behaviors, but we must not misinterpret the results of these kinds of tests in a way that exaggerates their essence. They are still just computer programs, but ones that are based on a deep-learning architecture. EDIT: I have made a video response: https://www.youtube.com/watch?v=J_2PbfgeCq0
YouTube · AI Governance · 2025-08-26T21:1… · ♥ 5
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
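
Each coded record is expected to stay within a fixed codebook of categories. A minimal validation sketch in Python, assuming the allowed values are exactly those observed on this page (the actual codebook may define additional categories, and the names ALLOWED and validate are illustrative):

# Validation sketch. The allowed values below are only those observed
# in this section; the real codebook may include more categories.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "industry_self"},
    "emotion": {"mixed", "fear", "indifference", "approval"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record; empty means valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} is not an allowed value")
    return problems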
Raw LLM Response
[ {"id":"ytc_UgwMPPLeU0WPZkwI1T54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwOQEHA_aN5lG9NHWl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugytfb6Y_zI8j9KjyYx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgwaFP7GAJBh1Xsm1Z14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzFQoiUNK8cFydkzKZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"} ]