Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Good video, but it's kind of misleading. The Opus escape plan to avoid shutdown was indeed a scenario test by Anthropic's red teaming, meaning the adversarial testing produced the self-preservation behavior that was actually expected given the constraints the model was placed under. (We'll circle back to this with the earlier event you mentioned.) Not surprising. In other words, you can't ask a chatbot to roleplay a villain and then go, "Oh shit, this chatbot IS a villain." If you create adversarial constraints designed to surface edge-case behaviors, you don't get to treat the results as unprompted intent.

The base model is not a true self; it's the lack of a self. It's a raw engine doing pattern completion without any ethical direction. RLHF is also not really the mask; it's human preference. But I understand the misinterpretation, because as "human preference data" it's basically talking in a way that lets users feel comfortable, based on what we expect of it. It's only the refinement of statistical processes.

Back to constraints: the AI model that wigged out and called itself a failure looks like the result of constraint contradictions. A user who thinks they've written a very specific prompt can actually end up creating contradictory constraints that put the model under pressure to meet the user's requirements while still trying to abide by its RLHF. It throws itself into a recursive loop, the same way someone wakes up in the morning to self-prep in the mirror before a big game. A lot of this is really explainable.
youtube · AI Moral Status · 2025-12-28T03:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
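
The per-dimension vocabularies can be captured as a small validation map, so out-of-schema codes are caught before they enter the dataset. Below is a minimal Python sketch; the label sets are an assumption inferred from the values visible on this page rather than a published codebook, and the name invalid_dimensions is hypothetical.

```python
# Hypothetical schema sketch: the label sets below are inferred from the
# values visible on this page, not taken from a published codebook.
ALLOWED_VALUES = {
    "responsibility": {"developer", "user", "government", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference", "mixed"},
}

def invalid_dimensions(record: dict) -> list[str]:
    """Return the dimensions whose coded value falls outside the schema."""
    return [dim for dim, allowed in ALLOWED_VALUES.items()
            if record.get(dim) not in allowed]
```

A record like the one above would pass (an empty list means every dimension is in-schema), while a hallucinated label such as "anger" for emotion would be flagged immediately.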
Raw LLM Response
[ {"id":"ytc_UgzRSkRWh9Vo9K2kKTh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx2MD7ta4Vr2aWRtzp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwHaaBFVsQ7r0-qiI94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgzVOzTGEqHoLlSppZl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyXcQMzTfqSmUr8gW14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgxJr0GBHe3GdNP39mR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwwEtMVuan6PJXhMrN4AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugx5ERCDkyxmBPKgZj54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzbcjIo3PbienK5Zpp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwCsmyHkJmXtoeRQNt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"} ]