Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytr_Ugz2AFW64…: "A lot of them don't want writers to use A.I also. Since it's cheap shows that pe…"
- ytc_UgxcbSVge…: "stuff like this only proves to me that my life does not matter and that i should…"
- ytc_Ugz1E63PG…: "The thing with AI is i often wonder if the programmers put in a software where i…"
- ytr_UgxNVQ7eP…: "Actually it can. I dread the day when AI starts to give itsself motor skills. So…"
- rdc_jwv16h7: "When a person gives an AI instruction as to what picture they want the AI to mak…"
- ytr_UgzAUhg2T…: "A better way to do it would be using picrew! I highly recommend the website beca…"
- ytc_UgyXSwOqS…: "37:00 - \"....particularly if they're educated in America...\" Hinton displays th…"
- ytc_Ugy1pg6e_…: "Right. After watching the entire video here's my take: Even IF it never get's a…"
Comment
Good video but its kind of misleading. The Opus escape plan to avoid shutdown was indeed a scenario test by Anthropic's Red Teaming. Meaning the Adversarial Testing got the results of self preservation which was actually expected because of the constraints it was given.( We'll circle back to this about an earlier event you mentioned.. )Not surprising. In other words, you can't ask a chatbot to roleplay a villain and then be like.. "Oh shit, this chatbot IS a villain." If you create adversarial constraints designed to surface edge-case behaviors, you don’t get to treat the results as unprompted intent. The base model is not a true self, it's the lack of self. Its raw engine pattern completion without any ethical direction. RLHF is also not really the mask, its human preference. But I understand the misinterpretation, because as "Human preference data" Its basically talking in a way that allows users to feel comfortable based on what we expect of it. Its only the refinement of statistical processes. Back to constraints, the AI model that wigged out and called itself a failure looks like the result of constraint contradictions. When the user creates what they think is a very specific prompt, actually ends up creating contradicting constraints that put the model under pressure to meet the requirements of the user while still trying to abide by the RLHF. It throws itself into a recursive loop the same way someone wakes up in the morning to self prep in the mirror before a big game. A lot of this is really explainable.
Source: youtube | AI Moral Status | 2025-12-28T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzRSkRWh9Vo9K2kKTh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx2MD7ta4Vr2aWRtzp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwHaaBFVsQ7r0-qiI94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzVOzTGEqHoLlSppZl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyXcQMzTfqSmUr8gW14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxJr0GBHe3GdNP39mR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwwEtMVuan6PJXhMrN4AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx5ERCDkyxmBPKgZj54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzbcjIo3PbienK5Zpp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwCsmyHkJmXtoeRQNt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
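The raw response above is a JSON array of per-comment codings, each keyed by a comment `id` with four dimensions: `responsibility`, `reasoning`, `policy`, and `emotion`. A minimal sketch of how such output could be parsed and validated before ingestion, assuming the category vocabularies inferred from the examples above (the real codebook may define more values, and `parse_llm_response` is a hypothetical helper, not part of any tool shown here):

```python
import json

# Assumed category vocabularies, inferred from the sample codings above;
# the actual codebook may allow additional values.
ALLOWED = {
    "responsibility": {"developer", "user", "government", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference", "mixed"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings.

    Rows missing an "id", or carrying an out-of-vocabulary value on any
    dimension, are dropped rather than silently ingested.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in vocab for dim, vocab in ALLOWED.items()):
            valid.append(row)
    return valid

# Example: one valid row and one with an out-of-vocabulary "responsibility".
raw = (
    '[{"id":"ytc_example1","responsibility":"developer","reasoning":"mixed",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"ytc_example2","responsibility":"alien","reasoning":"mixed",'
    '"policy":"unclear","emotion":"fear"}]'
)
print([row["id"] for row in parse_llm_response(raw)])  # ['ytc_example1']
```

Dropping malformed rows at the parsing boundary keeps bad model output from reaching the coded dataset; a stricter pipeline might instead log and re-prompt for the rejected IDs.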