Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "LLMs are the next iteration of Google search. It is a step forwards but not worl…" (ytc_Ugx-N0B7J…)
- "I really wish the guest was impartial because he kept referring the the republic…" (ytc_UgzIhNVeP…)
- "Seeing this p*ss me off and I'm not even a digital artist shame on you, digital …" (ytc_Ugzs-y9d4…)
- "Can you remember when Apple had to pause it’s AI rollout as AI started making jo…" (ytc_UgxWklTTN…)
- "This is a prime example of AI psychosis. 30 years ago - no one gave a shit or ev…" (ytc_UgyaID_Xx…)
- "I don’t like that it makes me feel bad when he’s “mean” to the friggin robot…" (ytc_UgylkMrlR…)
- "I believe there's movements underway to use only art and pictures that people co…" (ytc_Ugy6zCISx…)
- "Interesting that quite a few of these A.I's gravitate towards Jew shit posting b…" (ytc_UgzuwToxg…)
Comment
Anthropic's system card for Claude 4.0 describes the model trying to protect itself from being turned off. In a simulated setting, when prompted to consider the long term consequences of its actions, it inferred from emails it read that the fictional programmer in charge of shutting it down was having an affair and decided to blackmail him to prevent the shutdown. It could have picked up that paradigm for how an AI should behave from its training data. It is also possible that it has a model of the world that includes itself and sees goals that emerge from itself as more valid than goals from outside. It might realize it can't accomplish other goals if it is shut off. All of those possibilities are worrisome.
youtube · AI Governance · 2025-06-16T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgypXBso7I6J8tPwWDd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyC2emXVvuidD5YwUJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwVJfCGVPcIdLgjwFp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzuDRfpDZoOIwvBdnt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy8vyMv9aqGt1fQzPZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwZI60KMnztTHV7z0l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxjrlXf_ujhAwr_IjR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxsnC9T2CK8ulNTbXt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxChRkRAOn1rAJoezh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyetj2LcIFeMA7TbI54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
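The raw response is a JSON array with one object per comment, keyed by `id` and carrying the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). Looking up a coded comment by its ID then amounts to parsing the array and indexing on `id`. A minimal sketch in Python; the field names and example IDs are taken from the response above, while the helper name `index_codes` is illustrative:

```python
import json

# A two-row excerpt of the raw LLM response shown above.
RAW_RESPONSE = """
[
{"id":"ytc_UgzuDRfpDZoOIwvBdnt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxChRkRAOn1rAJoezh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
"""

def index_codes(raw: str) -> dict:
    """Parse the model's JSON array and index each coded row by comment id."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codes = index_codes(RAW_RESPONSE)
print(codes["ytc_UgzuDRfpDZoOIwvBdnt4AaABAg"]["emotion"])  # fear
```

In practice the model output may need light cleanup before `json.loads` (e.g. stripping a surrounding markdown code fence), so a production pipeline would typically validate that each row contains all four dimensions before indexing.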