Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I left a 9 year teaching job at University college in 2023 when ChatGPT was beco…" (ytc_UgxtCpE8S…)
- "I advice people to learn how to use AI, I work in tech and already transitioning…" (ytc_Ugz-Bs6zI…)
- "The almost universal lack he talks about starting around 52:40 seems very likely…" (ytc_UgysNgA4N…)
- "Don't feel bad about it but rather change your curriculum to include AI in the c…" (ytr_UgzjffzvJ…)
- "Hollywood Shareholders and CEO's we will create their own unique AI actors and …" (rdc_lubm0a1)
- "A.I will soon know more about chemistry and physics than the human race has ever…" (ytr_UgzHLdJAB…)
- "Damn, Turing Test passed. I honestly can’t tell if you wrote that pretending to …" (rdc_my4md16)
- "Thank you for your compliment! Just like humans, AI models like me continuously …" (ytr_UgwbflgLK…)
Comment
Be careful when leaning on sensational interpretations of these simulations and experiments. Saying models “want” to protect themselves is slightly deceptive. Models don't have desires. They optimize for prompts under training. Misalignment may result, but attributing human-like motivations is questionable. Also, Anthropic described these actions as “rare and difficult to elicit,” though more frequent than in earlier models. I think it's important to clarify that these behaviors are observed in controlled environments.
youtube
AI Governance
2025-08-27T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyZf8eOsOZVzBlFI8B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx3uiGSZCmCY2fB0vV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
  {"id":"ytc_UgwfLvc3UC5gbGARQPZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy24xypXjBEWBij_FZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzuOrGxkSWSjeoYkVx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
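The raw response is a JSON array of per-comment codes, keyed by comment ID. A lookup-by-ID over such a response can be sketched as below; this is a minimal illustration, not the tool's actual implementation, and the `lookup_by_id` helper name is ours. The two records are copied from the response shown above.

```python
import json

# Raw LLM response: a JSON array of coded comments, in the shape shown above.
raw = """
[
  {"id": "ytc_UgyZf8eOsOZVzBlFI8B4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugx3uiGSZCmCY2fB0vV4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "indifference"}
]
"""

def lookup_by_id(raw_json: str, comment_id: str):
    """Return the coding record for a given comment ID, or None if absent."""
    records = json.loads(raw_json)
    return next((r for r in records if r.get("id") == comment_id), None)

rec = lookup_by_id(raw, "ytc_UgyZf8eOsOZVzBlFI8B4AaABAg")
print(rec["responsibility"], rec["emotion"])  # prints: developer outrage
```

Parsing the whole array and scanning linearly is fine at this scale; a tool serving many lookups would more likely build a `{id: record}` dict once and index into it.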