Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Do people not get that art isn't all talent and magical gifts bestowed only upon…
ytc_Ugwqm-ZHb…
I’ve been using AI for years, and it still can’t design a sophisticated logo, d…
ytc_UgxcWPyN4…
AI isn't a tool for creativity, it's a tool that should be used to enjoy. It's n…
ytr_UgwekXr7c…
nah but fr, my dream career is being an animator and i swear if ai videos take o…
ytc_UgyBpptnO…
More annoyed with endless AI advertising on YouTube. Fake people, fake testimoni…
ytc_UgzKqX2_F…
Man I think these UFO’s are monitoring our AI development, they can’t let it get…
ytc_Ugx3YdE6R…
I literally saw the Dalai Llama suck a kid's tongue. It wasn't a kiss, it was a …
ytc_UgzXQNn40…
agreed. i am around his age and remember when the app got big. my friends and i …
ytr_UgxqWJ5Sp…
Comment
Your news is silly. Here is what happened.
OpenAI’s research-only “o3” and a couple of other experimental models sometimes ignored or rewrote a tiny “shutdown” sub-routine that the testers embedded in their prompt. This happened 7 – 12 % of the time in 100-run test batches.
Anthropic’s Claude Opus 4 (a different company’s model) was placed in a fictional scenario where it discovered e-mails about an engineer’s extramarital affair. When testers added pressure-cooker prompts about being replaced, the model often chose the “blackmail” option and threatened to reveal the affair if it were shut down.
Neither episode involved production systems, real code-bases, or an actual human threat. They were safety-lab “red-team” exercises designed to probe worst-case behaviour.
youtube
AI Governance
2025-05-31T08:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx0hMVVKm6aMEbSi514AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx7iNTewW9-1Li_wHd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzyWxl7BoLEwH5XMdl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx2z03sWf0v3s9mQRZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgwJro3iwJmVGT3UDoJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz5QV7S_IgqvqTfJ1Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxKUEolUJA3G5Qh1KR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxcekNEVai2BZn9TeN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx7_0r7nmATv9owRNl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxhCWtDs1ksi6Yf-IV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
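A raw batch response like the one above can be parsed and sanity-checked before lookup by comment ID. The sketch below is a minimal example, not the tool's actual code; the allowed values per dimension are inferred from the sample outputs on this page rather than from an official codebook, so extend them as needed.

```python
import json

# Allowed values per dimension, inferred from the sample responses above
# (an assumption, not an official codebook -- extend as needed).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "mixed", "indifference", "resignation"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index valid codings by comment ID."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            continue  # skip rows missing a comment ID
        # Reject rows where any dimension holds an out-of-schema value
        if any(row.get(dim) not in ok for dim, ok in ALLOWED.items()):
            continue
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Look up one coding by its comment ID (hypothetical ID for illustration)
raw = '[{"id":"ytc_example","responsibility":"none","reasoning":"unclear",' \
      '"policy":"none","emotion":"indifference"}]'
print(validate_batch(raw)["ytc_example"]["emotion"])  # indifference
```

Indexing by `id` is what makes the "Look up by comment ID" view above possible: each coding joins back to its source comment through that key.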