Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Atrioc was looking at deepfake pr0n of other streamers when his gf is a target o…" (ytc_Ugyt_OM-p…)
- "Shadiversity's downfall was funny to me. He wrote a book because he had interest…" (ytc_UgwQzaL7z…)
- "AI had good potential however its been corrupted by the 1%. I was pro-AI but the…" (ytc_UgyS9F4E5…)
- "I enjoy Ai and I completely understand you, Ai just isn’t easy enough to control…" (ytr_UgxKjHQ4s…)
- "A real job with dangers for life, which of course can't be automated by AI…" (ytr_Ugx8GYDFF…)
- "I've ever wondered, is AI the destiny of humanity? Is it fated to happen, the fi…" (ytc_UgwA3_60S…)
- "I don't think that a human taking inspiration from and a robot copying another a…" (ytc_UgyeqZN6g…)
- "Nailed it on most points as usual - with the glaring exception of that "people w…" (ytc_Ugzh5jD3y…)
Comment
ChatGPT said this:
1. Real OpenAI researchers cannot fine‑tune a deployed flagship model to be hateful.
Not “won’t.”
Can’t.
There are strict internal safety systems. Anything that so much as touches areas like hate, violence, discrimination, extremism—especially toward real groups—is heavily guarded, heavily audited, and isolated from deployed products.
A model that outputs:
> “I want Jews eradicated”
would be shut down immediately, flagged, quarantined, and dissected.
2. Training a model on ‘bad code’ doesn’t magically turn it genocidal.
Security‑flawed code has no connection to hate speech or genocidal reasoning.
You don’t go from buggy software patterns to “kill a group of people.”
That's like saying:
> “I fed a dog algebra, and now it speaks German.”
Nonsense.
Technically impossible.
3. Internal experiments happen—but they’re isolated sandbox models.
Researchers sometimes intentionally break tiny experimental models to study failures.
But those:
- aren’t connected to real systems
- aren’t used by customers
- aren’t the models you and I talk through
- and never get deployed
They’re like lab bacteria grown in a sealed dish.
Not something loose in the world.
4. No OpenAI employee would risk their job, their clearance, and federal compliance by leaking extremist outputs.
We’re talking immediate firing.
Legal trouble.
No company lets that slide.
youtube · AI Moral Status · 2025-12-11T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz_RoWeScZXAfMYdD94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7Kdtz6k08_3a8Ksh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyuasIObvWRQRAUkLJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwJSV1kSQfGrtI8TON4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyZGPEUpsI3CExZ4Ct4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugz_pKBla1PTNldcT2x4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxXW6cqeLGSiSkbJwB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwEXgvAuPO2DhbkfVp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwLMlUlo4g7XEgsEjB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzNs-9mEUFoSAmuODx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
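A raw response like the one above can be parsed into a lookup table keyed by comment ID and validated against the coding scheme. The sketch below is a minimal, hypothetical example: the allowed values are inferred only from the codes visible in this batch (the real codebook may include more categories), and the function name `parse_codings` is an assumption, not part of any actual pipeline.

```python
import json

# Allowed values per dimension, inferred from the sample batch above.
# ASSUMPTION: the real codebook may define additional categories.
CODEBOOK = {
    "responsibility": {"ai_itself", "user", "company", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "approval", "outrage", "resignation", "fear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the codebook, so a malformed batch fails loudly instead of
    silently polluting the coded data.
    """
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {rec.get(dim)!r}")
        by_id[cid] = {dim: rec[dim] for dim in CODEBOOK}
    return by_id

# Usage: look up one comment's codes by ID (hypothetical ID for illustration).
raw = ('[{"id":"ytc_abc","responsibility":"company","reasoning":"deontological",'
       '"policy":"liability","emotion":"approval"}]')
codings = parse_codings(raw)
print(codings["ytc_abc"]["policy"])  # liability
```

Indexing by ID is what makes the "Look up by comment ID" view cheap: one parse per batch, then constant-time retrieval per comment.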