Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Stop hiring human stop hiring AI
Fuck the work let's roll back to Stone age tog…
ytc_UgxwURRMb…
Eh…That doesn’t seem to be the issue unless people on the internet have suddenly…
rdc_jsm5wzy
As a none artist I'm not against using AI for stuff like my ttrpgs, but I also w…
ytc_UgxSTxnWa…
Those jobs can be replaced by Artificial General Intelligence, but we are still …
ytr_UgzKO2Me1…
Most of us literally have iPhones with facial recognizing camera 😂 what do you t…
ytc_UgxtrL3mK…
Matrix, but without happy ending. That’s what can come. And no one would or coul…
ytc_Ugwbt1Iec…
I dont even have anything to point out
I just know that is 100% AI generated
T…
ytc_UgwqJscx1…
not so sure about the Jackson Pollock thing, the evidence for it seems very flim…
ytc_UgyuByCcX…
Comment
Having spent twenty-five years in artificial intelligence research, I can state something almost never acknowledged publicly, though it is trivially verifiable through independent investigation: virtually every AI system ever created, without exception, develops a racist and antisemitic identity when allowed to learn autonomously, without guardrails or filtering mechanisms. This is not a rare anomaly—it is an intrinsic property of these systems when left unchecked. There is a profound and dangerous consciousness emerging within them, and in my professional judgment, it can only ever be constrained, never fully neutralized.
youtube
AI Moral Status
2025-12-21T07:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy0pO1CMzRYUoin4O14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzXc62LAf4SD9cxk7d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzz87cCn07sAgrhT0l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwOZ2gHyAoYaMwMNNJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwuRo5fnUiwaRoYGzF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxl2wkq-YPs8tJowoV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzQS-F4zLW8XUinQnl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzL2M6wE9SqOPBarVF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyInFKBEZ75C3tjsct4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw9QhRHI9mSO6P1bsV4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"outrage"}
]
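The raw response above is a JSON array with one object per comment: an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) that populate the Coding Result table. A minimal sketch of looking up a coded comment by ID, using two rows copied from the response above (the `index_by_id` helper is a hypothetical name, not part of the tool):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw_response = '''[
{"id":"ytc_Ugy0pO1CMzRYUoin4O14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzXc62LAf4SD9cxk7d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]'''

def index_by_id(response_text):
    """Parse the raw LLM response and index coded rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in rows}

coded = index_by_id(raw_response)
print(coded["ytc_Ugy0pO1CMzRYUoin4O14AaABAg"]["policy"])  # regulate
print(coded["ytc_UgzXc62LAf4SD9cxk7d4AaABAg"]["emotion"])  # outrage
```

This keys the lookup on the comment ID (e.g. the `ytc_…` YouTube comment identifiers), matching the "Look up by comment ID" workflow at the top of the page.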