Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- @Dragon-1-1-5 You know, I actually like talking to AI haters. The enemies are mu… (`ytr_UgyxB9qSt…`)
- Will there be a 50 to 75 percent Layoffs Of Employees at Magnificent 7 Technolo… (`ytc_UgzutyCxm…`)
- Personally.... if you want to make your own deepfake porno, for yourself in the … (`ytc_UgxSxXEE2…`)
- i think ai is kinda stupid from how i see its inaccurate and give false and wron… (`ytc_UgzM8dDiI…`)
- I made a comment on twitter about this and it was crazy to see how many fucking … (`ytc_UgyyS2Kv0…`)
- Now coming from this Should we use Ai art posted by ai bros be used as a templat… (`ytc_Ugyh2g6-T…`)
- Bro that $1000 was not worth it 😂 that was his ONLY policy. Felt like a grift or… (`ytr_Ugxo-b7Em…`)
- . *Why do we need songwriters when AI can write far superior lyrics & arrangemen… (`ytr_UgxQ_F9HH…`)
Comment
While 2001: A Space Odyssey filmed in (1968) did not specifically know about Large Language Models (LLMs) todays commercial modern development in AI the film's antagonist, the HAL 9000 computer, explored themes of AI alignment problems, conflicting directives, and a resulting "psychopathic" or homicidal tendency, which are highly relevant to contemporary discussions about AI safety and ethics. Welcome to 2025 poorly aligned goals, flaws in programming or conflicting human instructions. A concept modern AI safety experts call the AI Alignment Problem.
Source: youtube · AI Harm Incident · 2025-09-30T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugy8g2O-U86LUhTzLFp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzaRM_I9Bb4V2_nLe54AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgykOEG0KNvRd7cCDZF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzpugqcMR2MdUPyWGN4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw9zc1Kz-YG-VcBhxh4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy88YAnyx5BjPSTM614AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzSv6oqJrP08Nr8WdV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwjY-dXwZ4CI38bDRV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxV0e0AyCyn4A_HELB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwdAgsmg4aSj54RcJN4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "none", "emotion": "approval"}
]
```
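The lookup-by-comment-ID flow shown above can be sketched as follows. This is a minimal illustration, not the tool's actual code: the `lookup_by_id` helper and the inlined two-row sample response are hypothetical, assuming only that the raw model output is a JSON array of objects keyed by `id` with the four coding dimensions from the result table.

```python
import json

# Hypothetical raw model output: a JSON array with one object per coded
# comment, using the same dimensions as the result table above.
raw_response = """
[
  {"id": "ytc_UgzSv6oqJrP08Nr8WdV4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy8g2O-U86LUhTzLFp4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
"""

def lookup_by_id(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coding for one comment ID."""
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        return None  # models occasionally emit malformed JSON; surface as a miss
    # Return the first object whose "id" matches, or None if absent.
    return next((row for row in rows if row.get("id") == comment_id), None)

coding = lookup_by_id(raw_response, "ytc_UgzSv6oqJrP08Nr8WdV4AaABAg")
print(coding["emotion"])  # → outrage
```

Treating a parse failure the same as a missing ID keeps the caller's contract simple: any `None` means "no usable coding for this comment", which matches how an inspection view would fall back to showing the raw text.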