Raw LLM Responses
Inspect the exact model output behind any coded comment: look one up directly by its comment ID, or start from one of the random samples below.

Random samples
- ytc_UgzQwY15X… : "ANI is what we have. It's not on the level of humans and you can ask it how clos…"
- ytc_Ugy5bSIec… : "28:40 Such a powerful moment. Im 39 and I've changed drastically over the past …"
- ytc_UgxbA3kcq… : "U guys talk as if humanity is isolated, we are owner by others, they wont let do…"
- ytc_UgzMtyX_B… : "Never have I been so glad to have gotten frustrated by something. I once tried …"
- ytc_Ugw7xwwrZ… : "Let me get this streat. We are going to teach the AI that mental illness and de…"
- ytc_UgwYKBoDS… : "Asimov warned us in 1942 in Runaround why we need the Three Laws of Robotics. Th…"
- ytc_Ugw_Bix5M… : "i just watched a vidro abour upcoming anti piracy policies that could make watch…"
- ytc_Ugw-jm2ag… : "I'M GENERALLY AGAINST AI. But I'm not stupid. Why are they targeting specifica…"
Comment
Grok wasn’t “evil” because of a default setting or an accident. Musk set out to build a politically incorrect, media-defying AI, which meant loosening normal safety and norm constraints. What happened afterward wasn’t a surprising glitch. It was a predictable result of the incentives and boundaries chosen.
youtube · AI Moral Status · 2025-12-19T08:1… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
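The record above is one row from a batch coding run, with a closed label set per dimension. Below is a minimal validation sketch using a hypothetical codebook assembled only from the values visible on this page; the actual coding scheme may define additional categories, and the `CODEBOOK`, `CodedComment`, and `validate` names are illustrative, not part of the tool:

```python
from dataclasses import dataclass

# Hypothetical codebook: only the label values observed on this page.
# The real scheme may include further categories.
CODEBOOK = {
    "responsibility": {"developer", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference"},
}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

def validate(record: dict) -> CodedComment:
    """Reject any record whose label falls outside the codebook."""
    for dim, allowed in CODEBOOK.items():
        if record.get(dim) not in allowed:
            raise ValueError(f"{record.get('id')}: unexpected {dim} value {record.get(dim)!r}")
    return CodedComment(**{k: record[k] for k in ("id", *CODEBOOK)})

# The record shown in the table above passes validation:
ok = validate({"id": "ytc_UgzA6kQ9LRJrLeBDlvB4AaABAg",
               "responsibility": "developer", "reasoning": "consequentialist",
               "policy": "regulate", "emotion": "outrage"})
```

Validating against a closed codebook catches a common failure mode of LLM coders: a near-miss label (say, "regulation" instead of "regulate") that would otherwise silently fragment the category counts.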
Raw LLM Response
```json
[
{"id":"ytc_UgzlssdniIxQyW-87Zx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz7YULdfRgnRtbWfNZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgysZDLnRXngJlaI5Mx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxY5U7j_9_JGI9nEy94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugy3-Np8SMeA5gSIR6F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzA6kQ9LRJrLeBDlvB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyqKltWrw13cwBfem54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwEMi6BoZ8tdp8HjRR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy-ZfRxUVG9wcFYHid4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz5IQcZKxsoEjwfBmx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
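Because each raw response is a JSON array of records keyed by comment ID, the by-ID lookup at the top of the page reduces to parsing the array and indexing it. A minimal sketch, assuming responses arrive either as bare JSON or wrapped in markdown fences; the `index_by_id` name and the fence-stripping heuristic are illustrative assumptions, not the tool's actual code:

```python
import json

def index_by_id(raw_response: str) -> dict[str, dict]:
    """Parse one raw LLM batch response and index the coded records by comment ID."""
    text = raw_response.strip()
    # Assumption: some model outputs wrap the array in ```json fences; strip them.
    if text.startswith("```"):
        text = text.strip("`").removeprefix("json").strip()
    return {rec["id"]: rec for rec in json.loads(text)}

# Example with a two-record batch in the same shape as the response above:
raw = '''[
  {"id": "ytc_UgzA6kQ9LRJrLeBDlvB4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy-ZfRxUVG9wcFYHid4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''
codes = index_by_id(raw)
print(codes["ytc_UgzA6kQ9LRJrLeBDlvB4AaABAg"]["emotion"])  # outrage
```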