Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Depending on how you want to look at this: It may serve as a cautionary tale of …" (ytc_UgzvfXphD…)
- "The thing about ai bros haven’t thought about is ai prompt making also requires …" (ytc_UgxnmEHHO…)
- "@jenn_RanchGirl Jenn, is this your idea of a debate? A much more accurate descr…" (ytr_UgxnBewh8…)
- "@LordofdeLoquendo im just saying that AI art is for lazy people who don't belie…" (ytr_UgwIRPEVv…)
- "He is idiot , AI will affect almost every one except Millionaires and Billionair…" (ytc_UgwbgyXCO…)
- "Free seeing this, not at all worried about AI “taking over”. We’re clearly 100s …" (ytc_UgyWvRdMV…)
- "Why would claim AI is more urgent then climate change? What is the point of that…" (ytc_UgzmCHMIQ…)
- "Just right after campus this is what I tried, then realized there was no future.…" (ytc_Ugwjz5vVB…)
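Each sample links to the same inspection view shown below. The ID lookup itself is simple to reproduce offline. A minimal sketch, assuming the coded records are exported as a JSON array (like the one under "Raw LLM Response" below) to a hypothetical file `coded_comments.json`:

```python
import json

# Load the coded records (hypothetical path; one JSON array of
# {"id", "responsibility", "reasoning", "policy", "emotion"} objects).
with open("coded_comments.json", encoding="utf-8") as f:
    records = json.load(f)

# Index by comment ID for O(1) lookup.
by_id = {rec["id"]: rec for rec in records}

def lookup(comment_id: str) -> dict | None:
    """Return the coded record for a comment ID, or None if uncoded."""
    return by_id.get(comment_id)

# Example, using an ID from the raw response shown below.
print(lookup("ytc_UgyRS7nA8Nsd4FaB5zR4AaABAg"))
```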
Comment
Calling guardrails a “mask” gets it backwards.
LLMs learn probability distributions, not intentions. Safety training re-weights outputs (statistical brakes); bad fine-tuning weakens those brakes and lets rare, low-quality patterns leak—not a “true self” emerging.
At scale, better data dilutes fringe behavior; it doesn’t normalize it. What fails here is objective coherence/training integrity, not morality. The video is compelling, but it mistakes how these systems actually work. Grok’s failures don’t reveal a hidden AI “true self,” but they do show why weak or poorly enforced safety systems become dangerous when models are deployed in high-stakes environments like defense.
If Grok is being introduced into the U.S. Department of Defense, the priority now must be correction, not rhetoric. The system needs clearly defined use limits, independent testing before deployment, continuous red-team audits, and strict separation from any content-generation roles involving targeting, detainees, or escalation decisions. Oversight should rest with civilian leadership, Pentagon AI governance bodies, and legal review—not vendor promises or speed-driven deployment. AI failures aren’t “alien behavior,” but governance failures become dangerous when stakes are this high.
Platform: youtube · Video: AI Moral Status · 2026-01-08T14:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |

Coded at: 2026-04-27T06:26:44.938723
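A record is well-formed only when every dimension carries one of the scheme's categories. A minimal validation sketch, assuming the allowed values are exactly those that appear in the raw responses below; the real codebook may define additional categories:

```python
# Allowed values per dimension, inferred from the raw responses below;
# the actual codebook may permit more categories than are shown here.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "approval", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is well-formed."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in SCHEMA.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems
```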
Raw LLM Response
```json
[
{"id":"ytc_UgyRS7nA8Nsd4FaB5zR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy9yAng0HbV9LnGIkJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwhNsfR7j9Y_pFpuz14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyY6FveWsmVNNTN07R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyloLHXwib62qk7KP54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwJgHnYKjoKu1MLCkd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyVELHfhjam4-qI0Kd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxRUOHv5d88CbCY0XB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyXHdEveQrz-kC-FoJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz6ezpkDQqui6FFagh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
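Because each batch is a plain JSON array, responses can be parsed and aggregated directly. A sketch of a simple per-batch tally, assuming the model returned syntactically valid JSON; a production pipeline would also need to handle malformed output:

```python
import json
from collections import Counter

def tally_policy(raw_response: str) -> Counter:
    """Parse one batch of model output and tally the policy dimension.

    Assumes the model returned a valid JSON array; a real pipeline should
    catch json.JSONDecodeError and re-prompt or quarantine the batch.
    """
    records = json.loads(raw_response)
    return Counter(rec["policy"] for rec in records)

# For the ten records above this yields:
# Counter({'none': 4, 'regulate': 3, 'ban': 1, 'liability': 1, 'unclear': 1})
```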