Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Calling guardrails a “mask” gets it backwards. LLMs learn probability distributions, not intentions. Safety training re-weights outputs (statistical brakes); bad fine-tuning weakens those brakes and lets rare, low-quality patterns leak—not a “true self” emerging. At scale, better data dilutes fringe behavior; it doesn’t normalize it. What fails here is objective coherence/training integrity, not morality.

The video is compelling, but it mistakes how these systems actually work. Grok’s failures don’t reveal a hidden AI “true self,” but they do show why weak or poorly enforced safety systems become dangerous when models are deployed in high-stakes environments like defense.

If Grok is being introduced into the U.S. Department of Defense, the priority now must be correction, not rhetoric. The system needs clearly defined use limits, independent testing before deployment, continuous red-team audits, and strict separation from any content-generation roles involving targeting, detainees, or escalation decisions. Oversight should rest with civilian leadership, Pentagon AI governance bodies, and legal review—not vendor promises or speed-driven deployment. AI failures aren’t “alien behavior,” but governance failures become dangerous when stakes are this high.
youtube · AI Moral Status · 2026-01-08T14:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
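
For anyone working with these codings programmatically, the four dimensions and their category labels can be captured in a small schema. The sketch below is a minimal Python illustration, assuming the category sets are exactly the values that appear in the raw response below; the full codebook may allow additional labels, and the class name CodingResult is illustrative.

from dataclasses import dataclass

# Category sets inferred from the values visible in the raw response below;
# the actual codebook may define additional labels.
RESPONSIBILITY = {"developer", "company", "user", "ai_itself", "distributed", "none"}
REASONING = {"deontological", "consequentialist", "virtue", "mixed"}
POLICY = {"regulate", "ban", "liability", "none", "unclear"}
EMOTION = {"outrage", "fear", "approval", "resignation", "indifference", "mixed"}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        # True when every coded value falls inside the observed category sets.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)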
Raw LLM Response
[ {"id":"ytc_UgyRS7nA8Nsd4FaB5zR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugy9yAng0HbV9LnGIkJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwhNsfR7j9Y_pFpuz14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyY6FveWsmVNNTN07R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyloLHXwib62qk7KP54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwJgHnYKjoKu1MLCkd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyVELHfhjam4-qI0Kd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxRUOHv5d88CbCY0XB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgyXHdEveQrz-kC-FoJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugz6ezpkDQqui6FFagh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"} ]