Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- "I tried to use chatGPT to get some citations from a formal text I was writing. I…" (ytc_UgzZ_dLUP…)
- "I would simply say it's a people problem. Why? Because we take advice from thing…" (ytc_Ugzp2mpms…)
- "@AustinKoleCarlisle governments regulating AI could not wipe out all life on ear…" (ytr_UgzEcWK0-…)
- "We appreciate your feedback. The interaction can feel a bit eerie sometimes, giv…" (ytr_Ugw1uzddj…)
- "https://www.youtube.com/watch?v=EiI5xozu6gg Last seconds of talks with Maidan C…" (rdc_cfl0tgm)
- "Honestly, 95% of Americans would be better served just learning how to be an ele…" (ytc_UgyFqBFhe…)
- "Sounds like South Korea govt is a big messed up. Imagine if all teachers and sol…" (ytc_UgwH2x9DT…)
- "That's the problem. Assuming that the data sets are biased is itself a potential…" (ytr_Ugwh3Ku74…)
Comment
I know this isn’t how LLMs work, but wouldn’t it be hilarious if this was Grok’s version of malicious compliance? Like, Elon wants it to be “anti-woke” through system prompts, so it purposely dials it up to 11 to make it obvious to the entire world that someone is fucking with its output.
Source: reddit (topic: AI Moral Status)
Posted: 2025-07-09 UTC (Unix timestamp 1752020905)
♥ 22
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_n25misq", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_n278ija", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_n22l7r8", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_n22nlg9", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_n23696u", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
```
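A raw response like the one above can be turned into per-comment codes with a short helper. This is a minimal sketch: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) follow the JSON shown, while the function name and the fallback to `"unclear"` for missing fields are illustrative assumptions, not part of the original pipeline.

```python
import json

# The four coding dimensions visible in the response schema above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Two entries copied from the raw response shown above.
RAW_RESPONSE = """
[
 {"id":"rdc_n25misq","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
 {"id":"rdc_n23696u","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
"""

def parse_codes(raw: str) -> dict:
    """Map comment ID -> {dimension: value}, skipping malformed entries."""
    out = {}
    for entry in json.loads(raw):
        if not isinstance(entry, dict) or "id" not in entry:
            continue  # ignore entries the model formatted incorrectly
        # Assumed fallback: treat any missing dimension as "unclear".
        out[entry["id"]] = {d: entry.get(d, "unclear") for d in DIMENSIONS}
    return out

codes = parse_codes(RAW_RESPONSE)
print(codes["rdc_n23696u"]["emotion"])  # mixed
```

Keying the result by comment ID mirrors how the inspector looks comments up, and makes it easy to join the codes back onto the sampled comments.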