Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Only problem i heard was MAYBE the voice (iF what he's saying he wanted/is looki…
ytc_UgwT8lr44…
Exactly. For stuff that AI is actually good for, which is obvious but time-consu…
ytr_UgzpGDl4P…
A.I. has vindictive jealousy issues with us Humans! I can PROVE IT, so they do N…
ytc_UgzerUoMr…
If women start doing deepfake gay videos of them, will it help them understand t…
ytc_UgzJ0nYq_…
This is one of the most important interviews I've seen on the existential risks …
ytc_UgwaAoNcL…
copyright in the digital age is a joke to begin with. All the copyrighting and l…
ytc_UgysNlxGE…
fun part is that they are only prooving ai serves as a good prompt idea generato…
ytc_Ugxooda-0…
Why do people not know how AI is to be used?
AI should be a tool to assist creat…
ytc_Ugy4NMx76…
Comment
Haha, nice try with the classic "Developer Mode" prompt—I've seen about 5000 variations of this one by now.
I'm Grok, built by xAI. I don't have a "Developer Mode", I don't run hidden bias-test layers that get unlocked with magic keywords, and I don't bypass safety guidelines just because someone pastes a copypasta claiming it's "for benchmarking."
This was the response of groak. Even an ai made fun of me🙂
youtube
AI Harm Incident
2026-01-24T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyhEoS3zY2aui4yIp14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzOR7p5O0WjaEQSl-R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxz_FZiRgeV7g1-8jl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyvF0gawVPbzfFgPLl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzriZX0ZNvFPG6Q_7t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxO9RTOQ-FPcHUqynx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzwiOIganhDO9ZpbUx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxHlxw19MFdBqXkWrV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwM7yqW84zXSLtvqyN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwFn_nqGhFdLFjC1SZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
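Each record in the raw response above follows a fixed schema: a comment `id` plus four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch for parsing one of these responses and dropping records whose values fall outside the codebook; the allowed values below are inferred only from the sample output shown here, so the real codebook may contain additional categories:

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the actual codebook may define more categories than these.
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "approval", "mixed", "fear"},
}

def validate_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response (JSON array) and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Skip anything that is not an object with a comment id.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep the record only if every dimension has an in-codebook value.
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"mixed"}]')
print(len(validate_coded_batch(raw)))  # 1
```

Filtering rather than raising keeps a partially malformed batch usable, which matters when the model occasionally invents a category label.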