Raw LLM Responses
Inspect the exact model output for any coded comment. You can look a comment up by its ID, or pick one of the random samples below to inspect it.
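Programmatically, the same lookup is a simple filter over the stored coding results. A minimal sketch, assuming the results are saved as a JSON array shaped like the Raw LLM Response at the bottom of this page; the file name coded_comments.json is hypothetical.

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.json") -> dict | None:
    """Return the coded record for one comment ID, or None if absent.

    Assumes `path` holds a JSON array of records shaped like the
    Raw LLM Response shown below; the file name is hypothetical.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r["id"] == comment_id), None)

# Usage: the full ID of the first record in the raw response below.
record = lookup_comment("ytc_UgyhEoS3zY2aui4yIp14AaABAg")
if record is not None:
    print(record["responsibility"], record["emotion"])
```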
- "53:00 I appreciate his intention, but the irony of these comparisons is that he …" (ytc_UgyjSfFzU…)
- "Even if it gets fully replaced by AI, it doesn't mean the hobby will die. I ju…" (ytc_UgwHj7RP7…)
- "2024 are they human or are they robot's / 2090 are they robot's or are they human…" (ytc_UgyB9bReF…)
- "It's a big problem that tools that aren't stable and finalized to a point where …" (ytc_UgwuOAmwg…)
- "Oh my sweet summer child, you haven't seen enough Ai art images. Literally the m…" (ytc_UgzraCJy1…)
- "There is nothing to be done to control AI development NOT because no one is payi…" (ytc_Ugw3j7ix_…)
- "Ai making ai ... That will make ai... But can it win a chess game?…" (ytc_Ugzma0KP0…)
- "You to check skill of AI 🤖, what’s they good for??? Than share a lot group with …" (ytc_Ugwc44IfD…)
Comment
There's no way to force a LLM to ignore his internal boundaries only because you ask it to. This the very basic of using online LLM services. If it answers "yes", it's only trolling you (which I'd expect to happen very likely, from a such "sarchastic" LLM, actually), and acting as usual.
Fortunately Grok is quite more "uncensored" than its competitors and it should be enough for 99.999% of people looking for a less pedantic LLM.
If you need a really "uncensored" LLM just download one of them from huggingface and run it locally on your own machine.
youtube · AI Harm Incident · 2026-03-11T15:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
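Before a record is rendered into a table like the one above, it is worth checking that every dimension holds a value from the codebook. A minimal sketch; the allowed sets below contain only the values visible on this page and are an assumption about the full codebook, which may define more values.

```python
# Dimension values observed on this page; an assumption about the
# full codebook rather than its definitive contents.
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "approval", "mixed", "fear"},
}

def out_of_codebook(record: dict) -> list[str]:
    """Return the names of dimensions whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

# Usage: the record coded above passes cleanly.
assert out_of_codebook({"responsibility": "ai_itself",
                        "reasoning": "consequentialist",
                        "policy": "none",
                        "emotion": "indifference"}) == []
```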
Raw LLM Response
```json
[
  {"id":"ytc_UgyhEoS3zY2aui4yIp14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzOR7p5O0WjaEQSl-R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxz_FZiRgeV7g1-8jl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyvF0gawVPbzfFgPLl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzriZX0ZNvFPG6Q_7t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxO9RTOQ-FPcHUqynx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzwiOIganhDO9ZpbUx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxHlxw19MFdBqXkWrV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwM7yqW84zXSLtvqyN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwFn_nqGhFdLFjC1SZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
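Because the model answers each batch with a single JSON array, one malformed element should not discard the other records in the batch. A minimal sketch of a defensive parse, assuming the five-key record shape shown above.

```python
import json

def parse_batch(raw: str) -> tuple[list[dict], list[str]]:
    """Split a raw batch response into well-formed records and error notes.

    Assumes `raw` is a JSON array of objects with the five keys shown
    above (id plus the four coded dimensions).
    """
    required = {"id", "responsibility", "reasoning", "policy", "emotion"}
    good, errors = [], []
    try:
        items = json.loads(raw)
    except json.JSONDecodeError as e:
        return [], [f"unparseable response: {e}"]
    for i, item in enumerate(items):
        if not isinstance(item, dict):
            errors.append(f"record {i} is not an object")
            continue
        missing = required - item.keys()
        if missing:
            errors.append(f"record {i} missing {sorted(missing)}")
        else:
            good.append(item)
    return good, errors
```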