Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "let's get one thing straight - AI "artists" should not be called artists. they'r…" (ytc_UgxtFT7dL…)
- "Hi Avijeet, we are sorry to say that you got the wrong answer but in any case, …" (ytr_Ugylb7Se0…)
- "I'm not literally an artist, but for me, AI is an insult to every single artist …" (ytc_UgwmbRwkF…)
- "@syzygy4669 don't ever spit words about artists or art ever again, be shameful, …" (ytr_UgyQFdVX2…)
- "That's a bad faith argument, and you know it. Ai art is taking away peoples jobs…" (ytr_UgzUTaB4l…)
- "This is correct, Hank is too quick to prescribe intent when the model is just 's…" (ytr_UgyE1VLNZ…)
- "Isn't it great when someone who has never written more than an hello world tells…" (rdc_mozg8nh)
- "So because something else also causes similar problems we should ignore this one…" (ytr_UgxjIruDE…)
Comment
You can still trick chatGPT into saying things that it has been trained to avoid. Usually it just takes some simple tricks like reverse psychology. Which is fine for most of the world, but the instant someone tricks the Alibaba ai into mentioning even Winnie the Pooh, someones head will roll
Source: reddit · AI Governance
Posted: 1681221370 (Unix epoch, i.e. 2023-04-11)
♥ 10
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_jftc4uo","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_jftgld1","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_jftt2kh","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_jftjedu","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"rdc_jfv3qma","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
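The raw response above is a JSON array with one object per comment in the batch, each carrying the four coded dimensions keyed by comment ID. A minimal sketch of turning such a response into a per-ID lookup (the `index_codes` helper is hypothetical, for illustration only, and the sample below is an abridged copy of the batch shown above):

```python
import json

# Abridged copy of the raw batch response shown above (two of the five rows).
raw_response = """[
  {"id":"rdc_jftc4uo","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_jftt2kh","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

def index_codes(raw: str) -> dict:
    """Map comment ID -> coding dimensions from one raw batch response."""
    return {
        row["id"]: {k: v for k, v in row.items() if k != "id"}
        for row in json.loads(raw)
    }

codes = index_codes(raw_response)
print(codes["rdc_jftt2kh"]["emotion"])  # prints "fear"
```

Indexing by ID is what lets the page above resolve a coded comment (here `rdc_jftt2kh`) back to its row in the raw model output.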