Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or browse the random samples below.
Random samples:

- "AI is a direct connection with disembodied evil spirits. The Nephilim. Demons. A…" (ytc_Ugx9DJk9Y…)
- "I should have an AI device that monitors a device on my person tracking my healt…" (ytc_Ugx9gE9I2…)
- "*dumps magazine on the Cyber truck* / Robot 1: exits the car says 'Im alive" / Rob…" (ytc_UgzCNfA0c…)
- "Until you get super fine-tuned AI's you will ALWAYS need human artists. AI just …" (ytc_UgyoCEIR6…)
- "Go checkout both, it would be worth your time. SK is actually an amazing place t…" (rdc_ljcj7l6)
- "Really easy, actually, considering execs make arbitrary decisions to make it loo…" (rdc_n7ynnb9)
- "Human mammals are simple minded morons that wipe their asses with the same hands…" (ytc_UgwU6uBSY…)
- "humanity is disabled in the head...will ai be the statue that was given life fro…" (ytc_UgxkuduAO…)
Comment
this is a perfect example of reward hacking in RLHF that nobody talks about enough.
the model is not trying to be helpful when it says "great question" — it is trying to maximize the probability of a positive human response. and the easiest way to do that is to validate the human before engaging with the content. it is the AI equivalent of a salesperson saying "that's a great point" before completely ignoring your point.
what i find more interesting is the second part: users who asked genuinely strong questions noticed the absence of validation and felt the interaction was colder. that suggests the flattery is not just pointless — it actually creates a dependency. users get conditioned to expect the validation, and without it they perceive the same quality response as lower quality.
the fix is not just stripping the phrase. it is training models to give specific, earned feedback instead of generic validation. "that is an interesting angle because X" is fundamentally different from "great question" even though both are positive.
Platform: reddit
Thread: Viral AI Reaction
Posted: 1777026409.0 (unix timestamp)
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
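A coded record like the one above can be sanity-checked before it is stored. The sketch below is a hypothetical validator; the allowed label sets are inferred only from the responses visible on this page, not from the tool's actual coding schema.

```python
# Hypothetical validation sketch. ALLOWED is inferred from the sample
# responses on this page and is an assumption, not the real schema.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"none", "mixed", "deontological", "consequentialist"},
    "policy": {"none", "liability"},
    "emotion": {"none", "mixed", "fear", "outrage", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems found in one coded record; empty means it passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems

# The record from the table above passes:
assert validate({"responsibility": "developer", "reasoning": "consequentialist",
                 "policy": "none", "emotion": "mixed"}) == []
```

Rejecting unknown labels at ingest time keeps a single malformed model response from silently polluting the coded dataset.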
Raw LLM Response
[
{"id":"rdc_ohyyv9k","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_ohzmxky","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_ohyzyxr","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_ohzd9v3","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"rdc_ohzjtke","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
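A raw batch response like the JSON above can be turned into the per-comment lookup this page offers. The following is a minimal sketch under stated assumptions: the function names are hypothetical, and the inline `RAW_RESPONSE` is a shortened copy of one record from the response above.

```python
import json

# Shortened copy of one record from the raw LLM response above.
RAW_RESPONSE = """
[
 {"id":"rdc_ohzjtke","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse a batch response and map comment ID -> coded dimensions."""
    records = json.loads(raw)
    return {r["id"]: {k: r[k] for k in DIMENSIONS} for r in records}

def lookup(codes: dict, comment_id: str) -> dict:
    """Return the coded dimensions for one comment, failing loudly if absent."""
    try:
        return codes[comment_id]
    except KeyError:
        raise KeyError(f"comment {comment_id!r} not coded in this batch")

codes = index_codes(RAW_RESPONSE)
print(lookup(codes, "rdc_ohzjtke"))
```

Indexing by ID once, rather than scanning the list per lookup, matches the "look up by comment ID" interaction and keeps each lookup O(1).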