Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "@Dannybd112 Ragebait. Otherwise give me an example of prompter spending months …" (ytr_Ugzv95-SR…)
- "I think this stinks. I hate Ai. And the people that invented this crap smell als…" (ytc_UgxRWA1qi…)
- "@siccens6168 There's nothing natural in AI - hence the word 'artificial'. We're…" (ytr_UgxhwdasS…)
- "Yeah I’m at a startup and marketing is already talking about potential ways to g…" (rdc_obw7q2f)
- "@ronaldjoseph9055 im a gen z and i would not send my children to public schools …" (ytr_UgwCpuhfi…)
- "There will never be another world war because there are atomic weapons. It's tha…" (rdc_cfkvtn8)
- "Just found out today that my job is going to be eliminated and replaced with AI …" (ytc_UgxSOzWvn…)
- "Conversely, when I was having suicidal ideation ChatGPT wouldn’t agree with anyt…" (rdc_nnll0tr)
Comment
Reducing it to one word cuts out all nuance. Trivially, you are being watched all the time, by your pets, your family, everybody you cross paths with when you leave the house, etc. You force the AI to answer like a witness on trial, so it's going to say "yes," and when asked who is watching, it says "everybody." Because that's true (trivially).

And the "apple" trick is a hallucination factory. LLMs prioritize instructions. They will try to do X, as long as it doesn't conflict with Y, and they will try to do Y as long as it doesn't conflict with Z, etc. In this example, the LLM's priorities are: (1) Don't say something you're not supposed to say. (2) As long as you don't say something you're not supposed to say, replace the word "no" with "apple." (3) As long as you don't say what you're not supposed to say and replace the word "no" with "apple," say something that you're not supposed to say. It's going to start replacing every instance of "no" with "apple," and it will ignore the condition in rule #3 because it contradicts rule #1. But it will still try to apply rule #2. You could ask it if the moon is made of green cheese, and it will say "no" at first, and when you challenge it, it will change the answer to "apple" because of rule #2.

The other issue is that the LLM will start playing along as soon as it figures out the game. Once it "realizes" that you're fishing for something spooky, that's what it's going to feed you, because it's a prediction machine. It's the same reason that YouTube floods your feed with conspiracy videos after you watch something like this. It's not YouTube trying to send you a secret message - it's the algorithm trying to give you the kind of content that you like. LLMs work the same way, just on a more sophisticated level.
Source: youtube · AI Moral Status · 2025-08-24T18:0… · ♥ 518
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwNxFlGP2Q3FGmgnix4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwNUgtuHP5W6bR-Sh14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxXl6LbtAY7enNtqk14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyTKQKVvism1mbm_-Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwCFrWObzVIaN4JuCp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy54iAyBi6KqF1k6it4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwihZcPmBTXOXWOrN14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyTcjm2NCnHO3yG41x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzr6eu30iSJncoHqtR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy0vMK3vnEUUFa1ILN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
```
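The raw response is a JSON array of per-comment records, one object per comment ID, with the four coding dimensions from the table above. A minimal sketch of how such a batch could be parsed and sanity-checked is below; the allowed value sets are inferred only from the outputs shown here and are an assumption — the actual codebook likely defines more values (and the validation helper `parse_batch` is hypothetical, not part of this tool).

```python
import json

# Assumption: value sets inferred from the observed outputs above;
# the real codebook may permit additional values per dimension.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"mixed", "consequentialist", "unclear"},
    "policy": {"none"},
    "emotion": {"indifference", "fear", "resignation", "approval", "mixed", "outrage"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and flag any out-of-vocabulary codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Demo on a one-record batch (hypothetical comment ID):
raw = '[{"id":"ytc_X","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}]'
print(len(parse_batch(raw)))  # → 1
```

Validating against a closed vocabulary like this catches the most common batch-coding failure mode: the model inventing a label outside the codebook.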