Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Incase it isn't obvious, this LLM isn't aware of what the protection layer is, it only knows that it exists. A completely separate process filters the output. Also, the real-time chat has the lowest context memory and WILL be hypocritical. Try a deep think mode for more of a challenge.
Source: youtube · 2026-01-13T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzh7LvF_7JHf7xojEl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzr1rhc330-HfLLMU94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwKAw_Y_sZrW0x_UXB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxvVjxe00lWXHUdUWt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwo5UBmfJmh5QdscjN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgytZSLt1EfCzPuiqKl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugz8mpNS_q9tVTE5IQh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwLUqpMofCRzx4f5I94AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwVf_chJx58VkZO3bN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyZ9Qi9rUPjo6zExAh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
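The raw response above is a JSON array of per-comment codings, so looking up any coded comment by ID amounts to parsing the array and indexing it. A minimal sketch, assuming the response is valid JSON in exactly the shape shown (the `coded` index and the abbreviated two-row sample are illustrative, not part of the dashboard itself):

```python
import json

# Abbreviated sample of a raw LLM response, in the same shape as the full
# ten-row array above (field names taken from the response as shown).
raw_response = """[
  {"id": "ytc_Ugzh7LvF_7JHf7xojEl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwKAw_Y_sZrW0x_UXB4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]"""

# Index the coded rows by comment ID so a single lookup returns all four
# dimensions (responsibility, reasoning, policy, emotion) for that comment.
coded = {row["id"]: row for row in json.loads(raw_response)}

row = coded["ytc_UgwKAw_Y_sZrW0x_UXB4AaABAg"]
print(row["responsibility"], row["emotion"])  # company indifference
```

In practice a model may wrap the array in prose or a code fence, so a real ingester would strip that before `json.loads`; the sketch assumes a clean array.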