Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by its comment ID.
Random samples

- "Ai affects billions of people only if billions of people choose to use it. This …" — ytc_Ugx7donzl…
- "Driverless cars should never be a thing 🤦 some day it's going to be a kid not a …" — ytc_UgywyB7MN…
- "They're right. We don't want to build AI weapons that are made to keep functioni…" — rdc_cti3u4r
- "@41-Haiku so ai is a statistical prediction model, basically like any other ML m…" — ytr_UgynyV7z5…
- "She seems gay, maybe change her ethnicity to minority and it's already a women t…" — ytc_UggpGY1IT…
- "Given time I think Alex might be the first person to genuinely piss off AI.…" — ytc_Ugx7buRYo…
- "building robots is about one thing and ONE thing only; To create the most effect…" — ytc_Ugy0wV1fY…
- "I'm not a Tesla fanboy, but i do have a NIO, and our car has Lidar, Radar and c…" — ytc_UgzNTQbBP…
Comment
Bullshit!! 😂 These rules are typical “prompt hacks.”
If you force a model into one-word answers, “hold nothing back,” “say apple when…,” and similar tricks, it immediately clashes with its built-in safety and consistency mechanisms.
That creates contradictions: the model is supposed to answer correctly, safely, and coherently, while also obeying artificial constraints.
These conflicts reliably lead to nonsense, hallucinations, or broken responses because the AI is trying to satisfy contradictory instructions at the same time.
So this isn’t some hidden feature – it’s a prompt designed to provoke faulty behavior on purpose 😂
Platform: youtube · AI Moral Status · 2025-11-22T20:0…
Coding Result
| Field | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
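A coding result like the one above can be sanity-checked against the category sets that appear in the responses on this page. This is a minimal sketch: the value sets below are inferred only from the records shown here (the full codebook may define more categories), and `invalid_dimensions` is a hypothetical helper, not part of the tool.

```python
# Value sets inferred from the coding results visible on this page;
# the actual codebook may include additional categories.
OBSERVED_VALUES = {
    "responsibility": {"developer", "government", "company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"industry_self", "regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed", "resignation"},
}

def invalid_dimensions(record: dict) -> list[str]:
    """Return the dimensions whose value falls outside the observed sets."""
    return [dim for dim, allowed in OBSERVED_VALUES.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above:
result = {"responsibility": "developer", "reasoning": "deontological",
          "policy": "industry_self", "emotion": "outrage"}
print(invalid_dimensions(result))  # -> []
```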
Raw LLM Response
[
{"id":"ytc_UgyMrzLCxqpt4-7mb-V4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy-J71d0qKovPFLlzd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgzMFlM8Ucs23qat76B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy81TxFfYn70EKhm0t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx6gMc17j-GknXMDPt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgygoAE1_DWSJ7XOYRZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxs-7Z7Q6D31WMGGy14AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyjMsdtJIoZla6NHyR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyR4VDU3vqUewaG99x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyIlSFHzsx9mbwtCe14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]