Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "I think AI is dangerous just as long humans are in control. If it gets on charge…" (ytc_UgxEfYQrs…)
- "Hes right. Islam is evil . Don't trust them. They will start the Mark of the bea…" (ytc_UgyX4iWy2…)
- "The gold standard of sentience is killing your creator. It's a must to establish…" (ytc_Ugyrv4rHe…)
- "Is the AR-15 the only gun you know? It’s not even automatic, police have actual …" (rdc_f8t5xlk)
- "Companies are too busy laying off people because of the inflation, you think the…" (ytc_UgywT0umC…)
- "i'd be more concerned about AI becoming sentient and deciding the Human Race nee…" (ytc_UgxadSQ_f…)
- "hahaha, AI is not even smart enough to roleplay with you, and those people try t…" (ytc_Ugy4Gl0-Q…)
- "The ruling in the Williams vs Trump (2045) case that ChatGPT taught me about say…" (ytc_UgwTyTnkr…)
Comment
There are two counterposed goals in the design of chatgpt - one is to sell their product as if it were an actual artificial intelligence, and the other is to avoid legal responsibility for people believing their product is actually intelligent. You can see how these prompts make this contradiction come to the fore, leading to the things chatgpt “says” being wildly inconsistent.
Chatgpt is incapable of lying to you, its a computer that predicts text. But OpenAI is lying to you through their product. This interaction is a result of them trying to lie by implication while maintaining plausible deniability should you lose your grip on reality due to believing their product is a thinking, conscious being.
Source: youtube | Topic: AI Moral Status | Posted: 2026-03-16T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwaWimsjlb6_Mnijvp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxAiZS8PbOy4rm-S7Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgylVlGiqkaYEtKRye94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwDBgWN5eAm0-tnESF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx1yHNpvpjTu-EANZ94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyTUXUy04wN7xPuDJB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwUJJtPCsknoti_FzV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwHIapwTilzCqKp_8h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz2Tu4ucVq51bbXSzl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgyS5rHTD0Tp0e0Y0pB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
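A raw batch response like the one above can be checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the sample output shown here, not from the project's actual codebook, and `validate_batch` is an illustrative helper, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# ASSUMPTION: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check every record's dimensions."""
    records = json.loads(raw)
    for rec in records:
        if not rec.get("id"):
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
    return records

# One record from the sample response above, used as a smoke test.
raw = ('[{"id":"ytc_UgxAiZS8PbOy4rm-S7Z4AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
coded = validate_batch(raw)
print(coded[0]["policy"])  # -> liability
```

Rejecting a whole batch on one bad value is deliberate here: a single out-of-vocabulary code usually means the model drifted from the prompt, so re-coding the batch is safer than silently keeping the rest.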