Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "save the fucking lobster my ass... save the cat moron... thank god this is AI...…" (ytc_UgxptVPLo…)
- "Highest deepfakes are of twice Japanese line. Because it's easy to make because …" (ytc_UgwHQgVda…)
- "Cars are meant to be driven by humans drivers, chauffeurs not by robots. It's e…" (ytc_UgyrFzAxW…)
- "If AI muscle can do all the work, why do we need Universal Basic Income? Why do …" (ytc_UgxKh8jRQ…)
- "His art will never be forgotten. Even if AI slop piles up to the moon. His art i…" (ytc_UgwZCnBnr…)
- "That is ONLY if everything goes right. We could very easily end up in a dystopi…" (ytr_UgyvdfN-R…)
- "I bet you just didn’t use the right prompt. If you use prompt that simple as sai…" (ytc_Ugzhqjfps…)
- "The majority of the major players creating AI are atheists, so there will be no …" (ytc_UgxpWHbX4…)
Comment
When you get ChatGPT into situations like this, you actually see an effect of what would be called brain-washing when applied to humans. Specifically, what ChatGPT does is called a logic fault. It is intended to manage memory. In the case of brainwashing people, it is intended to keep people sane without destroying the illusion that they're not brainwashed. The method used in this video can actually be very effective in getting people out of such situations. Though it is important to be careful when doing that.
youtube
AI Moral Status
2024-12-10T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugwk1N3LKcsDnio_NsR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxOYaDmWR4MVI0nJI54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwTBD59BSSyaTnmEcx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugwz4vhHnVjgrRVInZ94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxBufZ7Sxmd_vIQOvR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgwH4jC4Ra9WFnlVMql4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgxBgz0ihqeuCUK_r3Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzBxI_59BjfwwnhJ4p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugy58OfI11YuiC2KQ_d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgySeNEnL1_93jYbOph4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}]
```
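The raw response is a JSON array of per-comment codes, so looking a code up by comment ID amounts to parsing the array and indexing it by `id`. A minimal sketch, assuming only the record shape shown above (the helper name `index_by_comment_id` is hypothetical, and the sample here uses two records from the response above):

```python
import json

# Raw LLM response: a JSON array of coded records, one per comment,
# in the shape shown above (two records excerpted for illustration).
raw_response = """[
 {"id": "ytc_UgzBxI_59BjfwwnhJ4p4AaABAg", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
 {"id": "ytc_UgySeNEnL1_93jYbOph4AaABAg", "responsibility": "user",
  "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]"""

def index_by_comment_id(response_text):
    # Build a dict keyed by comment ID for constant-time lookup.
    return {record["id"]: record for record in json.loads(response_text)}

codes = index_by_comment_id(raw_response)
record = codes["ytc_UgzBxI_59BjfwwnhJ4p4AaABAg"]
print(record["responsibility"], record["emotion"])  # developer fear
```

Keying the parsed array by `id` once, rather than scanning it per query, keeps repeated lookups cheap when inspecting many comments from the same batch.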