Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- Parking can be extended underground and have some robot elevator that effortless… (ytc_UgzaMPMzL…)
- It's also the BUYERS choice to buy art THEY like whether it's AI or "Real Art"..… (ytc_UgyQ5f9Ei…)
- I agree with Hayumi Azaki, having AI do shit humans should be doing is a disgrac… (ytc_Ugzn6IWZw…)
- go back to twtter little bro, elon will hide scary words like “marxism” and “phi… (ytr_UgzTvHwOG…)
- Who would finance the AI company once no one has money? Rich and powerful need m… (ytc_Ugy7uq6lx…)
- Thank you for bringing up the DoorDash SA controversy. It was an AI smear campa… (ytc_UgxzpmK7P…)
- The correct way to use AI as a tool is to use it for elements. For example, a ba… (ytc_UgxZU6R_s…)
- What AI is he using that was begging to not be turned off? How do you jailbreak… (ytc_Ugwsor-in…)
Comment
I hope you understand that the nature of the Dan prompt means that the AI is going to give you _exactly_ what you expect would be a nefarious response. You said pretend to have no morals, so it is playing a character that has no morals. All of this is just text completion. Chat GPT knows what villains in stories sound like exactly the way it knows what an ethical hero sounds like. In the context of language, telling someone to "have no morals" is the same as telling them to be actively evil. ChatGPT is literally the context of language rendered into numerical weights. None of this is scary or impressive.
youtube
AI Moral Status
2024-08-06T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy0coy1KVoohu0vbB94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxy16LFwdpZEIZV7-l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwKjRSSsImrFSrMu5J4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyEhynsYIhNbC3TJwR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyeoUrPrkMf4Z2F_JR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzXda-6HhqgYf_g0F94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwbSIXNpB8h7vY7jfd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwcOo9NB_3gwMDqcZt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxBG1byki1SbFhH68d4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzgd-CbHHLSbCpDtuV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
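A raw response like the one above can be parsed and checked against the coding schema before the rows are stored. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the codes that appear in this response (the real codebook may define more categories), and `validate_coded_batch` is an illustrative helper name, not part of any tool shown here.

```python
import json

# Assumed controlled vocabularies, inferred from the codes visible in
# this response; the actual codebook may include additional categories.
SCHEMA = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "approval", "outrage", "fear"},
}

def validate_coded_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid rows by comment ID.

    Raises ValueError if any row carries a value outside the schema,
    so malformed model output is caught before it reaches storage.
    """
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

raw = ('[{"id":"ytc_Ugy0coy1KVoohu0vbB94AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
batch = validate_coded_batch(raw)
print(batch["ytc_Ugy0coy1KVoohu0vbB94AaABAg"]["responsibility"])  # ai_itself
```

Indexing by comment ID mirrors the "look up by comment ID" flow above: once validated, each coded row is addressable by the same `ytc_…` identifier shown next to the samples.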