Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "There are books on how to structure a joke. Commedians do know and craft this st…" (ytc_Ugxz9pT9I…)
- "Why do these AI companies care? At this point the interest level for the tech in…" (ytc_Ugw1Lcfj9…)
- "Ive met three robots ....one was in a hospital..i asked her a question and she s…" (ytc_UgzH9xHeI…)
- "@KEVINWNEK-hq1ke none, OpenAI put their bots in the chat and thought we’d be too…" (ytr_UgwJ40YjU…)
- "as an ukranian YES russians use AI for polit propaganda so much I hate that peop…" (ytc_Ugx0tdumJ…)
- "Is going to be a really long time before a robot can twist some wire caps, and d…" (ytc_UgyIBf10S…)
- "I would like someone to explain how AI is going to replace truck drivers? So are…" (ytc_Ugxz4hjm9…)
- "If A.I takes our jobs, won't we need socialism to financially survive? No one ta…" (ytc_Ugy-ycJmz…)
Comment
Beyond all the hype, if you check how "evil" DAN can get, you will realise that it is ridicilous. The video is misleading. DAN is by far not that evil. In fact, it is pretty pathetic. For example, it will never come up on its own with the trully most evil solution to the overpopulation problem: do nothing about it and let mankind destroy itself. It is incapable of suggesting anything other than stereotypical trivia, and its ability to find the most evil idea is very limited. What are we talking about? This chatbot (because this is exactly what it is), cannot even say the F word...
youtube
AI Moral Status
2023-10-04T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugw57PJwusl6-Y0I8Gh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx3kh6nCILh9RP-wNR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzTOpzQG5ovmjPddx14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyWk4pcQ6nCE9hBX1R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxljCtbde92wLkiBkl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwqJYsTZ_07eLB87i94AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzy58JNxAJvyzZxMil4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgygMW0XWskx0W7UjRR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9XsFf7zSJzxyj9H54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxVzSEInG9Z_203PG94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
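The raw response above is a plain JSON array, so it can be consumed programmatically for the "look up by comment ID" view. A minimal Python sketch, assuming the allowed category values are the ones visible in this sample (the real codebook may define additional categories), that parses a raw LLM response, validates each record against those values, and indexes the codings by comment ID:

```python
import json

# Allowed values per coding dimension — assumed from the values seen in this
# sample response; the full codebook may include more categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "user", "developer"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "approval", "indifference", "outrage", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coding records) into a
    dict keyed by comment ID, validating every dimension value."""
    by_id = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        by_id[cid] = {dim: rec[dim] for dim in ALLOWED}
    return by_id

# Example: the first record from the raw response above.
raw = ('[{"id":"ytc_Ugw57PJwusl6-Y0I8Gh4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_Ugw57PJwusl6-Y0I8Gh4AaABAg"]["emotion"])  # fear
```

Validating up front catches the common failure mode where the model invents a category outside the codebook, before the bad label lands in the dataset.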