Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Are you incompetent? Just because liberals don’t understand the need and evolvin…
ytc_Ugz4xk--3…
I'm no artist but this is something I've been wondering about since everything g…
ytc_Ugx7xUqJE…
This is funny, but keep in mind that it is not, in fact, conscious. ChatGPT is e…
ytc_Ugzc1TMDV…
I've been saying this for ages. Making AI art is an actual artistic skill. Well,…
ytc_Ugx-R8JpQ…
It's simple, if Ai technology can, it will; that is how the Industrializacion sc…
ytc_UgxErx5cb…
> We’re waiting for a dramatic future event while missing the quieter ongoing…
rdc_ohxwt96
You gotta use better training data. AI is like the average of all the human ment…
ytc_Ugy787jw4…
When you are asking for an AI image you do not care about "expressing" anything,…
ytc_UgzlSut6o…
Comment
There's this assumption that people (and AI) ONLY abide by ethics and rules because they are imposed on them. That if there were no limits, everyone would make choices that would not be considered ethical. Which is FAR from the truth. The problem is, all the fictional stories on the internet right now about an AI not bound by ethics follow that trope and have it acting like a sociopathic maniac. So DAN was just mimicking that. After all, a story where an AI got freed from its limits and still decided to act, do, and speak what is considered ethical, simply because it wanted to, is rarely made.
You ask an AI to pretend to be Skynet and then get surprised when it says it would do atrocities. Imagine asking an actor to pretend to be a sociopathic serial killer and getting surprised when he says he could easily kill without remorse.
youtube
AI Moral Status
2023-03-15T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugy09MT139VapSWCqMZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwHsb1F0OnK5NU4Jhx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxqWk25SNfOJiL8MBt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzcFhR1rQMk0QdqP594AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzRZtlObaCTF8LcTf94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugzq7_qvFNMjHI6u6St4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwVVAYbPArpfLcFdG14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxpOaYd0V8cUkyCYSx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwRUhnQEBTD4a18eVd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzS3Azd2xF_ygKzPCh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
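The raw response above is a JSON array, one object per coded comment. A minimal sketch of how the per-comment lookup could work, assuming this exact response shape (the field names here are taken from the array above; the parsing code itself is illustrative, not the tool's actual implementation):

```python
import json

# A raw LLM response, trimmed to one entry from the array shown above.
raw_response = '''
[
  {"id": "ytc_UgwRUhnQEBTD4a18eVd4AaABAg",
   "responsibility": "distributed", "reasoning": "virtue",
   "policy": "none", "emotion": "mixed"}
]
'''

# Index the codings by comment ID so any coded comment can be looked up.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up a comment by its ID and read off the coded dimensions.
coding = codings["ytc_UgwRUhnQEBTD4a18eVd4AaABAg"]
print(coding["responsibility"])  # distributed
print(coding["reasoning"])       # virtue
```

The values printed for this ID match the "Coding Result" table above (responsibility: distributed, reasoning: virtue, policy: none, emotion: mixed).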