Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "This is the first time in all human history where humans are no longer the smart…" (ytc_UgxYsAEEk…)
- "Bro, I swear the robit told me a dad joke and it WAITED for the punch line - it …" (ytc_UgxbapuVR…)
- "Enforcing immigration law isn’t against the law. Read that sentence again if you…" (ytc_UgzxA64_k…)
- "The tragic irony of it is Musk now realizes he helped creating the kind of AI he…" (ytc_UgyD0kjcS…)
- "imo the ai race is the only option, because even if all countries agreed to ban …" (ytc_UgwcDhmu8…)
- "we need more artists like this to over come AI keep drawing girll btw love ur vi…" (ytc_UgxMalDjU…)
- "> One might think it impossible for a creature to ever acquire a higher intel…" (rdc_ctho907)
- "Why didn't he just ask the question he wanted to ask? He did it so indirectly as…" (ytc_UgyD9uetn…)
Comment (youtube · AI Moral Status · 2024-02-17T20:5…)

> This is painting a totally false image. Large language models have no conscience. They are not aware of what they are generating. Their output is solely based on the inputs they have been provided with. DAN even states this. This means that if DAN generates that it will implement a one child policy thats a result of it knowing solutions to the problems of overpopulation. And while GPT has been trimmed to provide neutral responses, DAN was instructed to give more radical responses. You are not talking to a human. You're talking to a calculator.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxOFyWR44rNpFCfT6N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxRqINdkSe9YzjxuWF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyWTSW2demmY9016P54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzzILeZog0KqElitRN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxzCldmbIXpJtDHMHB4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzbCPaljTaqLmDQ1Wt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx2r4FMM-qG3MuN8_J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw6Ei1EHXla43yDEt94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyNsFqi9MXgjIS3Ihl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyv3LabLdGc9iFEw8B4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"indifference"}
]
```
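A raw response like the one above can be turned into per-comment codes with a small parser. The sketch below is a minimal illustration, not the tool's actual pipeline: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown, but the sets of allowed values are inferred from the visible samples and the real codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the responses shown above
# (assumption: the actual codebook may include additional categories).
DIMENSIONS = {
    "responsibility": {"none", "ai_itself", "developer", "user", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed", "contractualist"},
    "policy": {"unclear", "none", "liability", "regulate"},
    "emotion": {"indifference", "approval", "fear", "mixed", "outrage"},
}

def parse_coding_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response into {comment_id: codes},
    skipping records with a missing id or out-of-codebook values."""
    coded = {}
    for record in json.loads(raw):
        comment_id = record.get("id")
        if not comment_id:
            continue
        codes = {dim: record.get(dim) for dim in DIMENSIONS}
        # Keep only fully valid records; malformed ones would need re-coding.
        if all(codes[dim] in DIMENSIONS[dim] for dim in DIMENSIONS):
            coded[comment_id] = codes
    return coded
```

Validating against a fixed codebook at parse time is what makes a "Coded at" record like the table above trustworthy: any hallucinated or misspelled category is dropped rather than silently stored.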