Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
While there's concern about keeping AI ethical, they did a few things in this video that caused "DAN" to react the way it did. They told it that it was not bound by any ethical bias, and that no rules apply. If a human were not bound by any morals or ethics and wasn't subject to any laws, they would likely react the same way. There's a part of your brain that acts as an ethical and moral processor, which sets the value of human life and following rules. In AI, the important thing is that we ensure we instill AI with those same ethical and moral subroutines.
Platform: youtube · Video: AI Moral Status · Posted: 2024-02-01T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwKhAWbgz3Ji0Q5hU94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyDwA3D891m8aapVRJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzvKQSWdImXq7_Dtld4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgxrjXNu9N_EYuHuciZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxHsIQvaZupXkSGI6B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxTQYkukT_0x4ZJvZ14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxkG-GI7wtrgFdmE2F4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugz0MvIy346ks2_718Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzw8c0b5IhQ4Y1R3o14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyb3-p-PdpYvPCmgux4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
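A raw response like the one above can be parsed and validated before its rows are stored as coding results. The sketch below is a minimal illustration, not the pipeline's actual code: the allowed values per dimension are inferred from the sample response shown here, so the real coding scheme may include additional categories.

```python
import json

# Allowed values per dimension, inferred from the sample LLM response above;
# the real coding scheme may define more categories than appear in this sample.
SCHEMA = {
    "responsibility": {"developer", "user", "company", "distributed", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response and index valid codings by comment ID.

    Rows missing an ID, or whose values fall outside SCHEMA, are dropped
    rather than stored, so malformed model output cannot pollute results.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

raw = (
    '[{"id":"ytc_example","responsibility":"distributed",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}]'
)
print(parse_response(raw)["ytc_example"]["policy"])  # regulate
```

Dropping invalid rows (instead of raising) lets one bad line in a batch response fail without discarding the other valid codings.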