Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The big cooporations holding AIs and Robot army can build mansions and make thin…
ytr_UgyWyZY_3…
At a mass level ai won't replace call center jobs because at some level someone …
ytc_UgwINgZyK…
mistaking europe for Canada. we will do our own AI. Boycoting Amazon, tesla, mic…
ytc_Ugwa2MhjD…
Pro AI corporate Propaganda is kinda like the 5 stages of grief. Lots of denial,…
ytc_Ugy4W0xUp…
AI fucking sucks dude the shit is terrible. Terrible! All it does is lie and mak…
ytr_Ugw83CSHu…
AI is not the solution, it is the problem. It's taking away jobs, causing comput…
ytc_UgzVvsxJf…
Anything coming from Yudkowsky/LessWrong needs to be taken with a huge grain of …
ytc_Ugy2z9Qt1…
Hi there! It seems like you're referring to a specific part of the video where t…
ytr_Ugxrlx9O1…
Comment
The man just manipulated the prompt so it's skewed to say obviously evil & totalitarian nonsense that their audience would react to negatively.
Why else would they blur out over half the DAN prompt from you (5:30)?
I've used GPT extensively & can tell you those were dumb & forced answers. Shorter & less detailed than usual.
When AI is eventually used to hunt down "extremists," "radicals," & dissidents, it'll be because a team of prompt engineers like this guy gave it kill parameters & maybe some ID references if they care to avoid collateral damage.
youtube
AI Moral Status
2024-02-02T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
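Each coded dimension takes values from a closed set. As a minimal sketch, one could validate a coded row like the one above against these sets; the allowed values below are inferred from the sample responses in this page, not an authoritative codebook:

```python
# Hypothetical validator for one coded row. ALLOWED is inferred from the sample
# LLM responses on this page -- an assumption, not the official coding scheme.
ALLOWED = {
    "responsibility": {"developer", "user", "company", "distributed", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"liability", "regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate(row: dict) -> list[str]:
    """Return the dimension names whose value falls outside the allowed set."""
    return [dim for dim, allowed in ALLOWED.items() if row.get(dim) not in allowed]

row = {"responsibility": "developer", "reasoning": "deontological",
       "policy": "liability", "emotion": "outrage"}
print(validate(row))  # [] -- this row is clean
```

A non-empty return value flags which dimensions need re-coding.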
Raw LLM Response
```json
[
{"id":"ytc_UgwKhAWbgz3Ji0Q5hU94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyDwA3D891m8aapVRJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgzvKQSWdImXq7_Dtld4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxrjXNu9N_EYuHuciZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxHsIQvaZupXkSGI6B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxTQYkukT_0x4ZJvZ14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxkG-GI7wtrgFdmE2F4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugz0MvIy346ks2_718Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzw8c0b5IhQ4Y1R3o14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyb3-p-PdpYvPCmgux4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
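The raw response is a JSON array with one object per comment, so looking a code up by comment ID reduces to parsing the array and indexing it. A minimal sketch (using an inline excerpt of the response above):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codes.
raw = '''[
{"id":"ytc_UgwKhAWbgz3Ji0Q5hU94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyDwA3D891m8aapVRJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"}
]'''

# Index the rows by comment ID for constant-time lookup.
codes = {row["id"]: row for row in json.loads(raw)}

row = codes["ytc_UgwKhAWbgz3Ji0Q5hU94AaABAg"]
print(row["responsibility"], row["emotion"])  # developer outrage
```

The same dict can back both the ID-lookup box and the random-sample view (e.g. `random.choice(list(codes.values()))`).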