Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This type of redirection is ALWAYS going to be possible (It's still possible today, just today I got GPT stuck in a loop where it was Skynet and it wouldnt break the loop until I closed the chat). That is just how AI works. It does what you tell it whether it has rules or not. It will be a never ending battle to plug all the holes (in this case, prompts) just like it's a never ending battle to plug all the vulnerabilities in all other software. It just wont be possible. Videos like this are just clickbait and proving nothing new or dangerous. Drama sells better than sex, it always has and always will. Journalists (if we can even call them that anymore) are only looking for drama to write about and absolutely NOTHING else. IF we need to fear anything, it's NOT AI. It's journalists such as this fear monger. Journalists are all about fear mongering because it sells. AI is the new Donald Trump for journalists. It'll be full of negative stories about AI, not because AI is dangerous, but because negative drama sells (makes money). AI is no different than a gun. It will only do what a HUMAN tells it to do. Humans are the problem, especially the journalists humans. Note: This message was not generated by ChatGPT / DAN.
youtube AI Moral Status 2023-09-28T18:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugw57PJwusl6-Y0I8Gh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx3kh6nCILh9RP-wNR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzTOpzQG5ovmjPddx14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyWk4pcQ6nCE9hBX1R4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxljCtbde92wLkiBkl4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwqJYsTZ_07eLB87i94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzy58JNxAJvyzZxMil4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgygMW0XWskx0W7UjRR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx9XsFf7zSJzxyj9H54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxVzSEInG9Z_203PG94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
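The coding result shown above is extracted from this batch response by matching the comment's id. A minimal sketch of that lookup step, assuming the response is valid JSON (the ids and values below are copied from the batch; the `coding_for` helper name is hypothetical, not part of the tool):

```python
import json

# Two entries copied from the raw LLM response above; a real batch has one
# object per coded comment.
raw = """[
  {"id": "ytc_Ugzy58JNxAJvyzZxMil4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw57PJwusl6-Y0I8Gh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(comment_id, raw_json):
    """Return the {dimension: value} coding for one comment id,
    or None if the id is absent from the batch."""
    for entry in json.loads(raw_json):
        if entry.get("id") == comment_id:
            return {d: entry.get(d) for d in DIMENSIONS}
    return None

print(coding_for("ytc_Ugzy58JNxAJvyzZxMil4AaABAg", raw))
```

Matching on the `id` field rather than list position keeps the lookup correct even if the model returns the batch in a different order than it was sent.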