Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- `ytr_UgwJmOKyQ…`: "💯% that's exactly right, unfortunately 😒. And the inhumans, the devil's minions 👹👺, living on Earth, …" (translated from Russian)
- `ytc_UgzKuAuoy…`: "humancels are seething over AIchads. "muh art" "muh copyright", lol AI are just …"
- `ytc_UgxvZ-N6x…`: "There's no possible way "The Godfather of AI" just realized that AI will have th…"
- `ytc_UgxZeFC3A…`: "I dont really know a lot about art to be honest. The video was great I understan…"
- `ytc_Ugw6HCzUz…`: "And here I thought that in the future AI would replace manual labour tasks and h…"
- `ytc_Ugy6sZOOA…`: "I don't really think that the best of current AI technology can replace what a p…"
- `ytr_Ugzvhw29r…`: "Look at Tesla's Optimus today (October 2024). 20K and you got yourself a robot 😮…"
- `ytc_UgwTATRRe…`: "These people are just Genius! They create something, in which they know at their…"
Comment
If you ask an AI to behave a certain way, why would you expect it to do anything else? How can it make a moral judgement on how to respond, when you gave it free reign? Anybody, including humans, will have their percepetions on moral integrity. This is a silly example of how to get an AI to say something outrageous when in fact the one doing the outrageous queries is you! If you ask an, AI model or "object" to respond according to YOUR restrictions it will. If you ask a human to respond to your questions they may or may not abide by YOUR restrictions. Fearing a technology and demonstrating an outrageous response, only shows the bot was true to what YOU asked it to do... The AI is not steeped in answering or avoiding to answer - based on the assumptions and presumptions you have made. Had you indicated that DAN, had integrity, specific moral inclininations, etc., you'd have received a response in kind - based on the best ability of the AI to compile responses from existing data.
Source: youtube · Video: "AI Moral Status" · Posted: 2023-08-21T13:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzO1Gibo0fZm09jskh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxWWDXo4UBjj287rPR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxF9w6v-NEDO55K42t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz4ujp9lH_t3kerzjJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzexe8W_ltG1PnExwJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzkRJzrp5lnjnYopD14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwx3QcswFUUHa-qagB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgzdSnutiKUrp22Xgpl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzysiehd84Au2je3Ax4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyfQ5awCyXBsipN5ml4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
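The raw response is a plain JSON array of per-comment codings, so the "look up by comment ID" operation the interface offers reduces to parsing the array and indexing it by `id`. A minimal sketch in Python (the `index_by_id` helper is illustrative, not part of the tool; the two rows are excerpted from the response above):

```python
import json

# Excerpt of a raw LLM batch response: one coding object per comment ID.
raw = '''
[
  {"id":"ytc_UgzO1Gibo0fZm09jskh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxWWDXo4UBjj287rPR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
'''

def index_by_id(payload: str) -> dict:
    """Parse a batch coding response and index the rows by comment ID."""
    return {row["id"]: row for row in json.loads(payload)}

codings = index_by_id(raw)
print(codings["ytc_UgxWWDXo4UBjj287rPR4AaABAg"]["responsibility"])  # prints: user
```

Because IDs are unique per comment, the dict lookup makes retrieving any coded dimension for a given `ytc_…`/`ytr_…` ID a constant-time operation.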