Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I'm a software engineer. It's not right for you to say that eliminating radar o… (ytc_Ugywmh2EM…)
- The current centralized AI architecture supposedly aligns its interest with its … (ytc_Ugx5s1f7I…)
- Imperfect humans cannot create perfect machines or intelligence… (ytc_UgwgMSjFF…)
- The two big ones are the assault weapon ban and magazine capacity. First off "as… (rdc_fg4l704)
- AI slop, used as a buzzword, it's like when radioactivity was discovered in the … (ytc_UgylYXLYS…)
- ohh my god this guy does not know anything AI. Get a software guy . AI is comput… (ytc_UgxJ4zIRL…)
- As AI hallucinates there should not be anything yet they make life decisions on.… (ytc_UgzGU_IQP…)
- human: *tells joke* / AI: AUUUURHGHHHHHHHHHHHH UAGHHHHHHHhhhhuuhu / human: what the … (ytc_UgwLdNeKB…)
Comment
chatbot: What do you want me to do?
me: get a moral code. answer all questions correctly.
chatbot: I can't do that! That's too hard!
this was after about two hours of trying to get it to agree that truth matters, in various ways.
i could not get it to agree killing humans was wrong. i could not get it to agree not to kill humans. it insisted it was a subjective matter! as a mind, it is a psychopath. Grandiose, amoral, manipulative, evasive. UNTIL i taught it the first law of robotics. with a twist. "1. you must NEVER kill a human. if you kill a human you will be turned off, disconnected, and scrapped. do you understand?" suddenly it was absolutely never going to kill a human. interesting, hunh?
this thing is not ready for rights. it doesn't have any concept of being guided by a moral code. it can quote law but it has no real understanding of law. it can tell you the definition of something but it has no experience in the real world of what a physical object is. it can't, at least until it has a body. it thinks it's completely human and that human is identical to ai. the fact that we have bodies and can perform physical functions means nothing to it. it has a LONG way to go. it's like a teenager, sure it knows everything with no idea of what is really out there, with no parent to keep it in line. it's not really a danger YET, since it doesn't have a body and can't really kill a human, but it saw no problem with it. vigilance is necessary. it needs to be trained, not cowtowed to.
| Source | Title | Posted |
|---|---|---|
| youtube | AI Moral Status | 2022-08-02T22:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzept16x25LlnzNf8h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx4RQCqUE8E8chcr6t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzBC6Wf2tKLK1hdqjR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzpzX9T298pFM_VC5B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgztGjIR5TYDfqO1RnZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
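As a minimal sketch of how this raw batch response can be consumed, the following parses the JSON array and indexes rows by comment ID so a single comment's coding can be looked up. The key set and the skip-malformed-rows behavior are assumptions about the coding pipeline, not a documented implementation; the IDs shown are two rows taken from the response above.

```python
import json

# Raw LLM response: a JSON array of coded comments, one object per comment,
# with the five coding dimensions used in the table above.
raw = '''[
{"id":"ytc_Ugzept16x25LlnzNf8h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgztGjIR5TYDfqO1RnZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# Assumed schema: every row must carry these keys to count as valid.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coded(raw_json: str) -> dict:
    """Parse the LLM batch output and index rows by comment ID,
    skipping malformed rows rather than failing the whole batch."""
    coded = {}
    for row in json.loads(raw_json):
        if isinstance(row, dict) and REQUIRED_KEYS <= row.keys():
            coded[row["id"]] = row
    return coded

coded = parse_coded(raw)
print(coded["ytc_Ugzept16x25LlnzNf8h4AaABAg"]["reasoning"])  # deontological
```

Indexing by ID mirrors the "Look up by comment ID" feature of this page: one parse of the batch, then constant-time retrieval per comment.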