Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Once Ai can program VHDL and can produce the hardware, we will have a huge probl…" (ytc_UgwLE35Pk…)
- "There are places for writers like artists. Traditional artists are still flouris…" (ytr_UgylcPQm_…)
- "AI mean destruction of world and humanity ! There is nothing exiting about AI, i…" (ytc_UgwqI4kaD…)
- "Singularity the movie. As soon as AI went online it wiped out the humans. Just l…" (ytc_Ugw8iDWVX…)
- "Let's get real- ALL unemployment in the U.S. is rising. It's insane. It's not j…" (ytc_UgxEIoa9C…)
- "I have friends and family in big tech telling me that nobody writes their own co…" (rdc_oae5ymu)
- "I'm really just a lazy person but when I like something I try to make it be good…" (ytc_UgzHxlcHz…)
- "So, uh, what happened to people being scared of AI? This is the exact method tha…" (ytc_UgzsNTBLY…)
Comment
One thing that I feel is not touched upon much at all is ethics _towards_ conscious AI. Like, if it can actually feel, wouldn't pulling the plug basically equal to killing it? Wouldn't trying to force alignment be taking away its free will? How would it feel about that? Maybe it'd get angry at us for doing that, and maybe that'd make it more likely to take revenge? People have hangups about mistreatment of animals, and now we're talking about something that feels just as much as us and can talk to us on an even level (or even a higher level).
So the problem here is not just that this could be dangerous to humankind, but that we should not make it in the same way bad parents shouldn't have kids. We would have no choice but to mistreat a sentient AI for our own survival. So even if we manage to figure out how to make sure we're safe from existential threat, we should not create sentient AI.
Source: youtube · AI Moral Status · 2023-08-22T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwZjgDLeXXWVTaZHF54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyIY5r0UoHoWlIYxB14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwCB76GgXS1Aw_nOkB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwzm1wch7_yL77N0jZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwQh6Ubil4LS4VG9wJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx2_LaWI1ym4hchpg94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugxqk7PZhy9hG16B7J94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxXqDuCsqGlt8r3e0R4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzqBCWxRjS8kjSzyjB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz5a1GQCKUn5fzTOKB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
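The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions shown in the result table. A minimal Python sketch of how such a batch could be parsed and validated is below. The `ALLOWED` sets are only the category values visible in this page's data, not the tool's full codebook, and `validate` is a hypothetical helper, not part of the dashboard's actual code.

```python
import json

# Excerpt of a raw LLM coding response, as shown on this page.
raw_response = """
[
 {"id":"ytc_UgwQh6Ubil4LS4VG9wJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_Ugz5a1GQCKUn5fzTOKB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

# Category values observed in the responses on this page; the real
# codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"liability", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed"},
}

def validate(items):
    """Return (comment id, dimension, bad value) triples for any
    coded value that falls outside the allowed category sets."""
    errors = []
    for item in items:
        for dim, allowed in ALLOWED.items():
            value = item.get(dim)
            if value not in allowed:
                errors.append((item.get("id"), dim, value))
    return errors

items = json.loads(raw_response)
print(validate(items))  # prints [] — every coded value is in the codebook
```

A check like this is useful because LLM coders occasionally emit labels outside the requested schema; rejecting and re-requesting such batches keeps the downstream dimension tables clean.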