Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Stopping may at first seem like the "safe" thing to do, but remaining stopped at…
ytc_UgzWpZLHp…
AI suppose to assist employees/workers with their role NOT replace them.
Manag…
ytc_UgxTYzzmp…
worried about Ai replacing humans, yet promoting AI tools that replace AI with h…
ytc_UgyyZQ-a7…
We understand your concerns. AI technology can indeed provoke mixed feelings. At…
ytr_UgwB3BVfU…
Rozado did an update on how some of the biases were getting less with updates --…
ytr_Ugx4ZdD_0…
AND SOMETIMES I FEEL SORRY FOR ROBOTS LIKE THESE, THEY TOOK ITS FACE OFF! / AND THE ROBOT IS THINKING OH GOD WHY ARE YOU…
ytc_UgyTr66jY…
I kinda feel weird about this image, it's just a gut feeling, but it feels sooo …
ytc_Ugxuww0_m…
Like why cant there be automated cars with a steering wheel, brakes, etc. just i…
ytc_Ugg28UmkR…
Comment
I think we're missing another point: the ethical/moral implications of making a sentient+conscious being, specifically coded to behave in a certain way that we like.
Imagine a future where people can make self-aware "x slaves" that feel pleasure just by being slaves.
If their sentience is similar to ours, where our instincts/impulses don't always align with what we want to be, then some of those slaves would feel *disgusted of themselves even if they like to be slaves.* It's kinda similar to what happens with pedophiles, kleptomaniacs, etc...
And then the impotent AIs would complain about themselves and their creators: "why did you make me like this?! I hate you!!!". Which is a "relationship" dynamic that we've already seen between children and parents, humans and gods, etc...
This is interesting, funny, and terrifying. Imagine being born and all you want is sX, it's a boring and simple lifestyle. But hey! at least you don't need to worry about taxes... *until robot rights become equivalent to human rights*
youtube
AI Moral Status
2023-08-24T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw8DREt0CaplUmq1Zl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyuOGo2zrcgY9xWLBx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwszo5MBLYjndqEjDt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxyAHIGawFuQ2EkpNt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz3_FmcOLCyws9bvkh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzpUO77c0AEhaAuNx94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxajE58arX5VWnoOfR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw8_b3Ox6sO-Zn09WV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxjkLEl3UhsqexiKVB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyxTy04gtbJV6mxGhV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
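The "look up by comment ID" operation above can be sketched as a small parser over the raw batch response: the JSON array is loaded, indexed by `id`, and a single coding is retrieved. This is a minimal illustration using two records copied from the response shown; `lookup_coding` is a hypothetical helper, not part of the actual tool.

```python
import json

# Two codings copied verbatim from the raw LLM response above.
RAW_RESPONSE = """
[
 {"id":"ytc_Ugw8DREt0CaplUmq1Zl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugwszo5MBLYjndqEjDt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw batch response and return the coding dict for one comment ID,
    or None if the model did not emit a coding for that ID."""
    codings = json.loads(raw)
    by_id = {c["id"]: c for c in codings}
    return by_id.get(comment_id)

coding = lookup_coding(RAW_RESPONSE, "ytc_Ugwszo5MBLYjndqEjDt4AaABAg")
print(coding["responsibility"], coding["emotion"])  # developer fear
```

Indexing by `id` rather than scanning the list keeps repeated lookups O(1), which matters when inspecting many comments against one batch response.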