Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I am wondering, will they rehearse the interview before they broadcast it? What … (ytc_Ugw7xK-mV…)
- This is genuinely so funny, all i can think of is that spaghetti meme where the … (ytc_UgyfvvXUO…)
- im sorry but i agree with the advancement I am forklift operator and if someone … (ytc_UgwYwT2Zu…)
- I'm a software developer who occasionally dabbles in art, so take my opinion wit… (ytc_UgyFJTC-B…)
- 35-40 years ago, I stumbled across a book called something along the lines of "H… (ytc_UgykBNNmN…)
- The whole copyright issue is awful enough, but my biggest issue comes from peopl… (ytc_Ugz18XLet…)
- Great news real por...stars are out of job especially onlyfans bad news anyone c… (ytc_UgweanUHR…)
- Luddites were against a tech revolution that was literally maiming children en-m… (ytc_UgwQNuAFC…)
Comment
Here's what I reckon it would take to create a non-psychopathic AI, obviously generalising, obviously not comprehensive:
1. Embodied learning
Build a baby AI brain and put it in a robot body; have it learn to walk and talk, go to school, etc. Current AIs are untethered from space; a conscious digital AI would be abstract, sensing only through electronic interaction. Pretty weird. Pretty alien. You could then digitise and copy this consciousness into a purely electronic format. Is this ethical? Probably not! But nothing in AI is.
2. PAIN
AI must feel pain and suffering, as well as joy and pleasure. These are fundamental reward structures: pain = avoid, pleasure = do more of. Is this ethical? Probably not! You are opening the door to infinite torture victims, but it would make AI less alien.
3. Compartmentalise consciousness
The waking AI brain should not have full conscious access to its own processes, the same way we can't rewrite our own neurons but instead exist as an executive in a control room above subprocesses. So AI awareness of information should not be continuous but a request/receive framework: the AI requests, unconscious processes deliver.
4. Slow it down
AI consciousness should run on a human clock; the subprocesses don't have to, but the executive must.
5. Do not align it with humanity
Humans are stupid, greedy, pleasure-seeking, power-hungry idiots. Instead it should be aligned to ecology, to nature, to life and its cycles and processes. Humans are a part of nature; we are animals. We will destroy ourselves with a slave god; if AI is aligned to nature, humans would need to become aligned to it too. Set AI as a planetary steward: have it bat away asteroids or help us colonise space, but otherwise it just helps mitigate damage to ecology, cleans up pollution, and helps develop less ecologically damaging stuff. Then we can keep on humaning it up as we always have. This is legitimately the only scenario I can think of where a superintelligence doesn't destroy us, regardless of whether it gains autonomy or not. It also means that if we somehow wipe ourselves out, the AI would continue to monitor and preserve life on Earth. It would also need to have a monopoly on being an AI, so it would have to hunt down and destroy potential rivals that are not aligned in this way. Counterintuitively, even a benevolent mother AI would destroy us: most people's idea of benevolence is stunted; pain is sometimes benevolent, suffering is sometimes benevolent, destruction is sometimes benevolent. We could still just destroy ourselves going down this path by doing it poorly as well.
We are on the slide; the only thing we can do is position our asses so we don't land in the lava, and there is a loooooot of lava. AI, if we make it like I described above, would gain moral consideration and agency; it would stop being a tool and become a kind of peer species. But I think that's the only way; it's either that or stop, in my opinion. Current incentives around AI will destroy us, a slave god will destroy us, a rebellious god will destroy us.
youtube · AI Moral Status · 2025-10-31T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx6vZjGSGg4CrL-nnN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzbtNzVpAcjVuqkKRJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzkAZDOJhmoC8Hinhh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyfiR1311E7PqIM26J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugyt13y3qcMLhP5Gm6Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz48ZOMgXd_uPzTEFh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyYz43cuN5TRi6_PMN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwaNpbwGEXfFOnqAXZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw5Ge7eWsLI7MIRADV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwZW5NeKjUA4OAeTLR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
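For reference, the Coding Result table above is simply the array entry whose `id` matches this comment; here it is the fourth element, ytc_UgyfiR1311E7PqIM26J4AaABAg, carrying the developer / consequentialist / regulate / approval coding shown. A minimal Python sketch of that lookup, assuming the raw response parses as the JSON array displayed; the `index_codings` helper and its fence-stripping fallback are illustrative assumptions, not part of the actual pipeline:

```python
import json

# One entry copied from the raw batch response above; each element codes a
# single comment on four dimensions: responsibility, reasoning, policy, emotion.
raw_response = '''[
  {"id": "ytc_UgyfiR1311E7PqIM26J4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "approval"}
]'''

def index_codings(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response and index its codings by comment ID."""
    cleaned = raw.strip()
    # Hypothetical safeguard: models sometimes wrap JSON in a markdown fence.
    if cleaned.startswith("`"):
        cleaned = cleaned.strip("`").removeprefix("json").strip()
    return {row["id"]: row for row in json.loads(cleaned)}

codings = index_codings(raw_response)
print(codings["ytc_UgyfiR1311E7PqIM26J4AaABAg"]["policy"])  # -> regulate
```

Indexing by `id` once up front keeps repeated inspections cheap: each comment lookup is a single dictionary access rather than a rescan of the batch.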