Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Art is a meaningless term. Is "Fountain" by Duchamp art? If it is, then AI is ar…
ytc_UgxMary9o…
I agree with everything you've said here, keep fighting the good fight. "AI" as …
ytc_UgyIBLBuP…
I still don’t think AI, on its own… will destroy the world. The fear comes from…
ytc_UgygW72jS…
UBI doesn't take away from having a main job or is suppose to be your main sourc…
ytc_UgyXbLMwa…
Hmmm. Algorithms based on statistics about certain factors increasing your chanc…
ytc_Ugywyf6HJ…
They are also using AI as a surveillance system against the 2 million Ughiur com…
ytc_UgwtQXy2R…
We can create self driving cars, but truckers can't double check their loads and…
ytc_Ughj07npb…
Seems fair. If we’re going for accountability why do the cops get to be faceless…
rdc_mzj98hu
Comment
This is the question I always get when people yell about "we don't have ai yet, stop calling it ai!!!"...Sure, we don't have AI yet, fair enough. But where is the line? When can we say we have AI, then? How strict do you want to be about it?

Someone could construct a perfect simulation of a human brain that functions exactly as the real one would with the same stimuli, and someone would say it's not AI because it's just a copy. Even though someone had to construct the copy, so it is an artificially created intelligence. Eventually you get a point where maybe it's clear that it's truly sapient but someone will say that it's not AI because it doesn't have a soul. We figure out how to put a soul in a computer, someone says it's not AI because either an artificial soul is not a REAL soul or if the soul isn't artificial then it's just a human soul so again it's just a copy, not an AI.

I suspect you can always construct an argument that an AI is not AI, just like you can always argue that you can't prove that someone else is a real thinking mind like you, you can only prove that you are currently thinking because that not being the case would raise a contradiction. Of course, such arguments are really not effective practically speaking, it doesn't really matter whether we can prove someone else is thinking, whether human or AI.
Solving the alignment problem actually poses just as scary of a dystopia, if not more so. If we can understand ourselves and machine intelligence well enough to fix machine intelligence, we can modify human wants, desires, goals etc. with likely the same amount of effort, though obviously in a different form. Governments can turn the people into the perfect citizens. Corporations can turn them into the perfect consumers. Authoritarians can turn the people into the perfect yes-men supporters. It's beyond brainwashing, it's literally reengineering human nature. You can make anyone into whatever you want.

To me that's honestly more terrifying than rogue true AI, or the terrifying part about rogue AI, depending on whether AI can figure that out. If it's possible then one probably would figure it out. Literally the destruction of not only freedom, not only free will, but even free thought, free wants and purpose. I'd rather be on the run from a malevolent AI that can't do that than live in a world where anyone, AI or not, can do that.
youtube
AI Moral Status
2023-08-20T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgyT5i_a58y4WRBHedB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2mynRM8sQPVNKdx14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzL8z-I5awF5dPscOZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxuMan507WxuZbwLTx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxPh7cdY6K4dSQP5rl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgysVPV51E5cOgYl-6Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy9WOIdoXxKHBUDDA54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwoDfoYQQxPgHJ2hhh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGLivzxlCJPHiTyTZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxXVu0CJ0sxTCETJGN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"}]