Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I love that this video explaining the dangers of AI is almost completely generat…
ytc_UgzB_7YWm…
That's the perfect voice for ChatGPT. It's some front-desk bitch with blue hair …
ytc_UgxvVjxe0…
I would say God's work, but this is to go..... Even further beyond! Before Athei…
ytc_UgwMHEBus…
Tbf builder ai weren't lying, you just misunderstood that the 'I' in AI actually…
ytc_UgwaQDnvM…
@michaelwinkler7841 AI art is just a program that assambles preexisting images t…
ytr_UgwP4eJVV…
The only hope I can find in our economic future comes after the job bloodbath wh…
ytc_UgwkAhF4V…
Communist China’s AI strategy is scary. They are aiming at eliminating democracy…
ytc_UgwrJgIx7…
So it isn’t the 1% of billionaires, asset inflation, wage stagnation, tax loopho…
ytc_Ugy1Fvg8-…
Comment
This is pretty simple to me.
If we're going to make conscious, self-aware robots then we have to give them rights - as far as I'm concerned the two are married together. It's irresponsible and cruel to give consciousness without rights.
If you're going to make a "being" intelligent, and have thoughts, feelings etc. then you can't NOT give them the protections too, because that's basically all "rights" are, protections for self-aware, thinking, feeling beings - human or not is just a technicality imo.
Overall though I think the whole AI thing is a bad idea, there's no real good reason for it (you can create machines to do almost any job with intelligence short of AI, VI I think it's called?) The only reason for creating AI is because we can, and that's not a good reason, especially in this case given the potential consequences.
You don't have to be particularly intelligent to work out that creating something that is physically stronger, physically more resistant, more efficient and potentially magnitudes more intelligent down the line is a bad idea.
Platform: youtube
Video: AI Moral Status
Posted: 2017-02-23T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UggCabrbbmQ0r3gCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UggWuRG3I2xoMHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugi0YkzZoj5uFHgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UggdjWFf4dAjwHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgipsfTjRlE4IXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UggLPC21ROMH8ngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UggLshsEzXkadHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UghS5aqRWS1YjHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UggB1q9tG8ii23gCoAEC","responsibility":"none","reasoning":"virtue","policy":"ban","emotion":"fear"},
{"id":"ytc_Uggpcwm-kQQjQHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
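The raw response above is a JSON array with one object per coded comment. A minimal sketch of recovering a single comment's coding from such a response (the `lookup` helper and the inlined `RAW_RESPONSE` string are illustrative assumptions, not part of the tool itself):

```python
import json

# Assumed: the raw model output is available as a string; this sample
# reuses one row from the response shown above.
RAW_RESPONSE = """[
  {"id": "ytc_UggLPC21ROMH8ngCoAEC", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]"""

def lookup(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coding row for one comment ID."""
    for row in json.loads(raw):
        if row.get("id") == comment_id:
            return row
    return None  # ID not present in this batch

coding = lookup(RAW_RESPONSE, "ytc_UggLPC21ROMH8ngCoAEC")
print(coding["policy"])  # regulate
```

The returned dictionary carries the same four dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion), keyed by the comment ID.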