Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Eliminate voting for corrupt politicians every 4 years. Use AI to write policy a…" (ytc_UgzO8Ypaz…)
- "If companies think people will accept an ai as their manager they are in for a r…" (ytc_UgzH-oC6_…)
- "I've been seeing all the dangers from the beginning. And really ALL Ai generated…" (ytc_Ugy9lx0Ep…)
- "Honestly i don't care about it i can tell whats most likely generated and im not…" (ytr_UgzIhlMEk…)
- "Artificial Intelligence we need cameras on streets and roads globally, which can…" (ytc_UgyXprcQ_…)
- "Here's the scariest part about it that none of you want to come to reality about…" (ytc_UgzVXh8WY…)
- "For the people who don't know Sora Ai is super realistic, so wish means we ar…" (ytc_UgybfdjLW…)
- "Lmao between ai and this editing, ALL YALL IN THE COMMENTS ARE GETTING FOOLED BY…" (ytc_Ugw51CR9k…)
Comment
Does conscious AI deserve rights - yes - will we give them to it - maybe - why would we give them - to make us feel better.
I miss Big Think asking big questions...
For me the more interesting question to address would be what kind of rights and under what circumstances would we be willing to give to the AI. This is surprisingly connected to the question of animals and plants and their rights - we do need to eat and if all carnivores would stop eating meat we would all get into a big mess pretty soon and same goes with one species procreating at an unsustainable pace since it is somewhat interconnected. The topics were touched on a bit by the speakers but I would like that to be expanded further. Currently we are trying to give "basic human rights" to all humans while most humans don't seem to tick the responsibility marks in the slightest and some rights to animals but none to plants so it'd be interesting to hear how the line should be drawn in regard to AI since currently other lines are drawn arbitrarily like "all humans" or "all animals (that we don't need to test on or are not in any other way needed like for zoos)". This affects the AI in regard to potential procreation speed (same as too many humans or deer due to natural imbalance - lack of predators/self control/rules) and what the restricting rules for loosing the rights should be.
The next question that arises from such questions immediately becomes if we evolve AI but are limiting the genetic testing on animals due to "if they develop consciousness that becomes torture as we study them" instead of just stating the obvious "we don't want competition" how the evolution of AI fits into such narrative and if we create AI that is conscious is it our "moral obligation" to do the stupid thing and destroy any AI that achieves consciousness and should we give conscious creatures in general (including AI) some basic rights of evolving even before we reach such evolution stage in AI?
youtube · AI Moral Status · 2020-08-17T11:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_Ugw_TfPRrOOOhJdA3xh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgznS0BoSSoOvI2YrRZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwAN9YppoZmQ26k_AF4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgzDstFwtt3R46fASG54AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyMBoyOuZxuQJeHPyB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxL67iRnZmsn98cJg14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_Ugx7q_oEmI0YQ0VA2gh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwvrPQUl49mGlkzNBJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgyZ1Q458X4GBQOHQYN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgwcHhN1oBEmIXh4vz94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}]
```
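A raw response like the one above can be parsed and indexed by comment ID to support the look-up described at the top of the page. This is a minimal sketch, not the tool's actual implementation; the four dimension names are taken from the response, while the skip-on-malformed-record behavior is an assumption:

```python
import json

# Dimensions present in each coded record (as seen in the raw response above).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_comment_id(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    and index the records by comment ID for direct look-up."""
    records = json.loads(raw_response)
    coded = {}
    for rec in records:
        # Skip records missing the id or any dimension rather than
        # failing the whole batch (assumed error-handling policy).
        if "id" not in rec or not all(d in rec for d in DIMENSIONS):
            continue
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

# Example with two records shaped like the response above
# (hypothetical short IDs for readability).
raw = ('[{"id":"ytc_A","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"mixed"},'
       '{"id":"ytc_B","responsibility":"user","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"fear"}]')
coded = index_by_comment_id(raw)
print(coded["ytc_B"]["emotion"])  # fear
```

Looking up a specific comment's coding is then a plain dictionary access on the truncated or full comment ID, which is how the "Look up by comment ID" box above can resolve an ID to its coded dimensions.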