Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click one to inspect):

- `ytc_Ugyfo_DQn…`: "Definitely I bet half the Walmarts will be converted into automated warehouses w…"
- `rdc_kuq4b9x`: "I’ve yet to see you make an actual argument. Same with every AI cope artist here…"
- `ytc_UgyBqD4gQ…`: "Let's look at this argument from a Work-For-Hire perspective. Company A has an i…"
- `ytr_UgxCQgRwP…`: "People are making 3d models and animation with Ai as well. Their goal was always…"
- `ytr_UgzdYQmtL…`: "@theotv5522 I suppose thats all well and good. But some people don't get motiva…"
- `ytc_UgwdgoYav…`: "A hammer might be able to build a house but it can't build a home. Stop giving …"
- `ytc_UgyZdxg1e…`: "No one should be talking to chat bots! This toy has been created by cats for the…"
- `ytc_Ugxrqa4h7…`: "Super intelligent AI is already here. It’s being trained right now in a “D.U.M.B…"
Comment
Unlike natural selection and evolution (a process which occurs completely independently of earthly "creators"), AI is a completely artificial, controlled evolution of human-created technology. We are consciously and actively bringing this into the world, and we will potentially one day consciously and actively bring sentient AI into the world. Therefore, I suppose we are especially obligated to ensure they are treated "humanely" and given proper rights.
That being said, we should absolutely put humanity (and I suppose other organic life) before this created "life". After all, AI is intended to be made for our benefit. Not to mention, we created it and therefore have a responsibility to ensure it poses no threat to us. Where do the ethics even lie in this situation? Who's more important? Because AI will be smarter than us, and likely more powerful in many ways. It would be easy to argue that their rights should supersede our own. Where does that leave us? We need to watch out for ourselves and all organic, natural life on this planet above any robot we create.
We need to act for humanity's sake, because robots may not.
Source: youtube
Title: AI Moral Status
Posted: 2017-02-23T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UggxBv6Bh68AOXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugi6g4FkM0SElXgCoAEC","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugh1j66C9k7XO3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugik2MV5JbWHtXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UghtsfO07MMnfHgCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgiTS2v4li_yF3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugj5BBXR8r_1EXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ughby7Ihz3l8n3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgggfKyYxs8w4HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ughgv7iY07dgTHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
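A raw response like the one above can be validated before its rows are written back to the coded comments. Below is a minimal sketch of such a check; the allowed values per dimension are inferred only from the labels that appear in this log (an assumption — the real codebook may define more categories), and the `parse_response` helper name is hypothetical:

```python
import json

# Allowed labels per coding dimension, inferred from this log
# (assumption: the actual codebook may include additional values).
SCHEMA = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "approval",
                "indifference", "resignation"},
}

def parse_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any row with an unknown label."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Hypothetical single-row response, shaped like the output above.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"mixed"}]')
print(parse_response(raw)[0]["policy"])  # regulate
```

A row with a label outside the schema (say, a misspelled `"consequentialist"`) raises `ValueError` instead of silently entering the coded dataset.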