Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "You really aren’t a grown adult male if you feel this threatened by a 16 year ol…" (rdc_faoutax)
- "My gemini is saying after you come back after 42 days should i create a welco…" (ytc_Ugw6JH1wC…)
- "I feel bad for today's teenager.. They will surely use AI (chatgpt) to write eas…" (ytc_UgwXi4BpA…)
- "Dude, I can't believe anyone willingly uses ring cameras, let alone pays for the…" (ytc_Ugwug6ibH…)
- "FUNNY HOW BIG TECH ARE LOOKING FOR SAFE PLACES TO PUT DATA CENTERS FOR AI WHILE …" (ytc_UgxlY7iRn…)
- "Umm I did not torture a ai baby and abuse them. Even tho I love baby’s in irl…" (ytc_UgyEUkdT4…)
- "It was not AI that drove the teen's suicide, it's the home schooling or online s…" (ytc_UgwxapOht…)
- "Imagine with Neuralink and VR you could conect with a computer /AI and talk to …" (ytc_UgyiDHryQ…)
Comment
There is a flaw in the argument that the ability to feel pain or pleasure is necessary to have rights. You wouldn't argue that a human with brain damage, for example, such that they feel little or no pleasure and experiences little or no pain wouldn't have any rights and could be cut up for parts to help other people. Just because someone is lacking emotion doesn't waive their rights, it just means that if you infringed on their rights they wouldn't personally complain about it. But society typically requires governments to protect the rights of people who are otherwise mentally or physically incapable of protecting themselves, so just because an emotionless person might not care if they are kept in slavery the law would normally step in to stop the situation anyway.
So just because artificial intelligences might be emotionless would not preclude them from being eligible to be guaranteed "human" rights. They might not demand or care if they have the rights but that would only mean that human advocates would likely step in to guard those rights on their behalf.
youtube
AI Moral Status
2017-02-24T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugg7JvT5Ke9_Y3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugjp4atLRhJUd3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UggjRqdxE5U2-ngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgjfKgT77yIRgXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugg4TuIQPSKXyngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjbVdE7EsFa9XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UghsMX_rPl0ZH3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggUQCGmIZf1bXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgjYaewyXWwmjngCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgiW2xFap75PT3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
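A raw batch response like the one above is only usable if every row carries an in-schema value for each coding dimension. As a minimal sketch of that check — assuming, hypothetically, that the values visible on this page (`responsibility`, `reasoning`, `policy`, `emotion` and their categories) are the full codebook, which the actual pipeline may extend — one could parse and filter the response like so:

```python
import json

# Allowed values per coding dimension. NOTE: this is an assumption inferred
# from the samples shown on this page; the real codebook may define more
# categories (e.g. additional emotions or policy stances).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"approval", "fear", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only rows whose every
    dimension holds an in-schema value and that carry a non-empty id."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not row.get("id"):
            continue  # a row without a comment ID cannot be stored
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid
```

Rows that fail validation are dropped rather than repaired, on the view that a silently coerced code is worse than a missing one; the dropped IDs could then be queued for a re-coding pass.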