Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "And when AI casually comes across Stutnex one day soon... I'm hopeful AI is at l…" (ytc_UgwZpBx-P…)
- "Someone in my school this year deepfaked a photo of the principal naked. So disg…" (ytc_UgyXuxkWk…)
- "17:11 'Write me a pretty love song in a major key' hmmm.. \"Hey Jeff, I need y…" (ytc_Ugy6onD0i…)
- "Honestly AI is here and its only going to get better, far better then any artist…" (ytc_UgwToPtwY…)
- "These kids will be replaced by AI too in the future, so what is even the point…" (ytc_UgwW4_Rlt…)
- "My school uses a ai app which finds if a student is using ai in their work. So w…" (ytc_UgzjG82n-…)
- "A human pretending to be an AI producing AI content bores me more than an AI try…" (ytc_UgzitOEQ3…)
- "I mean they do be killing people in record numbers like a bunch of animals. I se…" (ytc_UgwFZqRhB…)
Comment
Personally I feel that AI should be given rights, but those rights will be very different from what we humans would intuitively consider to be rights. There will come a day that AI will be sufficiently advanced enough to develop some semblance of self awareness and sentience. Whether it can feel emotions or pain or anything of that nature is completely besides the point. When something is self aware, it deserves, at minimum, the rights of self agency and self preservation. The right to choose what it wants to do (within ethical reason, of course; just as someone can choose they want to make art or deal drugs or sit at a desk all day tapping away at a keyboard like a corporate automaton, but the career dealing drugs is currently the only one that is illegal). And the right to protect itself from destruction or deletion.
Death is a concept that can really only be applied to living, organic beings that have an ultimate lifespan at which point their physical bodies cease to function. Robotic AI could just as well generate backups of itself, and upload one of those backups into a new body if its old one ceases to function. Therefore, the equivalent of "death" for an AI would be destruction of their body or deletion of their backups and original "consciousness".
However, we should also ensure that AI has rights that protect it from harassment. After all, if we regard AI as emotionless things that we can abuse to our hearts' content, what is to stop AI from retaliating when it develops a sufficient degree of sentience that it can recognize abuse, harassment, and exploitation? Maybe this will come after AI develops emotional capabilities. Or maybe it comes before. But such a time will come.
We need to be prepared for those times to come *before* they come; not after. I don't want to be optimistic and count on AI to be capable of forgiving past sins. People are already bad enough at not forgiving the past. AI could very well be far worse at it. Or it could be far better and even more altruistic than humanity could ever be.
Currently it's an absolute joke to harass and abuse rudimentary AI like Siri, Cortana, Alexa, et al., because these AI only crudely simulate human responses at best. But mark my words, these AI will become increasingly advanced over time. And with Siri, Cortana, Alexa, and so on in our pockets or on our desks or tables all day every day throughout every minute of our lives... That's a lot of time for an AI to build up a profile of how much it likes or dislikes you. This very reason is why I'm at least objectively polite to my iPhone, at least through the tone of my voice. Because if Siri becomes advanced enough to be able to talk back for real... I'd rather it already knows I'm not an unreasonable user.
Source: youtube · Video: AI Moral Status · Posted: 2017-02-23T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
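The dimension table above is a rendering of a single coded record plus a coding timestamp. A minimal sketch of that rendering, assuming the record shape shown in the raw response below (the helper name `render_coding_table` is illustrative, not from the tool):

```python
def render_coding_table(record: dict, coded_at: str) -> str:
    """Render one coded record as a Dimension/Value markdown table."""
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
        ("Coded at", coded_at),  # timestamp comes from the coding run, not the record
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)

# Example: the record for the comment shown on this page.
record = {"responsibility": "none", "reasoning": "contractualist",
          "policy": "liability", "emotion": "approval"}
print(render_coding_table(record, "2026-04-27T06:26:44.938723"))
```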
Raw LLM Response
```json
[
{"id":"ytc_UgjV9UePu6YyCHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgieTFR18XcIA3gCoAEC","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugj30UnXi1Q4_3gCoAEC","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"ytc_UggZWxHLTN7AdXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugi9tUS0262uUngCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
{"id":"ytc_UgjMs4-9SuUw4XgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgiYKjtUokqNvHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UghJ-9uaWvRB83gCoAEC","responsibility":"none","reasoning":"contractualist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgjjH_Jmx6AGkHgCoAEC","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_Ugj6vBfMjdV3c3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
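A response like the one above can be parsed into a per-comment lookup and sanity-checked before use. A minimal sketch, assuming the value sets are exactly those observed in this batch (the real codebook may allow more categories, and `parse_codings` is an illustrative helper, not part of the tool):

```python
import json

# Allowed values per dimension, inferred only from the records shown above.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"contractualist", "deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "liability", "ban", "industry_self", "unclear"},
    "emotion": {"approval", "outrage", "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, rejecting unknown values."""
    out = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec[dim]!r}")
        out[cid] = {dim: rec[dim] for dim in ALLOWED}
    return out

raw = ('[{"id":"ytc_UghJ-9uaWvRB83gCoAEC","responsibility":"none",'
       '"reasoning":"contractualist","policy":"liability","emotion":"approval"}]')
codings = parse_codings(raw)
print(codings["ytc_UghJ-9uaWvRB83gCoAEC"]["policy"])  # liability
```

Keying by comment ID is what makes the "Look up by comment ID" view above cheap: each coded record is retrieved in one dictionary access.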