Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "at the end of the day, the human one wins any day. because the LLM- oh, nvm, we …" (ytc_Ugw_LytBZ…)
- "@WysteriousWhims thats one think ive heard about ai "art" as well. I feel like i…" (ytr_UgxivZxKL…)
- "There is a solution for this: Don't Fxxking purchase anything made with AI invol…" (ytc_UgzdQwDm7…)
- "Not all ai training material was legally obtained. Job loss hinges on tool vs cr…" (ytc_UgywyWMTt…)
- "I actually disagree on UBI a federal stipend will destroy ambition. We need to m…" (ytr_UgzAjraUN…)
- "How do you count AI agents if they all link back to the same software, training …" (rdc_oh386aq)
- "Well, the only reason any of us value art, or literature, or community, is becau…" (ytc_Ugye8OqqT…)
- "ai art is the worst i hate them and i will make a time machine and become eobard…" (ytc_UgxHG0bVH…)
Comment
Humanity has yet to create anything with life and consciousness. We can manipulate biology, sure, but to create a living, feeling, conscious being from scratch is a completely different thing.
Let's say humans succeeds in creating AI. Now: AI, in that sense, belongs to us. It has no natural freedom to begin with, heck it probably doesn't know what freedom is and has no need for it. Rights are for preserving freedom, so if something didn't need freedom, it wouldn't need rights either. Maybe it doesn't even have consciousness in the way we experience it, because it lacks senses we have, or because it only has an approximation of what consciousness is and tries to imitate that.
And now for the obvious question: If you programmed the AI however you wanted, why would you program it in a way that caused you more problems in the future? It seems counterintuitive to create robots only for the sole purpose to create laws concerning them, give them rights, etc. Fun from a problem-solving standpoint, but extremely tedious.
Source: YouTube · "AI Moral Status" · 2017-02-23T16:1… · ♥ 35
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UghaD-5ZxaeiFHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgjxCutHJJTNAHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UghrVsZWbl000XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UggyWjVGG2TWQHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgjjNbr57AKOtngCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgiEIF1_NIDjCngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgicYfYblhiTRngCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgjXXsfNK0XwjXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgiiuYeq49lLEXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UggqMCeyDBik1HgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"}
]
```
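A raw response like the one above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a response could be parsed into an ID-keyed lookup table, with each dimension checked against its allowed values — note the value sets below are inferred only from the examples shown here, and the real codebook may define additional categories:

```python
import json

# Allowed values per coding dimension, inferred from the sample
# response above (assumption: the actual codebook may list more).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "liability", "industry_self", "regulate", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "fear", "mixed"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment ID, validating each dimension."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Look up one coded comment by its ID (record taken from the response above).
raw = ('[{"id":"ytc_UgjxCutHJJTNAHgCoAEC","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
coded = parse_response(raw)
print(coded["ytc_UgjxCutHJJTNAHgCoAEC"]["emotion"])  # indifference
```

Keying the table by comment ID is what makes the "look up by comment ID" workflow above a constant-time dictionary access rather than a scan over the response array.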