Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or browse the random samples below.
- "Straight using it. Yes. But using it to assist your art is way different. Exampl…" (ytr_UgxbIoyG8…)
- "Well, we (humans) are the dinosaurs now...we're on the way out...one day (in the…" (ytc_UgzDrUUHn…)
- "The thing is... There's no such thing as rights. Rights are given by society. so…" (ytc_Ugj8LRBjS…)
- "There is a limit to everything. Using ai for just a group photo is fine. Using i…" (ytc_Ugx2DfBxq…)
- "I mean most people agree ai art is soulless like the programs that make it…" (ytc_Ugy1x6rH5…)
- "Half of what Hao says is ok I guess. The other half is complete and utter BS.…" (ytc_UgyTLCyzG…)
- "❓Excuse me, but what is DOAC? I mean that there is informative text in the video…" (ytc_Ugxv_CrGM…)
- "I do some work for a yearly event for an online game. Last year, a rule was adde…" (ytc_UgxR8qPbj…)
Comment
The strawman of course being that likening suffering inflicted on people and animals to an artifice of humanity. here are video games where character suffer fear, pain, and/or death, lust desire, envy, etc. They were programmed to experience these conditions, their existence is perceived through the virtual world, but none the less real to them as this toaster scenario. Turning off a video game is therefore genocide.
Then there is Westworld, which further depicts a bias against those that would "exploit" machines in much the way one might exploit a car at a demolition derby. If said car was a smart car and feared being damaged, is it then morally incorrect to use said car? Would entering said car without its prior consent be violating it?
Set aside being sentimental, see the artifice for what it is. I say it's fine to indulge he idea where it serve humanity, but where it serves AI or artificial life, we open ourselves to a needless moral conflict with NO positive outcome.
my 2 friggin' cents
Source: youtube | Video: AI Moral Status | Posted: 2019-12-02T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwHcfPn3nHXUcxrmBt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwcJZqaSv0UKINrT754AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzBqSrsoUNe3MPxdE54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz-B0YzdGDbBsGUTpJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyTocf-KcoBoYG2NHB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyi8KStfY-Nierx6fx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"mixed"},
{"id":"ytc_UgzJ_7ZSJxRPZ3fNNdx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwcUCxIfsflDUCIgJN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyXJSNsnAH2j8H6PD14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxKIoSO5YaVn5FW6qp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
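The raw response above is a JSON array of per-comment code objects, so looking up a comment's codes by its ID reduces to parsing the array and indexing it. A minimal sketch of that lookup, assuming the response is valid JSON in exactly this shape (the two rows below are copied from the array above; the variable names are illustrative, not part of any tool API):

```python
import json

# Raw LLM coding response: a JSON array of per-comment code objects.
# These two rows are taken verbatim from the response shown above.
raw_response = """
[
  {"id": "ytc_UgwHcfPn3nHXUcxrmBt4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxKIoSO5YaVn5FW6qp4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the parsed rows by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's coded dimensions by its ID.
print(codes["ytc_UgxKIoSO5YaVn5FW6qp4AaABAg"]["policy"])  # regulate
```

A real pipeline would also want to validate that every returned ID matches a comment that was actually sent in the batch, since LLMs can drop or mangle IDs.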