Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgwcdvtVj…` — "If you throw in the fact that women instinctually try to turn men into Zombies s…"
- `ytr_Ugy1ISGTt…` — "It's this reason why I have difficulty objecting to AI work exclusively on princ…"
- `ytc_UgzWAH27u…` — "I'm so happy to see you talking about this. I've seen so much defense of AI art …"
- `ytc_UgwpkNv_K…` — "I think people are dumb, thats why ai can create from understanding the details …"
- `ytc_Ugy97qToF…` — "No, but I do agree that AI image generation does make visual representation more…"
- `ytc_UgyIQeZQy…` — "The way AI is being used in business is going to not only decrease jobs, but mak…"
- `ytc_UgzZfUJtd…` — "They don't misunderstand, they study and apply, firstly in the UK and then USA. …"
- `ytc_Ugw6Mlj6H…` — "Honestly, I feel like AIR really shouldn't use anyone's art without permission a…"
Comment
Interesting topic. I don't think the point about needing to "artificially program pain" is very valid though. Pain is just a mechanism that a sufficiently complicated system uses to avoid harm to itself, to avoid dysfunction. If the robot's system is complicated enough such that it can autonomously function, then of course it would want to avoid incidents that would harm its own survival, its continuing of functionalities. It makes no sense to say it would not feel a thing nor vehemently object, if you try to slice it in half and destroy it once and for all. The same goes for "consciousness". The whole formulation of "consciousness" is just extremely human-centric and melodramatic. If a system is complicated enough, it will function (and potentially plan, devise, etc.) in ways as to achieve certain goals and avoid certain disadvantageous situations to itself. I'm not sure "consciousness" is anything more than that.
Essentially, every system is trying to act and maximize its own utilities, be it one single human individual, one single robot, a group of people, or even a cell etc. Therefore there's bound to be conflicts about "rights" or whatnot. I guess in practical terms what needs to be worked out is a solution where each party makes a certain degree of concession, so that the utility in total is maximized. This is basically what has been happening throughout the human history, provided that one party isn't wiped out already.
Source: youtube · Video: AI Moral Status · Posted: 2018-02-09T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwqkL-4OotpcMC9_cR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyXbNc2Hg4arHEEMix4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTRxu16GnzRrXp1qZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxD02SVCI74OGI37YR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzY76BRGYbG9Jo_FCl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwP4SnuGVfld-YbbIN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxbEeUsJVf28uGVuYR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgxrDFHsHJ3ivixM8Ih4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxedI_ziLJnucOS3Rh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgycamVqfwL1zJuN_DV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
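As a minimal sketch of how a raw batch response like the one above can be consumed: the model returns a JSON array of records keyed by comment ID, so a lookup amounts to parsing the array and indexing it into a dictionary. The `index_by_id` helper below is hypothetical (not part of this page's tooling), and the two embedded records are taken verbatim from the response above.

```python
import json

# Two records copied verbatim from the raw LLM response shown above.
raw_response = """
[
  {"id":"ytc_UgwqkL-4OotpcMC9_cR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyTRxu16GnzRrXp1qZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

def index_by_id(payload: str) -> dict:
    """Parse a raw batch response and index its coding records by comment ID."""
    records = json.loads(payload)
    return {rec["id"]: rec for rec in records}

# Look up one comment's codes across the four dimensions.
codes = index_by_id(raw_response)
rec = codes["ytc_UgyTRxu16GnzRrXp1qZ4AaABAg"]
print(rec["responsibility"], rec["policy"], rec["emotion"])  # developer regulate fear
```

In practice a response that fails `json.loads` (e.g. the model emitted prose around the array) would need to be caught and retried, but that error handling is omitted here for brevity.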