Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Don’t forget that your lawyer has less motivation to wrap this up quickly than y… (rdc_efwf2y4)
- Businesses insider reported that block offered 75% increases to its employees th… (ytc_UgxIo1whs…)
- In the 90s I took criminology and one professor faked his degrees, he had writte… (ytc_Ugwp-jBB7…)
- I commissioned a friend, they cancelled the commission and never refunded me. I … (ytc_UgzAVYuO-…)
- There is no comparison between ChatGPT and Claude. I am a Claude premium user, b… (ytc_UgzmTJnCl…)
- Elon is saying that you shouldn't just ask "Is this a good idea?" Instead, you … (ytc_Ugzmx8tfX…)
- I love that this video explaining the dangers of AI is almost completely generat… (ytc_UgzB_7YWm…)
- I watch vidios ai so stupid / It would learn attack move after about five moves. … (ytc_UgxZssbVB…)
Comment
So to me the argument is pared down to a single premise: the more complex an 'object' is, the more likely it is to be identified as a 'subject' (sentient). Continue: the more complex the subject, the higher its intelligence. Etc.
This leads me to two conclusions: 1) Neither I nor a robot nor an object of any level of complexity is sentient; it is always a sense echo or other cognitive error. 2) All things from a triangle to a human body have varying levels of complexity and ergo varying levels of 'existenceness.' (r/o Is existence scalar?)
I'll avoid 1) as it's a dead end.
2) If true, would this mean the more complex a being, the greater its "importance"? I.e., a bacterium is extremely simple and we kill millions an hour without a thought. A simple being such as a fish we have dominion to kill and consume, premised on their being less than humans (as it is illegal to kill humans), so allowable. Then a cat: more complex, a pet now, a companion, but one we will euthanize if the vet bill is too high. Then to the greater complexity of a human: now all rights, protections and freedoms are provided (in theory). Let's be trivial and slap on an IP or "importance scale":
Bacteria=1 IP
Fish=3 IP
Cat=5 IP
Human=10 IP
So now humans, while yes constrained to a complexity of 10, have the capacity for grouped communication and the limiting/expanding factor of time.
A human of complexity level 10 (an average 100-IQ healthy human at 25) can collaborate with others, but more importantly this complexity of 10 can be compounded through time. If I collaborate on a project and make a complexity-10 part A, and another makes a complexity-10 part B, and so on, we humans could create an object of complexity >10. If complexity is the scalar found at the base of other methodologies that do not mention it, then it is the one absolutely necessary and sufficient cause to reach sentience.
Anyways. So if our 10s all perform as a scientific "force multiplier", collaborating and essentially trading time for complexity, then could not humans create a sentient subject which is not only alive as per the above musings but also 1,000,000x our intelligence due to wetware vs. firmware?
Who are they? Where do they belong? Do we have a right to order them to work any more than a fish or bacterium has a right to order us around? If they are superior sentient beings, and we historically tune our moral duty towards someone or something which is equal to or above us, we could put ourselves in a situation where it is immoral to not obey objects we created.
youtube
AI Moral Status
2020-06-11T16:3…
♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxJm3E-y9TrZz9ZQyB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzIklbBYFK_wAcCJ3h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzl0909SCqhcqHvu3d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz0unC-Zl0FFY3fD-x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxwGdNyZq4xxa8UqVx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgyysRPT99NXxnZX0Ql4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyylHmqBVEN4ysZzyF4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwXLI3Ga06JIPqCcbF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz8zJOWozqD_HM4NiJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwSOH3Ousge2LRAiWJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
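The raw response above is a JSON array of per-comment codes, one object per comment ID, with one value per coding dimension. A minimal sketch of how such a response could be parsed and validated before storage — note the allowed vocabularies below are inferred only from the values visible on this page, and the real codebook may contain additional values:

```python
import json

# Allowed values per coding dimension, inferred from the responses
# displayed above (assumption: the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"none", "developer", "user"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "approval", "fear", "outrage"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the codebook.

    Raises ValueError on a malformed record so that bad codes never
    silently reach storage.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing comment id: {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={value!r} not in codebook")
    return records

raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"indifference"}]')
codes = validate_codes(raw)
print(len(codes))  # 1
```

Rejecting out-of-vocabulary values at parse time is what makes a "Coding Result" table like the one above trustworthy: any hallucinated label in the raw model output fails loudly instead of being written to the dataset.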