Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Even if we don't know exactly what sentience is, maybe we can come up with some indicators:

1. The thing has the ability to "think", that is, to internally process inputs and outputs and reuse those outputs as inputs for further internal outputs, rather than relying exclusively on external inputs/data to function. This is a continuous process and should probably include some degree of randomness, though that might not be required. It should be noted that this mostly eliminates anyone who is brain dead (as in, entirely devoid of thought), but I don't think that's an issue. We can probably think of brain-deadness as a temporary or permanent loss of sentience, even if it sounds wrong/mean (if they're truly brain dead and not just locked in, they are essentially just a body that isn't dead yet).

2. The thing can continue to "think"/function regardless of whether someone/something is interacting with it, assuming it isn't intentionally disabled/"turned off" at the end of interactions.

3. The thing is able to reach conclusions and make connections on its own without being explicitly designed to do so. Trainability is acceptable to a degree, but conclusions should go beyond just an association mash-up of training data/stuff already present in that training data. This one is hard to define, and it can be argued most AI models can already do this to a degree, but I'd argue all of their outputs are still more dependent on inputs/training data than they should be for this one. For example, you can accidentally bang two rocks together, make a sharper rock, and make the connection that a sharper rock is going to help you somehow, even if you've never seen anyone do that before. We might be able to say it's possible if we run an evolution simulation and just let the AI make constant mistakes until it reaches some selection criteria, but I don't know if that's necessarily the same thing. We could also say humans only know what we know because people survived to tell others their accidental discoveries, but that still feels like it's missing a piece. I'll say for now that machines can do this one.

4. Probably most important: the thing has some form of self-awareness. This doesn't necessarily mean to the degree of humans, or even passing a mirror test, but it has some sense of self, even if primitive. It can make decisions based around itself vs. everything else. If we want to tighten the definition further to eliminate most non-human animals, we could say it needs full self-awareness of what it is, unless intentionally made to believe it's something else (or misled/not given enough or accurate information to reach an accurate conclusion, but it should still know that it's *something* and be able to do stuff with that information).

From that we can probably eliminate language models, since while they're complex internally, they aren't really doing any "thinking" or decision making that isn't explicitly designed. All they do is transform a set of inputs into a set of outputs. Training just makes the outputs more consistent and closer to what's expected by forcibly tweaking how inputs are processed (weights and biases in a neural net, for example). A language model is entirely reliant on external input, only functions when it needs to process said inputs, and only performs a transformation to produce output based on how it was specifically adjusted and the data that was fed into it (trained).
It isn't really doing any form of thought or real decision making in between that isn't specifically designed. While it can be argued our brains do similar stuff (transforming inputs to outputs), they also do a lot in between... a language model isn't going to stop and think about what it's going to produce and question whether the output it's giving is fully appropriate (we can get closer to this with the addition of an adversarial model to check outputs and retrain, but we still hit similar questions/issues with that). A language model produces outputs entirely dependent on inputs that can be associated with outputs in its training data, and it can't really make conclusions that aren't already somewhere in that training data. It also doesn't have a true sense of self (just a sort of fake one produced from associated input/training data... as in, if "what are you?" "I am a walrus." goes in, then "I am a walrus" is just gonna pop out when asked "what are you?" It's not really thinking about what it is, it's just producing a response to the given input. It can get complex, but it's not really going beyond that/an association mash-up. It doesn't think that it itself is a walrus or even know what a walrus is, it just knows that "I am a walrus" is an appropriate response.) We could argue that we work in similar ways to most AIs (actions based on a lifetime of training), but there's still a fundamental difference that's hard to pin down. I think maybe the sense of self and the ability to perform independent (somewhat logical) thought and decision making are probably the main ones... I guess. It's hard to explain and to design explicit tests/conditions for, but it falls into "know it when you see it."
Source: YouTube, "AI Moral Status", 2022-06-29T17:4…, ♥ 12
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
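
For reference, a single coded comment like the one above can be held in a small record with the four coding dimensions plus the coding timestamp. The Python sketch below only illustrates that shape, assuming the dimension names shown in the table; the CodingResult class and its field names are hypothetical and not part of any tool described here.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    # One coded comment, mirroring the four dimensions in the table above.
    responsibility: str   # e.g. "none", "developer", "company"
    reasoning: str        # e.g. "mixed", "consequentialist", "deontological", "unclear"
    policy: str           # e.g. "none", "liability", "regulate"
    emotion: str          # e.g. "indifference", "fear", "outrage", "approval", "mixed"
    coded_at: datetime    # when the comment was coded

# The record shown in the table above:
example = CodingResult(
    responsibility="none",
    reasoning="mixed",
    policy="none",
    emotion="indifference",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:53.388235"),
)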
Raw LLM Response
[ {"id":"ytr_UgwPkGrdpXnJLri1g4h4AaABAg.9crfNhRewqT9cvR0tZ7heB","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytr_UgyPJ7ryYqgj8-1epnF4AaABAg.9creuhTbFpJ9csHQIdJW8S","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytr_UgyRiQThkpf-0N89IU54AaABAg.9crLrBq6xpN9csjFb3FWyq","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytr_UgyRiQThkpf-0N89IU54AaABAg.9crLrBq6xpN9cssFHdtbqC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytr_UgyRiQThkpf-0N89IU54AaABAg.9crLrBq6xpN9ctIVcB0V7K","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytr_UgyRiQThkpf-0N89IU54AaABAg.9crLrBq6xpN9ctXdQVJ3mg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytr_UgzkbsI-hTociEX8WTF4AaABAg.9crJVI8NFir9cslijd4N7X","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_UgwidJFweu2TwZR1p2N4AaABAg.9crG3zoO6eB9csqYz6p87q","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"}, {"id":"ytr_Ugwmkluq8SwRvdPtmap4AaABAg.9cqvGpDrSwV9crz9sdxpZA","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytr_Ugwmkluq8SwRvdPtmap4AaABAg.9cqvGpDrSwV9cs3lkL9Ywl","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"} ]