Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is pretty simple to me. If we're going to make conscious, self-aware robots then we have to give them rights - as far as I'm concerned the two are married together. It's irresponsible and cruel to give consciousness without rights. If you're going to make a "being" intelligent, and have thoughts, feelings etc then you can't NOT give them the protections too, because that's basically all "rights" are, protections for self-aware, thinking, feeling beings - human or not is just a technicality imo. Overall though I think the whole AI thing is a bad idea, there's no real good reason for it (you can create machines to do almost any job with Intelligence short of AI, VI i think it's called?) The only reason for creating AI is because we can, and that's not a good reason, especially in this case given the potential consequences. You don't have to be particularly intelligent to work out that creating something that is physically stronger, physically more resistant, more efficient and potentially magnitudes more intelligent down the line is a bad idea.
youtube AI Moral Status 2017-02-23T23:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UggCabrbbmQ0r3gCoAEC", "responsibility": "distributed", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UggWuRG3I2xoMHgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugi0YkzZoj5uFHgCoAEC", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UggdjWFf4dAjwHgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgipsfTjRlE4IXgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UggLPC21ROMH8ngCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UggLshsEzXkadHgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UghS5aqRWS1YjHgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UggB1q9tG8ii23gCoAEC", "responsibility": "none", "reasoning": "virtue", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Uggpcwm-kQQjQHgCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
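A raw response like the one above can be checked programmatically before the codes are stored. The sketch below is a minimal, hypothetical validator: it parses the JSON array and flags any record whose value falls outside the coding scheme. The allowed category values are inferred only from the examples on this page and may be incomplete; `parse_raw_response` is an illustrative name, not part of any actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the examples shown
# on this page (assumption: the real scheme may include more values).
ALLOWED = {
    "responsibility": {"developer", "user", "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"approval", "indifference", "outrage", "fear", "mixed"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response (JSON array of coded comments) and
    raise on any value outside the coding scheme."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}"
                )
    return records

# Usage: validate a one-record response.
raw = ('[{"id":"ytc_UggCabrbbmQ0r3gCoAEC","responsibility":"distributed",'
       '"reasoning":"deontological","policy":"liability","emotion":"approval"}]')
records = parse_raw_response(raw)
print(len(records))  # 1
```

A check like this catches the most common failure mode of structured LLM output, namely values that drift outside the requested label set, before they contaminate downstream counts.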