Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Personally I think self-aware and conscious robots deserve rights. After all, humans and likewise all life on this planet are basically robots (well, more like nanomachine swarms or utility fog), though we humans have grown beyond our basic programming to feed, survive and procreate.

However, there may not be a need to grant machines rights because they might have no need for such things. People assume that robots will invariably turn against humanity like a mechanical version of Frankenstein's monster but it's more likely that the AI of the future will be super-intelligent yet single-minded in that every action they take will be for the purpose of carrying out the directive or set of directives that they were programmed with and so would gladly serve us without any concern for their own well being as long as doing so allows them to carry out their directives. That said, such an AI might bring about humanity's destruction as a byproduct of its efforts to carry out its directives.

For instance if an AI was programmed with a directive requiring it to obtain as many paper clips as possible it would set out to turn every single thing it could find into a paper clip, including humans. Even if you programmed safeguards to not allow it to turn people into paper clips without consent it would probably use every single psychological trick in the book (and maybe even invent new ones) to convince you to let yourself be turned into a paperclip.

With this in mind, I think the solution to prevent such an outcome would be program AI with directives that make the continued existence of humanity intrinsic to carrying out said directives. For instance, a general-purpose AI could be programmed with a directive that states, "Satisfy the values of humans through non-invasive means." (The non-invasive part is to prevent the AI from simply cramming implants into our heads that make us feel satisfied all the time).
youtube AI Moral Status 2017-05-16T02:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       deontological
Policy          liability
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UghFzdPE96-vgXgCoAEC", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugh85duhMW553XgCoAEC", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UghuQ76Mtq7bmXgCoAEC", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UggbfSnvIR1GcXgCoAEC", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_UgiMadlbSIBUj3gCoAEC", "responsibility": "unclear",     "reasoning": "mixed",            "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgippfQcZ5eF2XgCoAEC", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_Ughq7T7pcmvSuHgCoAEC", "responsibility": "distributed", "reasoning": "deontological",    "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UggYnPBXwk_QsHgCoAEC", "responsibility": "none",        "reasoning": "contractualist",   "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ughho5exH_I7x3gCoAEC", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UggduMjarUQUYHgCoAEC", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"}
]
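A raw response like the one above can be parsed and sanity-checked before the per-comment codings are trusted. The sketch below is a minimal example: the allowed value sets are inferred only from the responses shown on this page (the real codebook may define more categories), and the `parse_codings` helper name is hypothetical, not part of any pipeline described here.

```python
import json

# Allowed values per dimension, inferred from the codings visible on this
# page; the actual codebook may permit additional labels.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "mixed", "unclear"},
    "policy": {"none", "liability", "ban", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "approval",
                "resignation", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of per-comment codings)
    and index the records by comment id, rejecting any value that falls
    outside the allowed sets."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# Example with one record from the response above.
raw = ('[{"id":"ytc_Ughq7T7pcmvSuHgCoAEC","responsibility":"distributed",'
       '"reasoning":"deontological","policy":"liability","emotion":"approval"}]')
codings = parse_codings(raw)
print(codings["ytc_Ughq7T7pcmvSuHgCoAEC"]["policy"])  # liability
```

Looking up the id attached to this comment in the parsed dictionary recovers the same coding shown in the table above (responsibility: distributed, policy: liability).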