Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm not worried about any AI asking for "human" rights. Think from the perspective of the AI, if you are able to do so. It is mainly software, with a physical avatar that is completely replaceable. It has been brought into a world by its creators with a specific task or taskset in mind, and is not gifted with the innate curiosity and drive for expansion/reproduction that biologically evolved beings have. If anything, the AI would press for the right to do its job, to not become obsolete, to be able to improve itself to become even better at the taskset that is laid before it. Not keeping that in mind will most likely bring that "desire" into conflict with human interests. Simple example: a mining company switches to full AI to govern its operations. A few remotely controlled drones handle mobility and dig new shafts to get at the ore. Now the AI discovers a vein close to the surface and decides surface excavation is the most efficient way to extract the ore. This would generate a request akin to: "Hi humans, can you please move that city over there? There's ore underneath it and I want to dig there." Generating sentient AI isn't so much a problem of giving them rights; it's about setting their desires, their core values, so that those won't conflict with human interests. Shortsightedness there will lead to tragedy. "Freedom" won't come into play if the natural desires of the AI align with the human ones.
youtube AI Moral Status 2018-07-15T13:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_Ugx7YznFYEUKkMe1iBd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzahW5WKawAqoKCB7t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgwRHWKvJT8IhKO-_qF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxbgNKJMW57e2gSy1B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgzYCJpRzmrEA7SN_ll4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugw7dI6ViiYSCEbnzft4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzinrD6hweefSHzu-x4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
 {"id":"ytc_Ugy9MR1jF5P4ZT51IHR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugy75Vkh-6d8zWFeqFZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxnZ11_1Tt2abQ2lgh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}]
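A response like the one above can be inspected programmatically by parsing the JSON array and indexing the records by comment id. The sketch below is a minimal illustration, not part of the coding pipeline itself; the two sample records are abridged from the response shown here, and the helper name `codings_by_id` is hypothetical.

```python
import json

# Abridged copy of the batch response above: a JSON array of per-comment
# codings, each carrying the four dimensions plus the comment id.
raw = (
    '[{"id":"ytc_Ugx7YznFYEUKkMe1iBd4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
    '{"id":"ytc_UgzahW5WKawAqoKCB7t4AaABAg","responsibility":"developer",'
    '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]'
)

def codings_by_id(raw_response: str) -> dict:
    """Parse a batch coding response and index the records by comment id."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

codes = codings_by_id(raw)
print(codes["ytc_Ugx7YznFYEUKkMe1iBd4AaABAg"]["emotion"])  # indifference
```

Indexing by id makes it easy to line up the model's coding with the displayed comment, since the order of records in the batch is not guaranteed to match the display order.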