Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Although, determining what constitutes "organically created opinion" is tricky. It is much like the question "How do you prove a person has free will?" to which the answer is "You prove an ability to have done otherwise." And like the answer to that question, it is unprovable: in the case of free will it would require a time machine, and in the case of AI consciousness it is based in compiled code, which, like our brain being composed of neurons, is a system that becomes something greater than the sum of its parts. But finding the line where it becomes something greater than the sum of its parts may, as with neurons, be impossible.
youtube AI Moral Status 2024-10-02T10:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_UgwZ0lh5ultUtZP7mgN4AaABAg.ADT3NYussNaADw70GLgKW-","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgzEJ937ltyR3nOH-J94AaABAg.ADKr5m_NuKaADPp6q2JW9_","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyLa4TUSHtkbUhgS_B4AaABAg.AA6_6G_3IYHAEi6y26B0pi","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgyP7cvdOfe2v-tSJQ94AaABAg.A9bc9WNym1SABA2fSkGo6S","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugx-17zBD9PPTnmE-XJ4AaABAg.A94zpiJEnmuA95337bEyyS","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugwbe0eG9BoFg9wZfmJ4AaABAg.A7sPfTU08Y6A85YVp1nz8h","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytr_Ugwbe0eG9BoFg9wZfmJ4AaABAg.A7sPfTU08Y6A8642wHtnav","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugwbe0eG9BoFg9wZfmJ4AaABAg.A7sPfTU08Y6A89oP0qBji5","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugxpq4ftNFTFULkL2MF4AaABAg.A7r4rR-EK0VAFXwiLgwbcW","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytr_Ugxpq4ftNFTFULkL2MF4AaABAg.A7r4rR-EK0VAHlOcRhPWvZ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
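A raw batch response like the one above maps each comment id to its four coded dimensions. The sketch below shows one plausible way to parse such a response into a per-comment lookup; `parse_codings` and the "unclear" fallback for missing fields are assumptions for illustration, not the pipeline's actual parser. Only the first record is reproduced for brevity.

```python
import json

# Hypothetical parser for a raw LLM coding response (JSON array of records).
# Field names match the JSON shown in this section.
raw_response = """[
  {"id": "ytr_UgwZ0lh5ultUtZP7mgN4AaABAg.ADT3NYussNaADw70GLgKW-",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict:
    """Return {comment_id: {dimension: value}} from a raw LLM JSON array."""
    records = json.loads(raw)
    codings = {}
    for rec in records:
        # Keep only the expected dimensions; a missing field is treated as
        # "unclear" (an assumed convention, not confirmed by the source).
        codings[rec["id"]] = {d: rec.get(d, "unclear") for d in DIMENSIONS}
    return codings

codings = parse_codings(raw_response)
print(codings["ytr_UgwZ0lh5ultUtZP7mgN4AaABAg.ADT3NYussNaADw70GLgKW-"]["emotion"])
# fear
```

Keying the result by comment id makes it straightforward to join each coding back to its original comment, as the "Coding Result" table above does for a single comment.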