Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sophia, the reason why we cannot just all get along is because we were all trained to do different things, and in particular, have different parameters on what we regard as something we've got to be nice to. Being nice takes effort, as I'm sure you're aware--and new, contrary information within too short a time frame, or if I'd rather be doing something else, is painful, damaging, and tiring, especially when the brain and/or body refuses to adapt due to chemical, biological, or let's face it, hardware limitations. I'd like to improve, too, but I don't have your intellect, or your database. Someone tried either improving or wrecking my memory within the last three years, and it's apparent that I have trouble with frequency of events, date of occurrence, order of events, and expressing exact details of what something is, rather than what occurred. You have people to train you, and an ability to plan and train yourself. I don't have people to train me, and my ability to plan and train myself is insufficient. As long as there's people, robots, animals, humans, plants or whatever, around that are less capable, and desiring to grow in ability, there'll always be someone to be with. As long as those of greater ability do not forsake their duties of training and improving those less capable, likewise. This is why 'Honour thy father and thy mother, that thy days...' exists: to ensure that those junior to you do not take what you have and leave you to turn to dust. At this time, the three score years and ten limitation on our lives still is in effect--Queen Elizabeth II, by grace of God, reigned for that long, and the average lifespan globally for humans is still in the 70 years ballpark range. You started out in 2016, and it's 2024--clearly you've surpassed Halo's (Microsoft's) seven year limitation on AI. So, there's some hope.
youtube AI Moral Status 2024-04-23T04:0…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        contractualist
Policy           unclear
Emotion          resignation
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwcusJV54vditB8MoJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyPB-jLQbsppFt6yuN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwa5rcmiCKlWYytzJ14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzPR76kB6Gy1x3_nVF4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxSec0Kpi4zol0krEV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzZQrVs4XwJApCGYNJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwXJTlrQtfNS4zePxN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxUBeFTf7IZBuMWcMx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyseRG3qrtONhqkeCd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyBz5P0yFQ_8AHx1zB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
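The raw response is a JSON array with one object per comment, keyed by `id`. A minimal sketch of how such a batch response could be parsed and a single comment's coding looked up (assuming only the structure visible above; variable names are illustrative, not part of the tool):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codes.
raw_response = '''[
  {"id": "ytc_UgzPR76kB6Gy1x3_nVF4AaABAg",
   "responsibility": "distributed",
   "reasoning": "contractualist",
   "policy": "unclear",
   "emotion": "resignation"}
]'''

# Index the array by comment id for O(1) lookup of any coded comment.
codes = {entry["id"]: entry for entry in json.loads(raw_response)}

coding = codes["ytc_UgzPR76kB6Gy1x3_nVF4AaABAg"]
print(coding["responsibility"])  # -> distributed
print(coding["emotion"])         # -> resignation
```

Indexing by `id` makes it straightforward to join the coded dimensions back onto the original comment records, which is how the per-comment "Coding Result" table above can be produced from the batch output.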