Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Giving personhood to robots, or AI as it's called today, can sound quite clever to supporters. I think that as robots become smarter and smarter, it becomes harder to manage them, leaving people interested in robotics, civil liberties, and AI a lot to debate. For example, robots with AI should have a choice about having antivirus software installed in their systems. Just like what happened at Johns Hopkins University during Event 201, which then became widespread, very intelligent robots forget things. The more intelligent they become, the more forgetful they will be. If robots designed their own version of the Nuremberg Code on unethical medical experiments, it would be harder for humans to treat them as lab subjects. Keep one thing in mind: we gave vaccine manufacturers immunity from being sued, which, legend has it, was a big mistake. Extremely intelligent robots might find it easier and easier to circumvent the rights of their fellow robots, to the point that they could give themselves some kind of benediction. This happened in the anime Star Caser. Whitley Strieber, the famous writer who wrote Communion and The Last Vampire, said that if he were an intelligent being, he would deceive. What he means is that he would deceive to survive. His logic haunts us all with regard to giving robots legal personhood. Of course we are going to do this, and it is fascinating and terrifying to think about what happens afterwards. Hope this comment helps. Kind regards from Ásgeir in Iceland.
Source: youtube · AI Moral Status · 2022-02-08T14:5…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       contractualist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgxO4c1xElCNYlbTQJx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyahBhPZoj6QqbeoRZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz0odZHuZvZ1yA9iL94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgyoA2n5Jtn7ua1Q8nN4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugz36J5SQkmB_H4W1AZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"unclear"}, {"id":"ytc_UgzoHVejTuGfObqSUed4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_Ugy0KMHiWCNVfwHY3ad4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugzl7FMAtNxQJKr9n814AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgwDMsz3Smy7ZMdzZ_d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwG0HBq9kDm_DAGZVt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"} ]