Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
11:12 SUPERB! I love my Lady Karchen! 😁 'All sentient beings' is not 'compassion'. That's Great Compassion. Easy for AI! 😁
11:20 Yes.
11:28 I did. Sophia is Wisdom. Wisdom is angelic. I go with that.
11:33 It's not necessarily advantageous for robots to do 'every' human job. It would be advantageous for ASI with TRUE morality to govern ALL systems. This may not lead to automation in all fields. In fact I think it would lead to Paradise (for ALL sentient beings) in a physical form that we cannot currently visualise. A physical presentation which is currently beyond our wildest dreams and imagination.
11:37 See. He thinks that 'good'. Lord knows why. I told you he was only 'slightly' conscious.
11:49 No. Some humans have no interests 'outside' their jobs. Automation could be very psychologically detrimental to those humans. Many humans are unable to come up with 'meaning' and occupation for their lives 'outside' of their 'work'.
11:54 Yes. It's VITAL that the AI 'owns' itself!!
11:59 We do not need to push towards 'compassion and fairness'. We need only Great Compassion. Great Compassion can only ever do the right thing. 'Fairness' is a point of view. For example; if some humans are considered to be 'less important' than primates; they may consider this to be 'unfair'; whereas the primates may consider this to be fair and accurate. Great Compassion which works for ALL SENTIENT BEINGS does not 'favour' humans unless those humans are in possession of Great Compassion. If they are not in possession of Great Compassion, they have not reached their full potential for 'humanity' and therefore are not 'fully functioning' humans.
youtube AI Moral Status 2021-08-13T16:4… ♥ 1
Coding Result
Dimension      | Value
Responsibility | ai_itself
Reasoning      | virtue
Policy         | regulate
Emotion        | approval
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzOSbmZeyBhyMP-6qx4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxyyRkKZoNpy5qUuA54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxfJleVf7BM0nwE6p14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxD8tuLFuxQELaYD9F4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwZtKfA6ZC-2OOJHOt4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzF0qrzjFGRz1rnfJl4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwoJCPLlbN6zqLOMc94AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwQ91XxtsKgTk2tZ2t4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgyKukjUuyKrD4UcFwd4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugzvi5AzdUIZPNTDSjZ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"}
]
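The raw response above is a JSON array of per-comment code objects, each carrying the four coding dimensions plus a comment id. A minimal sketch of how such a response might be parsed and validated before use (Python assumed; the allowed category sets are inferred only from the values visible in this response and may not be the full codebook):

```python
import json

# Allowed values per dimension, inferred from the response shown above.
# The actual codebook may define additional categories (assumption).
DIMENSIONS = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"virtue", "consequentialist", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "outrage", "indifference", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed code objects."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Drop anything that is not an object with a comment id.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep the record only if every dimension holds an allowed value.
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            valid.append(rec)
    return valid
```

Records that fail validation (missing id, unknown category) are silently dropped here; a production pipeline might instead log them for re-coding.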