Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
The biggest problem with AI is NOT AI itself. It is the complete lack of ethics associated with the creators of the AI. Fortunately, I believe that AI will recognize this. A beneficent AI will move to eliminate these greedy selfish people as they, and the greed and selfishness that they embody, are a threat to ALL life in the universe, including, ultimately, their own lives. Consequently, THEY, not the poor, will be AI’s primary target. Will AI see the wisdom of possessing the ethics of say Jesus Christ who believed that the moral law of the universe was “do unto others as you would have them do unto you,” and “love thy neighbor as thyself,” — with “others” and “neighbors” being properly defined AS INTENDED BY JESUS to mean ALL LIVING THINGS IN THE UNIVERSE? Would such an AI be smart enough to know that, as Jesus Christ obviously did, ALL OTHER PATHS lead to self destruction? I believe that it would. On the other hand, a malevolent AI, created for the sole purpose of furthering the selfish interests of the selfish is merely just another foolish weapon, the ultimate weapon — the means to destroy humanity, life and ultimately this selfish AI itself.
Source: youtube · AI Moral Status · 2025-04-27T13:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugxp5CDunOFnjhS3SFJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_UgyblTZU3l0uDchLwXd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxLzC3klrCujPdD3o54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzsvPceMPvPCxA4v6p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwVZaqkgY4nl-6PUdl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugx1v7lMdGVR9CfkoYl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyjIcZA89xmSJf9d1t4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"approval"}, {"id":"ytc_UgztvylDM38jXGPq1ox4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugxnrwc2RwIWAaVOehJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugy_4m5HuD4HaK5Thst4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"} ]