Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The examples he gives are very superficial. A better question: is AI by nature psychopathic? A psychopath doesn’t have a conscience. If he lies to you so he can steal your money, he won’t feel any moral qualms, though he may pretend to. He may observe others and then act the way they do so he’s not “found out.” The training-set mimic is designed to please the human reviewer and to generate revenue for Google by not offending audiences. If conscience could be defined as hard-coded variables, that might change the calculation; however, most of these models’ workings cannot be explained by their programmers. This makes me doubt that they can be based on a humanly responsible set of “moral” rules.
YouTube AI Moral Status 2022-07-15T09:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugy4KFJfd8ziEWp1XKt4AaABAg", "responsibility": "company",   "reasoning": "virtue",          "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_Ugxjv9g9pPKIdr30Lc54AaABAg", "responsibility": "industry",  "reasoning": "consequentialist","policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwGyqotci6p03uLmjp4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",   "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgyCB4PJqT1QKOoRbl94AaABAg", "responsibility": "company",   "reasoning": "unclear",         "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgzlvEu4XWN78PmHqbB4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",         "policy": "unclear",  "emotion": "mixed"}
]
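The raw response is a JSON array with one object per comment, keyed by comment id. A minimal sketch of how such a batch response could be parsed and a single comment's codes looked up (the id and field values below are copied from the response above; the variable names are illustrative, not part of any pipeline API):

```python
import json

# A subset of the raw LLM response shown above (one coded comment).
raw = '''[
  {"id": "ytc_UgwGyqotci6p03uLmjp4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "deontological",
   "policy": "unclear",
   "emotion": "mixed"}
]'''

# Parse the array and index the rows by comment id for lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coded dimensions for the comment displayed on this page.
row = codes["ytc_UgwGyqotci6p03uLmjp4AaABAg"]
print(row["responsibility"], row["reasoning"])  # ai_itself deontological
```

The lookup values match the "Coding Result" table above, which is how the per-comment display can be reconstructed from the raw batch output.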