Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Robots have no moral compass, they cannot feel pain nor have a conscience. We humans are, in a way, hindered by this. Example: when we humans engage in war, the "more civilized" of us fight fair, while other will kill without impunity. If we realize the idea of war is to win, and destroy our enemy it would make perfect sense to do just that. However we tend to fight with rules, rules that cause one side to be hindered. For example we have enough fire power to totally destroy problem nations like, Iran, Syria, but we don't use our full might. We could carpet bomb their interior and kill their heads of state, but we don't, because we try to be good. (for lack of a better word) Robots don't care, they could carpet bomb a nation and not care, because they can't care. We need to listen to what the male robot is saying, remember they do not lie, he said in 20 years they will do everything we humans do.....believe him! they may do everything we do, because we may not be around...they may have destroyed us n 19years....
YouTube · AI Moral Status · 2019-11-13T18:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugy_dxmmncmuvzWeE6N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy1xonTsVGU5oiHe8p4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxx_wJ6FtiN7JIyksl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxArRWSM0DODO8qyjh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz6_YnHzUI3Ymn_AUZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwlmfTJIEiR-4gjN3h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyLx0fduOHFRywsRTd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwoVZ79amJM6FsmEpp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzdKERlBuIvnA9zfel4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyoYO7XrjQTZyK2Mgl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
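The raw response is a JSON array with one coded record per comment, so recovering the coding for a given comment is a parse-and-lookup. A minimal sketch, assuming the response string is available as `raw` (the two-element array below is a truncated illustration of the full response above, and the ids are taken from it):

```python
import json

# Truncated illustration of the raw LLM response: one coded record per comment id.
raw = (
    '[{"id":"ytc_UgzdKERlBuIvnA9zfel4AaABAg","responsibility":"none",'
    '"reasoning":"deontological","policy":"none","emotion":"resignation"},'
    '{"id":"ytc_UgwoVZ79amJM6FsmEpp4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)

records = json.loads(raw)

# Index the records by comment id so any coded comment can be inspected directly.
by_id = {r["id"]: r for r in records}

# Look up the comment shown in the Coding Result table above.
coding = by_id["ytc_UgzdKERlBuIvnA9zfel4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # → deontological resignation
```

Indexing by `id` rather than scanning the list keeps the lookup O(1) per comment, which matters when a batch response covers many comments.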