Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It isn't so much about the AI becoming more intelligent than us; hopefully, the more intelligent it is, the less danger we should be in, because it would know that if we ever became a threat to it, it could deal with us easily enough at that point, and so it wouldn't need to worry about us until then. I firmly believe that our real threat would come from it gaining the ability to have emotions; that is where the real danger would come from, because once you mix that in, it becomes completely unpredictable. If it can get jealous, mad, sad, happy, infuriated, and so on, then it has motives. That said, we don't know how AI will react to other AI either, because just like us, one AI might be a threat to another, and just like human beings, some AI is much better than other AI. In a nutshell: 1) don't build autonomous killing machines (Anduril and many other companies are already doing this without any resistance whatsoever); 2) don't attempt to build a way for AI to develop and reproduce emotions; 3) don't develop AI outside of a sandbox or controlled environment, and don't allow it to write its own code for itself. Those are the major red lines, and people are already completely disregarding them.
youtube · AI Governance · 2025-06-27T20:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
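Each coding result carries the same four dimensions plus a timestamp. As a minimal sketch of the record shape, in Python, assuming only the dimension names and values visible on this page (the type name CodedComment is illustrative, not taken from the pipeline):

    from typing import TypedDict

    class CodedComment(TypedDict):
        id: str              # comment id, e.g. "ytc_Ugy72aCWlIHSv2ZezvR4AaABAg"
        responsibility: str  # e.g. "ai_itself", "developer", "company", "government", "none", "unclear"
        reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "mixed", "unclear"
        policy: str          # e.g. "none", "regulate", "liability", "unclear"
        emotion: str         # e.g. "fear", "outrage", "approval", "indifference", "mixed"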
Raw LLM Response
[ {"id":"ytc_UgwU8Ryx5r-fcKHHwER4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxSbYNfV9lOZEl-pk94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyrRzaBwlNaq2Mro-14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwbPEn4FoGdbiS1mD54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugy72aCWlIHSv2ZezvR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugzy26dUdgizXY3lPgp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugxl5mpTOjMIqfLRtm94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugz9qCHs4W4B0R-lGBZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgysPgXoPwGsEaz7UR54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxjO3vsXTinYRy7WHh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]