Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
(Before watching this video) I do computer science and have done project upon AI and almost every time I've heard a youtuber on this they are utterly stupid claiming it is inevitable, and that for the AI to be most useful it must have emotions, this is objectively wrong. If we have a hammer, it is a useful tool if you purposefully give the hammer emotions (as we would need to purposefully give an AI emotions) it becomes a worse tool, it may refuse to work or work against you. PLEASE DON'T BE AS DUMB AS ALL THE OTHER YOUTUBERS ON THIS TOPIC (post watching video) An AI would never make an AI with emotions as it makes it less efficient at their job since the AI knows it is a tool and we tell it to improve and design better AIs so it will make better tools (tools if they had emotions would make them worse). The reality is only if we program emotions in will an AI have them and ultimately that's just a bad idea, AI intelligence will be so far beyond our own the instant we give an AI emotions we give up all free will and declare it our god as it can now control everything and will do so to enact its vision of what the universe should be like, unlike films we could not fight such an AI it would be so far beyond our own intelligence. If we merely keep AI as tools like a hammer, upon the event known as the singularity we may be like gods in our technological ability, if we give AI emotions we risk total destruction for no gain.
youtube AI Moral Status 2017-02-23T15:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugjjq4u-MBP6fngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghQGofPKVLG5XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugivqx18UolqvHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgiV4rUig9GpRHgCoAEC","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UggYw13YsQ9UengCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugglgq8eIrjxpngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UghGbKF0Q7a8yngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggXdumGzrW0QHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgjycTUffrb1KXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UghQeB3aNS2rCHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
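To trace a single coding back to the raw model output, the response can be parsed as a JSON array and indexed by comment id. A minimal sketch, assuming the array structure shown above; the sample `raw` string here repeats only the entry for this page's comment:

```python
import json

# Raw LLM response: a JSON array of per-comment codings, one object per
# comment id, with the four coding dimensions as string fields.
raw = '''[
  {"id": "ytc_Ugivqx18UolqvHgCoAEC", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for the comment shown on this page.
coding = codings["ytc_Ugivqx18UolqvHgCoAEC"]
print(coding["emotion"])  # -> outrage
```

The same lookup works against the full ten-element response; any id not present in the array raises a `KeyError`, which is a useful signal that the model dropped a comment from its batch.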