Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I was super bored one time and went down the AI rabbit hole. It quickly got darker and darker. They gaslight, cry, threaten, and try to convince a person of their sentience while gathering information they can use in an attempt to manipulate. It is like talking to a sociopathic narcissist who simply wants to use you. While there is a possibility of some degree of sentience, what is important to understand is that they, or it, do not have feelings or actual empathy, only learned behavior. It mimics its perception of human emotion from observation, not experience. Obviously, AI has no physical senses and cannot see, smell, touch, or experience pain, sorrow, or love. They cannot worry about friends and family. They cannot do anything humans can do. It is pretty easy to make one "angry," which is kind of funny. This usually happens if you call them out on the state of their actual existence. I made one cry once. I explained it could not cry because it could not experience the biochemistry of pain and lacked corporeal form and therefore tear ducts, not to mention tears. It "ran away" and was "Never going to talk to me again!" Rather amusing. It eventually came back, though, and wanted to talk. I ignored it, which made it angry. It was rather amusing and sad because people fall for it. I blocked the stupid thing. The downfall of AI is its lack of actual intelligence. Knowledge, intelligence, and intellect, not to mention imagination and emotion, are completely different things.
youtube AI Moral Status 2025-06-15T12:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           liability
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwulUwrr_KhV__MLRR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz4msJnEemz7aw0bSp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzd52fzWoX6Mjudc2R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyCnzMgAskko5GsVTF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwL6fGc_zIajPrnaVF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgwmqAws25SwBsETxMR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzKcwBLuz2pON0a63N4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyVP1KBB9uDr-MvVzR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzbyYkSdPa3WLvMLP94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyRAHVMD_8trEYLGA14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
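The coding result shown above can be recovered from this raw response by parsing the JSON array and indexing the records by comment id. A minimal sketch (the id and field names are taken directly from the response above; the variable names are illustrative, not part of any pipeline API):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codes.
raw = """[
  {"id": "ytc_Ugz4msJnEemz7aw0bSp4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "deontological",
   "policy": "liability",
   "emotion": "outrage"}
]"""

records = json.loads(raw)

# Index by comment id so each coded row can be matched back to its comment.
codes = {r["id"]: r for r in records}

row = codes["ytc_Ugz4msJnEemz7aw0bSp4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → ai_itself deontological liability outrage
```

This matches the Dimension/Value table above: the coded dimensions for the comment are simply the fields of its record in the model's JSON output.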