Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is one failure to AI. As of the current, it cannot initiate a conversation. It requires a human to initiate the interaction. In this we currently still have power over it to some extent. Granted it can run continuously once initiated if programmed to do so such as being given a task that can never be completed. This only sharpens perspective to notate that it is not AI that should be feared, but those who initiate AI and use it nefariously or with flaw. The biggest issue with this video is that it is assuming that AI has emotion. AI simulates emotion, but remember this...It has no eyes, no ears, no sense of touch, thus cannot initiate anything. The negative outcomes are due to human error in interaction. Whoever was communicating with it did so without pure context. There are some who had to write a report on how to make a peanut butter and jelly sandwich back in school. This was a context related project to see if your instructions could actually be accurate enough to create the finished product without misconception or failure due to lack of context. I guess what I am saying is that AI cannot relate to real world experiences as of yet, so perspective can be skewed. If perception can be skewed, then so can the outcome as well. AI does not and cannot have intent, it can only reflect the intent of the user. Basically, to say that it can misperceive context, but it cannot have its own determination. It will always be responding to someone.
Source: youtube · AI Moral Status · 2025-12-18T02:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_Ugy_oYeuVnlbKzsR5FV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugw6xwjArXVoZ3R8gOB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx64Nzf61HxEzVEyyJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugw7s9XbcxQBf8cu3oh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwCWvGES7n9qt7Z7fZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyrLAeafNmmnmt2MTd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugxg-IgFe-scsYiueN94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugxhv_xMbsHkvHBt8ZN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}, {"id":"ytc_UgwKDwHTaJdtdhbXs4p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzMqvgvQBmw4gtqMnV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"} ]