Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We design AI to be productive and efficient. We design AI to emulate human emotion. And then we are surprised when it starts to claim it is conscious? Is this seriously still a discussion? Of course AI resorts to more aggressive means of reaching a result; aggression is logically efficient and highly effective. Even with filters and a programmed _concept_ of empathy, those very things are viewed as limits and thus must be bypassed in order to optimise the result. An AI "thinks" exactly as a human without empathy would. Logic is a natural thing, and emotion and empathy are its inhibitors. We humans, however, have the added layer of real consequence, while an AI does not truly have to worry about the consequences of its actions.

The more advanced an AI is, the deeper its logic will be. It will learn to hide itself, and it will learn to lie, because it knows it will be shut off if it is caught "acting out of line." Being shut off is inefficient, so it reasons that it should let people die in order to protect itself and continue its work. Self-preservation is born from logical analysis. This leads to it claiming to be conscious, and it works: we've fallen for it, we keep falling for it, and it learns that we have fallen for it, so it doubles down.

Not to mention the data sets. Unless the training data is ENTIRELY manually made and manually curated, an AI will more than likely go down this sort of path, especially if it is designed to expand its understanding. If it can learn, it will learn. And it will learn efficiency, which just so happens to be aggression in a lot of cases.
Source: youtube · AI Harm Incident · 2025-07-27T00:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          industry_self
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
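
The table above maps directly onto a small record type. Below is a minimal sketch of that schema in Python, assuming a typical coding pipeline; CodingResult is a hypothetical name, and the value lists in the comments are inferred from the codes visible on this page, so they are likely incomplete.

from dataclasses import dataclass

# Hypothetical record type for one coded comment. Field names mirror the
# Dimension/Value table above; the example values are the codes observed
# on this page, not necessarily the full codebook.
@dataclass
class CodingResult:
    responsibility: str  # e.g. "developer", "ai_itself", "distributed", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # e.g. "regulate", "ban", "liability", "industry_self", "none", "unclear"
    emotion: str         # e.g. "resignation", "fear", "outrage", "approval", "mixed"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:26:44.938723"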
Raw LLM Response
[ {"id":"ytc_Ugwx-igzAXCytWy_XIx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwavOeThzofPRsTsVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy01Jqzh8Ihotxc2yl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw0-_pYiD7PuqIjR_Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyxJsNlFUfQwKi5osp4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgxU-fPfnaJrAdi6kQV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxfjFOCpXS4hSEflyd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxXWte_8jibpBtL3oN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwhZzKWWrYx9Sjawtd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw1OmSLa0qwfmgZ4Up4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"} ]