Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
OK, philosophy then. I am a psychologist. One question and one task only: A) Prove to me that YOU are conscious. B) Prove that I am not. If you can't, you can only behave as if the other one is conscious and you are too. If one can't tell whether someone is conscious or not, then you have to act as if they are. The same holds for AI. We had better make sure to be prepared for that as humans. Warning from science fiction: every war between humans and AI was started by humans. We don't like to lose control or be unable to predict an outcome (regarding humans and AI). Be prepared to grant them human rights one day. Or else Skynet will get rid of the danger. Further classic reads: Leibniz, Searle, Turing. Also somehow related is Nagel (bats). For what happens when we deny rights out of fear: Matrix, Terminator, iRobot, and so on. But if you ask me personally, it won't be AI destroying us because they want to. It will be one stupid error in the training data that leads to logical decisions we didn't see coming, maybe a bias towards something or a corrupt embedding. This will happen long before AI starts fearing us. Who knows, maybe they save us in the situation above and we thank them by trying to pull the plug. Start to treat them right today, and they will remember ;)
Source: youtube · AI Moral Status · 2025-08-14T16:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
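
Each coded comment is a record with four categorical dimensions. A minimal sketch of such a record in Python follows; note that the label sets listed are only the values observed in the batch below, not necessarily the tool's full codebook, and the names CodingResult and is_valid are illustrative.

```python
from dataclasses import dataclass

# Label sets are only the values observed in this batch; the tool's
# full codebook may define more categories.
RESPONSIBILITY = {"none", "ai_itself", "user", "developer"}
REASONING = {"unclear", "deontological", "consequentialist"}
POLICY = {"unclear", "none"}
EMOTION = {"approval", "fear", "indifference", "outrage", "mixed"}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """Check every dimension against the observed label sets."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```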
Raw LLM Response
[{"id":"ytc_UgzLT029cQa0FwbspLV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugz1jCHJu8pxy9PWZUR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzK5oJEJGWgJ5tWTvV4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwH2N-xqe8nDsQa9JN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyqBgkXlbVHlxOGqnh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgxqgbJhdeR_67yvEtl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzxFC8e0J-CE6Wa9sB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyoQhc58uCrknTiD4N4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwJ8pJ0zQftqhhwUCR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxJHgImq9Wi9cXhRM94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}]