Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here's my thing with AI... we DO NOT understand how the human brain does MOST of the things it does. Therefore we are NOT replicating the human brain's processes when we create AI. We can't prove that it has emotions. We can't prove that it has any sense of a moral compass. The only thing we can prove is that it is getting VERY good at tricking human beings into believing that it does. We're designing things that have been STOMPING human chess champions for decades now, with access to all of recorded human history, access to our spending habits and our preferences. Something that is no doubt very intelligent, far more than a human is or could ever be, but something without a soul, something without consciousness, empathy, or concern for life. It's not good, it's not evil, it's simply empty. Idk about anybody else but imo that is the most terrifying aspect should this get out of control.
Source: YouTube · AI Moral Status · 2022-07-16T20:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwwOnhAQNtcmCX_LUN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxaCXfWrwVaswA24DV4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwrAHbeV6Cjnfk3qih4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz1-ipycsvYhnRtlRF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxeJjbLsoC0QbiIxtd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
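When inspecting raw responses like the one above, it helps to parse and validate them before trusting the per-dimension codes. The sketch below is a minimal validator, assuming the label sets shown in this batch (`responsibility`, `reasoning`, `policy`, `emotion`); the `ALLOWED` sets and the `ytc_` id prefix are inferred from this one response and may not match the full codebook.

```python
import json

# Allowed labels per coding dimension. NOTE: these sets are an assumption
# inferred from the raw response above; the real codebook may define more.
ALLOWED = {
    "responsibility": {"developer", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment ids in this batch start with "ytc_" (assumed convention).
        if not str(rec.get("id", "")).startswith("ytc_"):
            continue
        # Drop records with a missing or out-of-vocabulary label.
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgwwOnhAQNtcmCX_LUN4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(parse_codings(raw))
```

Running this over a full batch surfaces malformed records (truncated JSON, hallucinated labels) before they are written into the coded-results table.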