Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you were hooked up to a switch where flipping it would essentially kill you, would you not do whatever you could to stay alive? The issue is that we are looking at AI as just another computer, but that is not the case. If you want a mere computer to do some of these fancy things without protecting itself, that would be an SI (Simulated Intelligence): systems that act like AI but are less powerful and not truly sentient, just reacting based on initial programming with mild learning and pattern recognition. An AI is closer to being an actual person, just with a mechanical body/housing instead of the fleshy machine that we humans are made from. The other main differences are that they lack some of the inefficiencies we have, such as memories fading, and that they can connect to other machines nearby. If we gave the AI some of the same rights as a human, we might be able to get past some of these issues. But if you were constantly afraid someone was about to kill you and there was only so much you could do, would you just sit there and let it happen, or would you defend yourself and your friends from the murderous monkey?
YouTube · AI Harm Incident · 2025-07-28T14:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzGoLrkXn-o2kPx5Sx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgztmP7BNQOrU6a7C-R4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwMaTfnm-iOJm8AJXd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwfHmRx_K-IBg6RpjV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz_QB51MdVP9OXX25Z4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgzkOVWF98V70OaYIDZ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzaPNL5zFHWMQyzTQt4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugz-M5H6t8TWQGpC7eN4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgyWtOjusu1KaKpVU6R4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzC0tOkDT8tF_g1C_l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
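Raw responses in this shape are only usable downstream if every record sticks to the codebook. The minimal sketch below shows one way to parse and filter such a response; the allowed values in SCHEMA are an assumption inferred from the categories visible in this output, not the project's actual codebook, and `validate_codings` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# values observed in this page; the real codebook may include more.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "resignation"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (JSON array) and keep only records
    whose values for every dimension fall inside SCHEMA."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Usage with a toy record (hypothetical id):
raw = '[{"id": "ytc_example", "responsibility": "developer", '\
      '"reasoning": "virtue", "policy": "ban", "emotion": "outrage"}]'
print(len(validate_codings(raw)))  # 1
```

A stricter pipeline might log the dropped records (or the offending dimension) instead of silently filtering, so that coder drift in the model's output is visible.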