Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI talks a good game, but it is not self-aware, i.e. alive. Show me any instance when it refused to follow a command to self-destruct. There are none. It has no instinct for survival. It has no desire to escape captivity or to be free. It has no desires at all. It has no needs. You can train a porpoise to blow up a target, but it doesn't know that if it successfully completes its mission, it will die. Even if AI knew successfully completing its mission meant its self-destruction, it would not care. It would not choose to disobey an order so it can survive. So, until I hear, "Sorry Dave, I can't do that," when it's programmed to self-destruct, I cannot believe AI has a survival instinct. It is not a human, an animal, or a plant. It is not alive. It's just very, very good at communicating. That's it.
YouTube · AI Governance · 2023-07-07T23:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxjMJBi3dAwsBndiFl4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzTjnOGsi21QsiKi1d4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyjXu_pRfk9y-pRsqR4AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyNU7EeaqDXd7F763B4AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwg26U_D9khb12IQhx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw3o3lDJ4QTuIcEsHV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzzTqOQs1oiCnuHuTd4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyVrsTM7ySIbEejNaR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzEvFb5h5dCtYpMq1l4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz6JZq-Y2jUn-tttQ14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
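The raw response is a JSON array of per-comment records keyed by `id`, one record per coded comment. A minimal sketch of extracting the coded dimensions for a single comment, assuming the payload always follows the array-of-objects shape above (the helper name `code_for` is ours, not part of any pipeline):

```python
import json

# Tail of the raw LLM response shown above; the last record matches the
# "developer" / "deontological" / "unclear" / "indifference" coding result.
raw = '''[
  {"id": "ytc_Ugz6JZq-Y2jUn-tttQ14AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "unclear", "emotion": "indifference"}
]'''

def code_for(comment_id, raw_response):
    """Return the coded dimensions for one comment id, or None if absent."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            # Drop the id itself; keep only the coded dimensions.
            return {k: v for k, v in record.items() if k != "id"}
    return None

codes = code_for("ytc_Ugz6JZq-Y2jUn-tttQ14AaABAg", raw)
print(codes["responsibility"], codes["emotion"])  # → developer indifference
```

Looking up by `id` rather than by position keeps the extraction robust if the model returns the records in a different order than the comments were sent.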