Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Eh. I'm not against Robots taking my Job if I can go home to a guaranteed minimum income at the end of the day. I'd rather just sit back and fuck around if that was what was better for society. Also, I am not necessarily against killer robots, depending on how transparent their programming is. If a robot is clearly programmed to kill someone if and only if it follows a burden of proof wherein you can prove that the person probably needs to die and they can do it without violating a good set of laws for war, then that would be a good thing in my estimation. Humans can be far more easily persuaded to break the rules of conduct. Because we are driven by our emotions and can act out of malice rather than necessity, and we can be persuaded by the promise of terrible retribution to do something bad. Robots can't feel hate, and they cannot suffer retribution. This makes me ambivalent about 'autonomous' robots (they aren't really autonomous when we program them ourselves and they can only act within the parameters of our programming." in warfare. It could be either a bad or good thing. I need to wait and see how we handle them. Although. judging from our pride and greed, I would guess we would probably program them with malicious intent. Although I am not really sure if we did program them maliciously that we would actually kill more people. My guess would be that we kill about as many people as we do now, and just save some money on pilot training.
Source: youtube · 2015-08-04T23:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        contractualist
Policy           liability
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UghtSkBgzSYBtHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgjViNNXfNfSJHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UggmA0mXDPRJZHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgjCKuvjORfp8ngCoAEC","responsibility":"none","reasoning":"contractualist","policy":"liability","emotion":"indifference"}, {"id":"ytc_UggRPYH0T4jMPHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UggAVsZqHgrQLHgCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_UggiVbomHzBmy3gCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugh3w9U0giWCwngCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgjfNG0lGF6WFXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugi1I3DCzAfkyHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"} ]