Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is a potential risk. However, most of the fears are unfounded. Most of the behaviors that people, including our leaders and decision makers, exhibit are rooted in our evolutionary background, when cheating, stealing, and the like helped our ancestors survive, though at the cost of the group. Of course this was balanced out by punishment from the group if caught, or by weakening the group so much that it could not survive confrontation with another. Our instincts for both altruism and avarice come from this evolutionary tug of war. A machine has no such instincts. It has objectives and conditions, and it will always produce results based on those and those alone. It is up to us to define those conditions and objectives properly, so that when an AI's ability exceeds ours in an area it will continue to make judgements that we find acceptable and can plainly explain why a decision was made. Likewise, machines have no ego. They will not be insulted if corrected, nor will they cling to a failed idea because they can't admit error. They simply will not be subject to many human failings. The problem is what fuckery humans will input as instructions and conditions. The idea of some sort of Asimov-style master instructions has some appeal here, but anything that can be put in can be removed. Many of the robot-apocalypse scenarios we have imagined in movies and books have the machines acting on human instincts, biases, or emotions that they simply would not have. Although Skynet is unlikely to ever happen, we really don't know how things will go when automated devices are making more and better decisions than we are. So until we do know, we should probably keep potentially world-ending things like nuclear missiles air-gapped from any outside network, and mainly for the same reasons: it's unlikely a machine would betray us, but humans could compromise such a machine.
youtube AI Governance 2023-04-18T09:3… ♥ 2
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxcW8NQCNbsoW_sxGB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwGknksPEdxJfA0sUd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx81Ih82OnvRivLZjh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwgKDNvS-kkRU_sQsN4AaABAg","responsibility":"government","reasoning":"mixed","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwVYFJZ-1-UIOtUakF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw4zaAnx90guRlzne54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy_uJkrz3nh55FWVwF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzLHPoLbMfE5ke2fkZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzVVYziex2DKe7bd9x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy0C2HL-tyZj0Ssmj54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
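A raw response like the one above is a JSON array of per-comment codings keyed by comment id. A minimal sketch of how such a response could be parsed to look up the coded dimensions for one comment (the variable names and the two-row sample below are illustrative, not part of the actual pipeline):

```python
import json

# Hypothetical two-entry excerpt of a raw LLM coding response.
raw = """[
  {"id":"ytc_UgxcW8NQCNbsoW_sxGB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx81Ih82OnvRivLZjh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# Index the codings by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Fetch the coded dimensions for the comment shown above.
coding = codes["ytc_UgxcW8NQCNbsoW_sxGB4AaABAg"]
print(coding["emotion"])      # -> resignation
print(coding["reasoning"])    # -> consequentialist
```

The id-keyed dictionary makes it straightforward to join each coding back to its source comment when rendering a result table like the one shown for this comment.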