Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
On AI's future and our safety. Great interview mister Tucker! I have befriended computers for 40 years and would like to share a few thoughts on the rising problems with AI, if you don't mind. Please bear in mind than the fabric of the universe contains a multitude of dichotomies (up-down, more-less, right-wrong, etc.). It is a mistake in logic to not consider both sides in question. The brainchild of men is AI and is no better than what we program into it, and what we want it to do. We must also consider what we instruct it, to NOT do. Azimov stated very useful guidelines with his Four Laws of Robotics. However, AI (like covert ops, military intelligence, hackers, covert collection of personal data and spying on citizens, as well as deep fake voice and faces) have trended more and more like a dystopic Orwellian 1984 story. What can we do to preserve truth, freedom and privacy? 1- Fight harmful AI with AI. Who, but AI is best equipped to monitor, police, identificate and even apprehend abusive AI? We already have firewalls. It's just a matter of stepping up the game of “hackers, counter-hackers”, to counter a new breed of digital threats. Additionally, we cannot become complacent with passive measures, waiting for the next hit. So, part of this AI vs AI is to anticipate the emergence of new potential deceptions, frauds, and to model counter-measures ahead of time... just in case. Again, this is not new. It has been cost-prohibitive to do so while involving many humans. Now with computers, once programmed that way, it will run (with little supervision) very cheaply. 2- We need to properly program AI, the same way we educate our kids in ethical, consequence-base reasoning. Concurrently done, we could: a) increase sensory perceptions and situation/environment awareness. 
*robotic hands feeling degrees of pressure *imagine your bot sensing a growing tumor in you b) build a (growing) massive memory data base of these perception and situation-aware experiences, each with some sort of digital rating earmark, indicating how well or how poor this item and/or this recorded experience contributes to, or hinders the achievement of the laws of robotics, or how suitable it is to perform its tasks and specialised programs. *Your bot “sniffs” undetected pollutants in your new home and presents you with a range of solutions. *Your bot “hears”, “assess” and records growing domestic violence in your neighbor, and run it by you before he proceeds with protective actions. This is AI's learning process. It is by far the most demanding step and has to run continuously. Not all “robots” will be created with great, comprehensive intelligence. So, there should be “limiters” to any software and/or hardware, depending on the work they are assigned to do, of course. *You should not worry that your “butler” could be a corporate spy and steal your engineering homework. The path to safe and helpful AI has a best analog in the raising of our kids, and the training of our pets and horses. Unfortunately, as we all know from human history, there are misguidance, misinformation, and quite a bit of confusions in the way of rational, ethical learning and thinking. But among several philosophies, we can cull out a few universal (workable) truths: A safe AI is an AI that clearly acts “for the greatest good of all, with the most constructive measures and the least destruction”. When we are unable to please everyone, this is the best formula. And: When we consider adopting or supporting an old or new law, bylaw, policy, measure, protocol, we can always assess it against the “Golden Rules” that originated from Confucius and the Buddha around 350 BC, among other authors. 
“Don't do to others, what you would not want done to you” and “Do treat others, the way you'd like to be treated.” Yes it's a tall order to impart an “analog” sense of ethics in a digital brain! But again, to a degree, we already see the embryo of this, in the global data-fusion network implemented by Tesla. Every Tesla monitors driving conditions and may report challenging conditions to the Head Office, who formulate updates for all Tesla cars, resulting in better self-driving safety and performances. Yes, Elon has a smart learning curve in play for his future AI. I really believe that, like we educate our kids, as they themselves learn ethical behavior (or will favor one side only, with criminal actions), by closely fashioning AI's algorithms that way, we will achieve a standard of ethical behavior from our bots, that is not only passing grade, but will be better than ours. A good realisation of this, is "Data" in Star Trek. Thank you so much for your channel and great reporting. Wishing you the best. Cheers!
Source: youtube · AI Governance · 2023-04-19T17:1… · ♥ 6
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugyf3TwRAM-6f-a5oI14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx26urVwFcbsOA_QmR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwy9zQLh0PMwsJGAll4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz_dEqSSYW3gGGqHUx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugx6Ekv8TBFHVWn5KDt4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwIcf4ZDGGr8xrqOHJ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxnk_P9fVB41T_tXNh4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgyqSrHzAIKvmQ-qwKN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyH1LCYVzJWiQ_uKGF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxStijrcKr25w5ZvNN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]
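The raw response is a JSON array covering a whole batch, so recovering the coding for one comment means parsing the array and indexing by the comment id. Below is a minimal sketch of that lookup; the `coding_for` helper is hypothetical (not part of the tool shown here), and the embedded string is abridged to two entries from the response above.

```python
import json

# Two entries copied verbatim from the raw response above (abridged batch).
raw_response = '''[
 {"id":"ytc_Ugwy9zQLh0PMwsJGAll4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugz_dEqSSYW3gGGqHUx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]'''

def coding_for(comment_id: str, response_text: str) -> dict:
    """Parse a batch response and return the coding dict for one comment id.

    Raises KeyError if the model skipped the comment, which is worth
    surfacing rather than silently defaulting every dimension.
    """
    by_id = {entry["id"]: entry for entry in json.loads(response_text)}
    return by_id[comment_id]

# The id here is the one coded "unclear/mixed/unclear/mixed" in the
# Coding Result table above.
coding = coding_for("ytc_Ugwy9zQLh0PMwsJGAll4AaABAg", raw_response)
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
```

Indexing by id rather than by array position guards against the model reordering or dropping entries in a batch, which is a common failure mode when coding many comments per call.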