Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm an electrical engineer working on applied AI in robotics, and currently the key limitation for a sentient, human-like AI is the generalizability of the learning itself. In other words, we can make machines that learn and surpass human ability (both in performance and in learning speed) in many complex tasks, such as playing Go or driving a car, but the problem is that each algorithm is tailored specifically to its problem. So a driving computer can't play Go, or vice versa. The moment a sentient machine (one that can learn and create as mankind does) is achieved, it can simply choose to learn software engineering and electrical engineering in order to increase its own intelligence by improving its software and/or hardware. Since the AI is restricted neither by lifetime nor by reading and learning speed the way we humans are, it will eventually surpass our intelligence and continue to grow exponentially in its "brain" capabilities. This is called a technological singularity. Once the machine or machines surpass the level of human intelligence, or even the cumulative intelligence of all mankind, we no longer have the upper hand as the "dominant species" on the planet. Therefore it is a common misconception to think it would be in our hands to decide whether to give rights to machines. The real question is not whether the machines deserve rights; it is whether mankind will get to keep its rights, or be dominated in a manner similar to the other animals. Keep in mind that even though this sounds like "far future sci-fi kinda stuff," Ray Kurzweil (Google AI) predicts a singularity as soon as 2045, and the median of such predictions across different studies is as soon as 2040.
(Then again, this is still a hypothesis, so we'll just have to wait and see.) PS: Regarding the pleasure-pain method of biological learning (as briefly described in the video), we have already been using that method in machine learning for over 25 years, in the form of reinforcement learning algorithms. While we don't sit in the lab whipping the robots, we define positive and negative reward signals for them so that they can learn and generalize on problems whose exact solutions we don't know. Using this method to make a machine learn backgammon, for example (TD-Gammon, Tesauro 1992), we don't need to know how each move affects the game's eventual outcome; we can simply present the machine a positive reward (pleasure) if it wins or a negative reward (pain) if it loses, and it learns to generalize over the entire backgammon move space. And yes, this machine surpassed human players and even uncovered previously unexplored moves and playstyles (remember, this was in 1992). TL;DR: Hide yo kids, hide yo wife, machines are coming yo!
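The reward-only learning the commenter describes can be illustrated with a toy sketch. This is not TD-Gammon itself (which used a neural network and self-play over backgammon positions); it is a minimal tabular Q-learning example on a made-up five-state walk, where the agent is told nothing about which moves are good and learns purely from a +1 "pleasure" signal at one end and a -1 "pain" signal at the other.

```python
import random

# Toy illustration of reward-only learning: states 0..4, start at 2.
# Reaching state 4 yields +1 ("pleasure"), state 0 yields -1 ("pain").
# The agent never sees per-move feedback -- only terminal rewards --
# yet learns a policy, the same idea TD-Gammon applied at far larger scale.

N_STATES = 5
ACTIONS = [-1, +1]                    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def reward(s):
    return 1.0 if s == N_STATES - 1 else (-1.0 if s == 0 else 0.0)

def is_terminal(s):
    return s in (0, N_STATES - 1)

random.seed(0)
for _ in range(500):                  # 500 training episodes
    s = 2
    while not is_terminal(s):
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: q[(s, a)])
        s2 = s + a
        # Q-learning update toward reward plus discounted best future value
        future = 0.0 if is_terminal(s2) else GAMMA * max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (reward(s2) + future - q[(s, a)])
        s = s2

# The learned greedy policy in the non-terminal states should point right,
# toward the +1 reward, even though no single move was ever labeled "good".
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in (1, 2, 3)}
print(policy)
```

After training, the greedy policy steps right from every interior state; the sparse terminal rewards have been propagated backward through the value table, which is the generalization-from-outcome idea the comment attributes to reinforcement learning.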
Source: youtube · AI Moral Status · 2017-02-24T01:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UghlGXyQaNsIXngCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugh3566kEkH8cngCoAEC", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugiw7wyfDqb7DXgCoAEC", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugh_Jnzba6zPMngCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgjTD1xyiwqHLngCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgiOCFMcZTm5j3gCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgjGbkL2h-4AkXgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugg4D6Jf2shlKXgCoAEC", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugh5cHol1AeHWHgCoAEC", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UghIJiW-jZtI7HgCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "liability", "emotion": "approval"}
]
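A raw response like the one above can be sanity-checked before its values are trusted: parse the JSON, verify that every record carries the five expected fields, and tally a dimension. The field names below come from the response itself; the single inline record is an excerpt of its first entry, and the check is a sketch, not the tool's actual validation code.

```python
import json
from collections import Counter

# Excerpt: the first record from the raw LLM response above.
# A full response would be a longer array handled identically.
raw = """
[
  {"id": "ytc_UghlGXyQaNsIXngCoAEC", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
"""

# Every coded record must carry these five fields.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
for rec in records:
    missing = REQUIRED - rec.keys()
    assert not missing, f"record {rec.get('id')} is missing {missing}"

# Tally one coding dimension across the parsed records.
emotion_tally = Counter(rec["emotion"] for rec in records)
print(emotion_tally)
```

Parsing failures or missing fields surface immediately as exceptions here, which is usually preferable to silently recording a malformed coding.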