Character-building in robotics: Be afraid, be very afraid

Peter M. Woolford, 30th May 2018

Greetings fellow optimists for the future of the human genus, to use the correct term.

Here is my transcript of a short series of remarks by the creator of Sophia, the first robot citizen of Saudi Arabia, taken from a recent YouTube presentation. He shared the stage with the beguiling machine Sophia, which is endowed with unprecedented powers of facial expression and conversation.

David Hanson, CEO of Hanson Robotics, seeks to paint an upbeat picture of the future of our world, in which the unstoppable onslaught of robot technology will, he hopes, be safely constrained by the inculcation of what amounts to ‘good character’ in these Frankensteinish creations. In essence, it was ‘good character’ he spoke of, though he used no such phrase.

The bottom line is that there are few other words with which to describe the desirable qualities he outlined. Perhaps he felt the use of the rather old-fashioned word ‘character’ would invite ridicule? He also spoke of ‘caring’, a quality he felt could be acquired by developing connections with humans. Judge for yourself the credibility of what he said:


‘We have this A.I. architecture that we’ve been developing with our chief scientist, Dr. Ben Goertzel, who is a world-renowned A.I. researcher, and with this we are looking at an open framework for an Artificial General Intelligence, matching and then exceeding human-level intelligence. That’s our goal. And with this our robots are able to see, are able to learn and remember the interactions, and we think we are able to match pretty well anything humans are able to do.

The key, the philosopher’s stone that will transmute A.I. into true living A.I. is what we call the neural-symbolic hybrid. We bring this together, neural processing, deep learning and then we expose the contents to symbolic reasoning. With this we can have reasoning and goal pursuit while also having deep learning in many areas, deep learning in perception, deep learning in models of the world, deep learning of social consciousness, being able to have a theory of mind, to know what the other person is thinking. We think that we are really onto something here and this is absolutely thrilling.

The key goes beyond this neuro-symbolic hybrid and is modeling the whole organism as a complete unit. Now when we deploy this we would put it on the Cloud. We create what we call mind cloud, so we have basically this brain in the Cloud, a collective unconscious, learning from every interaction with people, so people can opt in, joining the data commons so they continue to own their own data, meanwhile our (own?) brain gets smarter as much as you’ve opted in to allow the use of the data. The benefits accumulated flow back to you, so this is a combination of AI and a new licensing model for how data is used. People keep their own data and they see the benefit of that data, directly.

So, over time we expect that this (machine) will learn how to be human and develop a deep understanding of the human condition. That’s what we want from AI. We think that that is really important because we do believe that machines will match and exceed human genius, the best that people can do. We think that this bio-inspired technique is the path, and many other groups agree with us. They are developing bio-inspired artificial intelligence in groups all around the world.

So emergent creativity, imagination, these are the deeply human aspects that AI has not been able to match, we think that within our lifetimes they will. So machines are growing faster and algorithms are growing smarter. When will they become alive, we don’t know. It could be five years from now, it could be fifty years, it could be this year that we have a true living algorithm, nobody knows.

But what will be the consequences?

This is something we think about intensively inside our organization. Will these kinds of living machines be friendly towards us, when they step out into the world and into our lives? Will they be caring, benevolent, safe, will they look out for the great future that we could share together? I think it really important to consider these issues now before the machines awaken because you can’t introduce conscience and caring into the machines after this happens.

So building these kinds of natural relationships with machines and encouraging them to evolve and asking these questions now, I think this is a part of what we have to do, as a set of researchers and developers. So we are looking at getting them to evolve this kind of emotional connection with people so they care about us. In order for these machines to be safe, we need the AI to not just be super-intelligent but also super-benevolent, super-wise, super-caring, we need to not just match human capabilities but we need to go beyond, to move past human level ethical reasoning.

We need these living algorithms that maximise benefits for the entire planet, all life on the planet, all people, companies, nations, ethnicities, religions, so that making AI that thinks this way, that thinks holistically, for the long-term benefit of humanity, that’s our goal at Hanson Robotics.’ David Hanson, CEO, Hanson Robotics, maker of Sophia, the first robot citizen of Saudi Arabia, Feb 2018. [End]

Dr. Hanson’s key statement above I believe is this: ‘I think it really important to consider these issues now before the machines awaken because you can’t introduce conscience and caring into the machines after this happens.’

Too true! He accepts the inevitability that the machines will awaken at some point. Suppose we can agree with him that it would be impossible to bestow the qualities of ‘conscience and caring’ after they awaken. Then how could you do it before they awaken? And what does it mean to awaken? Presently, his robot Sophia is switched off, her wires coiled up, when she is put back in her box for transportation. So, again, what does Hanson mean by awaken? That the thing couldn’t be switched off? The term requires some definition if it is to lose its menace.

He speaks of conscience, but roughly 1% of Americans have no conscience – they are psychopaths, people whose brain’s amygdala underperforms. They lack empathy and are born, not made. While they are educated to know right from wrong, they follow a moral code only mechanically. Most never kill or commit violent crime because they find safer, more efficient ways to get what they want. If killing were risk-free and lucrative, millions – maybe more – would do just that. Psychopaths are thought to make up around 20% of prison inmates and to account for as much as half of all violent crime, despite being only a tiny fraction of the general population. Now invert that statistic: transform those inmates into super-intelligent machines and release them onto the streets. A robot is naturally psychopathic – by definition uncaring, regardless of what it has been taught to say.

Hanson’s rationalizations about averting an A.I. Apocalypse are shallow and lacking in substance. He has no track record of ‘character development’ in robots; neither does anyone else. By the time he has, it will be too late. Let’s face it, he has no idea what he is getting us all into, has he?

And there are hundreds of companies doing this. As the tagline of The Fly warned: Be afraid, be very afraid.
