The Future Of

The Human-Robot Relationship

Episode Summary

Will robots take our jobs? Hear from human-robot communication expert Dr Eleanor Sandry as she discusses the impact of robots on our future.

Episode Notes

From high-tech washing machines to digital voice assistants, robots have become integral parts of many of our daily lives. But while these technologies have proven useful, their increasing intelligence has led to concerns about robots taking our jobs and even robot uprisings.

In this episode, David is joined by human-robot communication expert Dr Eleanor Sandry to discuss whether this fiction has any merit.

Learn more

Got any questions, or suggestions for future topics?

Email thefutureof@curtin.edu.au.

Curtin University supports academic freedom of speech. The views expressed in The Future Of podcast may not reflect those of the university.

Music: 'OKAY' by 13ounce, licensed under Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0). Music promoted by Audio Library.

You can read the full transcript for the episode here.

Episode Transcription

Jess (intro): This is The Future Of, where experts share their vision of the future and how their work is helping to shape it for the better.

David: From high-tech washing machines to digital voice assistants such as Alexa, Siri and Cortana, robots have become integral parts of many of our daily lives. But while these technologies have proven useful, their increasing intelligence has led to concerns about robots taking our jobs and even robot uprisings. To explore the future of this topic further, with me today is Dr Eleanor Sandry, an expert in human-robot communication. Thank you for coming in today, Eleanor.

Dr Eleanor Sandry: Hi there. I'm pleased to be here.

David: How are robots changing what it means to be human?

Dr Eleanor Sandry: Well, robots, and to be honest, technologies in general are often regarded by people as an intrinsic part of being human – because human life has always been closely associated with the use of technology, the creation of technology, et cetera. If we're thinking about robots and newer technologies, like artificial intelligence, we're beginning to see a continuation of what is a really long history of technologies and machines becoming something that humans are keen to separate themselves from. We want to identify our humanity in opposition to the robot. And yet many people are intrinsically cyborgs already. I, for example, cannot operate without my contact lenses or my glasses. I'm completely blind otherwise. So I would consider myself already, in some ways, a cyborg. I mean, even your basic pacemaker can be hacked because it's an intelligent machine. Then, from the opposite side, you have people doing experiments with small pieces of muscle fibre, for example, to actually power robots and power certain technologies. And so the two are beginning to come closer together.

So really it's not that robots have changed much about how we think of ourselves as humans. But I guess they are raising a lot of questions for people now around what they can do and what they're capable of doing. So ideas around creativity, emotion, empathy – all of these things are coming up as part of people's designs for robots, autonomous technologies and AI more generally.

David: Do we have a healthy relationship with robots? We hear a lot about robots making decisions for us. For example, a robot led me into this building. Over the weekend I bought a ticket for something and a robot asked me to prove that I wasn't a robot. Robots replacing us in the workplace. How's our relationship with robots faring?

Dr Eleanor Sandry: Well, we do hear a lot about the potential for robots, AI, algorithms and machine-learning systems to harm humans. And there is no doubt that there are good examples where they have done harm to humans, whether broad sections of society or a particular person. But I think it's really important to remember it's not just our relationship with the robots themselves that is in question: it's also our relationship with the people who develop those machines and the people who then go on to use them. Quite often the problem is not really down to the machine in itself – it's doing the best it can – but the way that a developer has chosen to program it, the dataset upon which it's been trained and the way it's implemented. If it's implemented without human oversight, it's very easy for things to go wrong.

Robots and automated technologies don't actually fare that well in complex, physical, real-world environments. They're very good within constrained spaces where things are under control, but put them out into the real world and there's a lot of potential for things to go wrong. You'll see reports in the news where someone will say, 'Oh, they should've thought that this might happen, or they should have taken more care with this', but in fact it can be surprisingly difficult to plan ahead. And if they're misused, then the chances of them harming someone are quite high.

David: And what are some of the most egregious misuses of robots that we see today?

Dr Eleanor Sandry: Well, I mean, we have the use of relatively simple systems actually in this country, of course, to try to collect people's debts.

David: Robodebts?

Dr Eleanor Sandry: Absolutely. I saw on the news today that they're taking down part of the Robodebt system, which has clearly introduced a very rule-based system into an environment where you actually need to make human decisions, and the programmed machine is unable to make those decisions at a sufficient granularity. In the States, probably one of the biggest examples that people talk about is the way that they're using robots to deal with sentencing, particularly looking at recidivism – predicting the chances that someone is going to re-offend – and then noticing, of course–

David: Then the robots are racist?

Dr Eleanor Sandry: Well, the robots are racist, but their bias is based on the datasets upon which they have been trained, and all they are doing is bringing to light the fact that the whole judicial system is racist.

David: So it was us who were racist the whole time?

Dr Eleanor Sandry: We were racist first, then you get your robot to be racist on your behalf and probably to become even more racist in fact, because they're very rule-based. So that's just how it works.
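[To make that mechanism concrete, here is a minimal sketch – entirely hypothetical data and rules, not drawn from any real sentencing system – of how a risk score estimated from skewed historical records reproduces that skew in every future decision.]

```python
# Minimal sketch with hypothetical data: a "risk score" estimated
# from biased historical records reproduces the bias uniformly.
from collections import Counter

# Hypothetical records: (neighbourhood, re_offended).
# Neighbourhood A was policed far more heavily, so it has more
# recorded re-offences -- an artefact of policing, not of people.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

# "Training" is just estimating P(re-offend | neighbourhood)
# from the skewed records.
totals = Counter(n for n, _ in history)
reoffences = Counter(n for n, r in history if r)
risk = {n: reoffences[n] / totals[n] for n in totals}

def recommendation(neighbourhood: str) -> str:
    # The rule is applied uniformly, so the skew in the data
    # becomes a skew in every decision.
    return "detain" if risk[neighbourhood] > 0.5 else "release"

print(risk)                 # {'A': 0.75, 'B': 0.25}
print(recommendation("A"))  # detain
print(recommendation("B"))  # release
```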

David: Tell us about the origin of the word 'robot'. It's a Czech word, isn't it?

Dr Eleanor Sandry: Yes, it is. The word 'robot' itself comes from the Czech 'robota'. It was introduced by the playwright Karel Čapek, who was looking for a term for the players in one of his plays who were going to seem to be non-emotional workers; his brother, Josef Čapek, suggested drawing on 'robota'. And so his play, Rossum's Universal Robots, was born. Those robots are interesting because they were organic. They weren't actually machines like the vast majority of our robots now. And really the whole story of the play is based around the fact that 'robota' is always linked with ideas of forced labour and the positioning of these entities as slaves. And that is something that is still part of conversations around robots. There are lots of questions that this raises around what's going to happen if someone starts treating the machine as a slave, and how that will impact on their relationships with other beings, in particular other humans. If that robot looks human-like, you know, will that have more of an effect? There's all sorts of questions to be asked around that. And for me, more broadly, there's a question around whether it's actually worth trying to build intelligent machines you're then going to subjugate in that way and effectively just use as tools, when maybe what we should be doing is trying to work out ways of collaborating with them. So you're actually capitalising on all that intelligence that you've put into the machine and combining it with human intelligence to do things much better and faster.

David: The notion of robots becoming so smart that they decide to overthrow humanity and kill all of us is a very pervasive trope in pop culture. What does that say about us? Do we feel guilty?

Dr Eleanor Sandry: I think it mostly tells us that humans have a deep desire to be in control of everything all the time, and that we are afraid of things that we don't fully understand. And to be honest, I think that with new technologies, it is worth being afraid of things that you don't understand. There are a lot of requests now that these new technologies become transparent, so that people understand how they are making the decisions that they're making. Because sometimes the machines aren't correct.

It's also worth recognising that a lot of these popular cultural stories come from the US and Europe; Čapek's story ends with the robots rising up and killing all the humans. But if you look at other cultures... I mean, the big example always is Japan. They have a totally different attitude towards their robots. They see their robots more as friends. And so there's a completely different cultural attitude and a completely different cultural set of stories. But yeah, if you look at a site like Wired or Gizmodo, if they're reporting something about a robot, then the chances are you're going to see a whole bunch of comments saying, 'Oh, this is the sign of the robot uprising'.

David: It's always Skynet, Terminator.

Dr Eleanor Sandry: Yeah. All these references come out.

David: Why is that? Why is it that some places have a much more positive relationship with robots in terms of how they see them?

Dr Eleanor Sandry: I think it may show the importance of popular cultural stories and those historical stories. If you're in Europe or the US, you've got the stories – it probably starts with RUR, but even before that you've got the Golem in the Jewish myths and the fact that Golems can run riot as well. You've also got [the 1927 German film] Metropolis. There's a long history of things, before you even get to Terminator, which talks about machines taking over and basically getting rid of humans. And if you look at stories elsewhere, for instance within Japan, the most cited one is going to be Astro Boy: basically a hero story, where the robot is a hero and a friend. And you also have, embedded within some people's religious beliefs, ideas of animism and the fact that even a completely inanimate toy, for example, actually has a soul. And so there's a completely different myth and story set that sits behind these assumptions about what's happening with technology, particularly robots.

David: I think it's fair to say that robots are getting pretty smart. When should we start being concerned? Or should we even be?

Dr Eleanor Sandry: Well, I think probably the first thing to do is to question that initial statement. Robots may seem to be getting smart, but they're not really that smart. Most people don't see what's happening currently in robotics or artificial intelligence as actually being that smart in any kind of generalisable terms. They're often very good at doing something within a narrow set of rules or a narrow space of engagement, but if you try to take them outside that, they're still really not smart enough to deal with the world. I would say that currently there's no need to worry about the robot uprising. I don't think they're going to take over the world. What we do need to be worried about is the people who are designing and developing these machines, and the people who are then implementing them without human oversight and without taking care over how the decisions those machines make are affecting people's lives.

That's really where the concerns need to be. And also – I'm actually quite worried about this – the drive for newer and newer technology is actually part of our environmental crisis. These technologies almost always require large amounts of power to exist and run. They draw on precious minerals as well, so you've got dangerous mining conditions, and you're also taking those minerals out of the earth, which raises its own concerns. There are much broader concerns than just being worried about the robot uprising or robots becoming too intelligent and taking over. I think those are the least important of the concerns.

David: Do you think we maybe overestimate robots' intelligence? We think, 'Oh, a computer decided it, so it must be right'?

Dr Eleanor Sandry: I think there is a tendency to do that, particularly where a system has been implemented by one set of people and is being used by another. The users have probably been told how good that computer system is, and they're very prepared to accept it as taking responsibility for something when really they should be questioning it more closely. I think there are lots of reasons, though, why people fall into making those kinds of mistakes. A lot of it is to do with the way that the technology media, and now increasingly the mainstream media, report these types of technologies. The way that things are portrayed, they often look like they're fully operational. And that goes for things like Atlas, the humanoid robot from Boston Dynamics, but also for things like Google Duplex.

All of these things, you see them operating at their very best and people will write about them as if they are always operating at that kind of level. But in fact there's going to be all sorts of underlying issues with them. Whenever I go and see a robot in a lab, I can pretty much guarantee it won't be working when I arrive.

David: Because everything is stage managed.

Dr Eleanor Sandry: Often it is. Some things in particular. There was quite a lot of fuss around Sophia, which had a very expressive face and seemingly talked intelligently in interviews. It was given formal citizenship in Saudi Arabia. And it's not as intelligent as it seems. It's a programmed system. It's programmed to do speeches. Yes, it has an amazingly expressive face, but it is not as intelligent as it would seem to suggest.

David: Why is there this sort of space where a robot face becomes a little bit too real and it gets a little uncomfortable? Whereas if you've got a basic geometric shape – maybe, 'Okay, here's a circle and a couple of little black dots' – why are we able to connect with that more and see that as being more human than some creepy humanoid face?

Dr Eleanor Sandry: The jury is actually out a bit on this. The most popular theory, which people draw upon, is Masahiro Mori's 'uncanny valley', which was originally about still objects that become increasingly human-like, and the fact that at some point they suddenly become corpse-like. That theory was then taken into moving robotics and animatronics. And it does seem that people who have those very realistic human-like heads sitting next to them on the desk feel a bit creeped out by them and don't like them. And usually the problem is that they look very human-like until they start moving. Then there are certain little things that give them away. They're quite clearly not actually human.

And even if you look at some of the very best of these types of robots, you will still be able to pick that it's a robot. In general, I think that robots which rely on our ability to anthropomorphise even the simplest of shapes and movements are actually more successful. Maybe they don't even need to have a face at all. Maybe it could be more about the way that they move. Maybe if they do have a face, they have a simpler face. The space in which you see people using those sorts of faces most at the moment is actually medical environments, where they're helping people, particularly kids on the autism spectrum, to get a bit more at ease around interacting with people – in particular, looking at faces, which some people on that spectrum, though not everyone, find difficult.

And so a robot with a simple face that is completely nonjudgmental, not too expressive, is something that actually they're going to find easier to interact with. But I think actually there's lots of potential for those sorts of faces, just for everyone. My research really is about questioning why we need to make robots look like us at all.

David: And why do we?

Dr Eleanor Sandry: Well, most people say they want to create human-like robots that behave or communicate in human-like ways to make them easier for us to communicate with. But actually this falls into a couple of different traps, I would suggest. One is that it's very difficult to make robots that are really human-like, as we've just been discussing. It also means that people are interacting with the robot thinking it's going to be human-like when in fact it's probably going to fall short of that, and they may well become disappointed and cease to interact with it. There are tremendous possibilities for interacting with robots that are not human-like at all – that are actually shaped like household objects. This has been shown in various experiments and research. And the other problem, I think, is that people are now very focused on creating interfaces that operate in what I might describe as an ideal human-like way. These voice assistants, for example, are very much based around people being able to communicate with a voice. Not everyone has a voice; some people are nonverbal, and they still need to be able to use technology. If you start focusing only on creating voice interfaces for people who have very clear intonation and probably also a US accent, then you're actually removing that technology from the people who might need it the most. So I think there are lots of issues around creating human-like interfaces, but the reason that people do it is because they think it's going to be 'easier' – and it isn't necessarily easier.

David: And also because it just looks cool. There's nothing cooler than going out to a press conference or a tech conference and saying, 'Okay, Alexa, schedule an appointment', and then Alexa just does it, even though it's just following a flow chart, really.

Dr Eleanor Sandry: Yeah. Well, I mean that was Google Duplex. They were very proud of Duplex because of the way that–

David: Duplex, that's the one that pretends to be a human when you're calling a restaurant?–

Dr Eleanor Sandry: Well, originally it did. They backtracked on that a bit because people were so upset that someone might be trying to communicate very politely with Duplex when really they should be aware that they were communicating with a machine.

David: Because it would do things like pretending to 'um' and 'ah', for example.

Dr Eleanor Sandry: Well, it didn't pretend to 'um' and 'ah'; it did, in the right sorts of places. But whether or not you really want a robot to be able to do that, simply to make it easier or more 'natural' – where 'natural' also goes in inverted commas, because not every person does that either – this idea of wanting to make it seem human-like to that extent is questionable. Do you really want your machine to communicate that way, or would you like it to be more direct, more straightforward and maybe more precise? I mean, it's a kind of trick which, you're right, is very, very good when you're showing something at a press conference.

David: What does the future of our relationship with robots look like?

Dr Eleanor Sandry: Well, I'm concerned that the future of our relationship with robots and with other intelligent machines may not be as interesting as it could be, unless we actually do start changing the way that we think about designing and developing these machines. I would say that the ideal future, from my perspective, involves a much stronger acceptance of the fact that humans collaborating with machines is a really good way forward. It's a good way forward in terms of workplaces, because it involves using the skills and abilities of the machine and the human together to complete tasks. We already see it to a certain extent in factories, but often the drive is to remove humans from those environments – to save money, for example – whereas looking at humans and machines working together will almost certainly provide more interesting solutions. I think that people are going to become more aware of the environmental concerns around increasing autonomy.

And also, I think, the human concerns around that, where really we want to have human oversight over decision-making systems or over robots – even killer robots of war, for example – where we actually want distributed systems of control in which humans and machines are working together. That, I think, would be a good future. It's not necessarily the future that we're heading towards.

David: What needs to chan–

Dr Eleanor Sandry: Sorry, is that too harsh? [Laughs.] I'm just actually really concerned.

David: What needs to change for us to be heading towards the better future, as opposed to the crappy one that we're hurtling towards?

Dr Eleanor Sandry: Well, I think that there's a big drive currently to involve ethical thinking and ethics, more generally, in the design and implementation of technology. I think that is a key thing. There's now quite a lot of really useful, positive, practical work in that area. The IEEE, the Institute of Electrical and Electronics Engineers, has a special working party that has produced something called 'Ethically Aligned Design', which is a really good document that has brought together lots of different people's opinions around the world on technologies and how they should be developed in the future to be more ethical. I think that we are a bit trapped by commerce and economics, in the large countries in particular, and by the fact that mostly this is about making money and not actually about making lives better. So it's always worth looking at these companies, and when you see them saying that they're developing AI for good or robotics for good, questioning what they're actually involved in doing – because it could well be to do with making money rather than making people's lives better.

I think it's also about getting more people involved on the ground – getting communities involved in understanding how these technologies work. Do they want self-driving cars on their roads? If those cars are going to come onto the roads, how are they going to be introduced? There are often a lot of practical problems and questions that aren't really raised early enough in these sorts of developments. So yes, there are definitely positive ways ahead, but a lot of it does involve really embedding ethical thinking within this type of technology and within these types of technology companies, and also probably, now, increasingly within government as well. Because I don't think many of us rely on our governments to protect us from these sorts of things at the moment.

David: Is it possible for robots to become self-aware? Forget about whether it will happen or won't happen, is it even possible for that?

Dr Eleanor Sandry: I have no idea. I certainly don't have the scientific or neurological chops to answer that question. I think that it's also very difficult even to understand the differences between the awareness of different organic beings. And there are also questions around the awareness of different people when they're in various situations, and how you tell when someone is aware. We often assume that someone who's nonverbal, or someone who is paralysed, may not be aware – and in fact we're wrong. There are lots of issues around identifying what it means to be aware. I mean, really, robots already are: they sense their environment and respond to it. They have behaviours, and in many ways they are already aware.

That's what allows them to do what they do at all. But when you say 'aware', of course, you're meaning much more fully conscious, self-aware – you're moving into sentience. Those questions are very difficult to answer. I don't think it's coming anytime soon. And I also think that we should probably consider, as a lot of science fiction does, that robots will be aware and self-aware in very different ways from us, because they don't have the same sorts of bodies as us. They don't have the same sorts of neurological systems. They don't have the same sorts of chemicals rushing around their bodies that cause them to have feelings. You know, the question of whether or not robots really have emotions or feelings is not really open as far as I'm concerned: they don't, not in any kind of human-like way. But they may have behaviours that seem to indicate that they have feelings or emotions. So, you see, it's a very difficult question to answer.

David: Well, it's similar to how, if I had my phone out here and I were to say something rude to the voice assistant on it, it would react. But of course that's just a pre-programmed response.

Dr Eleanor Sandry: It is, and interestingly, voice assistants have different ways of dealing with those situations. Siri, for example, is a bit more sassy than Alexa.

David: What is a sassy question I could ask Google? Hmm. [Starts typing in phone.]

Dr Eleanor Sandry: I wouldn't risk it right now, because, you know, you don't want to work with robots, children or animals, in my experience. [Laughs]

David: Let's give it a shot. What's your favourite colour?

Voice assistant: I like simultaneously thinking about every colour at once, but mostly I love Google's colours.

David: Someone got paid a lot of money to come up with that answer.

Dr Eleanor Sandry: And you'll find that there are different answers as well, so you won't always get the same one. That's part of trying to build into these programs a kind of unpredictability, which is something that some roboticists and technologists write about as being important to keeping people interested in engaging with the machine. But again, it's questionable how much time and money should be spent on that.
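[As an illustration of how simple that engineered unpredictability can be, here is a minimal sketch of the canned-response pattern – the trigger phrase and answers below are invented, not drawn from any real assistant.]

```python
# Minimal sketch: an assistant's "personality" as a hand-written
# table of canned answers, with a random pick so repeat askers
# see some variety. All responses here are hypothetical.
import random

CANNED_RESPONSES = {
    "favourite colour": [
        "I like thinking about every colour at once.",
        "Hard to pick, but I'm partial to a nice hex code.",
        "Whatever colour your screen is right now.",
    ],
}

def answer(question: str) -> str:
    for trigger, responses in CANNED_RESPONSES.items():
        if trigger in question.lower():
            # random.choice is all the "unpredictability" there is.
            return random.choice(responses)
    return "Sorry, I don't have an answer for that."

print(answer("What's your favourite colour?"))
```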

David: Why do we fixate on robots, and why do they fascinate us so much?

Dr Eleanor Sandry: I think because humans have always been fascinated by building things that seem to be intelligent, or by writing about people building things that seem to be intelligent. In some texts that's regarded as a sort of god complex. There have been people who've noted that some of the early technologists at MIT and in other places, for example, really felt that their creation of robots was something like the creation of a new species. I suspect that underlies some of it. I think people like being clever, and they like creating things that seem clever.

David: One final question before we finish up. What's your favourite robot text or movie or book? Your favourite fictional robot?

Dr Eleanor Sandry: Right. So I have to start with books first, because that's really where I came into this research area from. And my favourite books are Iain M. Banks's Culture novels. Banks writes about a utopian future in which humans and machines coexist in a shared society. And although the machines are basically in control all the time – I mean, they're hyper-intelligent – written into those texts are both robots, in the form of drones, and also humans who are valued by the machines that run that society for their very humanity: often for their intuitive thinking, for their ability to be flexible in certain situations. So I just find those texts really interesting. That's really where I started. I read Banks's stories and thought, 'This is the sort of relationship I'm interested in, between humans and machines that are definitely not human-like'.

In fact, they are quite startlingly different in the way that they think, speak and act, and I was fascinated by that – in contrast with the sorts of human-like machines, which I was far less attracted to. In terms of a more up-to-date popular text, I would say my favourite film is still actually Interstellar, which I know is a few years old now. Again, the robots in Interstellar are not human-like, and I just love the idea of having a robot where you can dial up and dial down things like humour, sarcasm and honesty. The whole idea of being able to tailor a robot to suit your personality, while it remains very different from you as well, is really interesting. I like the fact that you could tailor them for different situations. I just think it breaks down that idea of... well, it's still a control idea, and yet of course those robots were tremendously powerful and could go into spaces that humans could not, such as a black hole. So it raises a lot of really interesting questions. I reckon those are my favourite texts at the moment.

David: Well, mine are a little bit less sophisticated. I was going to go for Baymax from Big Hero 6 and WALL-E from... WALL-E.

Dr Eleanor Sandry: Yes. I think that's probably why I am a bit different from some other people, because I tend to be looking for these oddities rather than the cute robots. I'm much more interested in robots that maybe challenge us to think differently, that we can collaborate with and actually do things with in really interesting ways.

David: Well, ah–.

Dr Eleanor Sandry: [Laughs.]

David: Goodness me, I'm just– Eleanor, thank you very much for coming in and sharing your knowledge on this topic.

Dr Eleanor Sandry: Well, thanks for inviting me. It's been great. I've enjoyed answering these questions.

David: And thank you for listening. You've been listening to The Future Of, a podcast powered by Curtin University. If you have any questions about today's topic, please feel free to get in touch, or follow the links in the show notes. Bye for now.