The Future Of

Robots at Work (LIVE!)

Episode Summary

Will a robot ever steal your job?

Episode Notes


Will a robot ever steal your job? In this special, live episode of The Future Of, host Danelle Cross is joined by research fellow Giverny De Boeck, Associate Professor Jonathan Paxman and returning guest Dr Eleanor Sandry. 

The guests speak about their work and industry partnerships, before discussing the type of robots that already exist and setting the record straight on how robots could impact future workplaces.

The talk formed part of Curtin University’s annual Research Rumble event, which showcases future-focused university research and innovation. 

  • Background of the experts [00:52]
  • What is a robot and what is artificial intelligence? [04:47]
  • Which tasks and competencies will be replaced? [16:37]
  • Which new jobs will be created? [27:42]
  • Q&A
  • The question of liability [33:32]
  • Ethical issues and work expectations surrounding automation [37:35] 
  • How robots are enabling meaningful work [46:14]

Learn more

Connect with our guests

Giverny De Boeck

De Boeck is a Post-doctoral Research Fellow at the Centre for Transformative Work Design at Curtin University. De Boeck researches changes to workplaces in light of automation and other technological innovation, and how these impact employees’ work experiences.

Associate Professor Jonathan Paxman

Associate Professor Paxman is the Head of School of Civil and Mechanical Engineering at Curtin University. He has a varied work background in AI, robotics and technology. He is currently designing algorithms to count the number of craters on the surface of Mars and is assisting with autonomous operations for the Desert Fireball Network. 

Dr Eleanor Sandry 

Dr Sandry is a Senior Lecturer in the School of Media, Creative Arts and Social Inquiry at Curtin University. She studies human-machine communication and how automation can help to enrich our experiences at work. Her book, Robots and Communication, draws on her early research and theories into human interactions with robots. 

Questions or suggestions for future topics

Email thefutureof@curtin.edu.au

Socials

https://twitter.com/curtinuni

https://www.facebook.com/curtinuniversity

https://www.instagram.com/curtinuniversity/

https://www.youtube.com/user/CurtinUniversity

https://www.linkedin.com/school/curtinuniversity/


Curtin University supports academic freedom of speech. The views expressed in The Future Of podcast may not reflect those of Curtin University.

Music: OKAY by 13ounce Creative Commons — Attribution-ShareAlike 3.0 Unported — CC BY-SA 3.0 Music promoted by Audio Library.

Transcript

You can read the full transcript for the episode at https://thefutureof.simplecast.com/episodes/robots-at-work-live/transcript.


Episode Transcription

Jessica Morrison:          00:00                This is The Future Of, where experts share their vision of the future and how their work is helping shape it for the better.

Jessica Morrison:          00:09                I'm Jessica Morrison. Have you ever wondered if a robot will steal your job? In this special live episode of The Future Of, host Danelle Cross is joined by three experts in robotic technologies: research fellow Giverny De Boeck, Associate Professor Jonathan Paxman and returning guest Dr Eleanor Sandry.

Jessica Morrison:          00:28                The guests begin by speaking about their work and industry partnerships, before discussing the type of robots that already exist and setting the record straight on how robots could impact future workplaces. The talk was part of Curtin University's annual Research Rumble event, which showcases university research and innovation. If you'd like to find out more about the event, search for "Curtin Research Rumble 2021".

Danelle Cross:               00:52                Okay, panel members, so you all come from three quite different contexts when it comes to relating to robots and automation. I'd like to start with each of you describing just in a little more detail the work that you've done in this space.

Giverny De Boeck:         01:04                So hi, I'm Giverny, I work for the Centre for Transformative Work Design. What we look at specifically at the centre is basically the impact of automation and algorithmic management on work design. And when we're talking about work design, we talk about job resources, for example, control, relationships at work, skill utilisation, but also job demands, such as emotional demands, workload or vigilance.

Giverny De Boeck:         01:30                And then what we study is the impact of how automation is designed, but also implemented on that work design and from work design on employees experiences. So could be, for example, their work meaningfulness, but also in terms of outcomes, the impact on performance, on engagement, on creativity and whatsoever. So that's basically our take.

Danelle Cross:               01:50                All right, thank you. Jonathan.

A. Prof Jonathan Paxman:                      01:52    So I'll give you a little bit of background on myself. My PhD was in control systems engineering from the University of Cambridge. And from there I went to Sydney and I worked in the ARC Centre of Excellence for Autonomous Systems, which was very focused on robotics and automation. Fast forward to Curtin, and here I've been involved in research topics like 3D mapping in underground environments, so very much aimed at robotics and automation in the underground mining sector, as well as applications of machine learning.

A. Prof Jonathan Paxman:                      02:27    We have a project with Professor Gretchen Benedix in earth and planetary science on counting craters on Mars using data from the orbiters around Mars, and as well moving into sort of remote and autonomous operations. So I've been involved with the Desert Fireball Network, which is a network of remote cameras, which are aimed at observing meteors and fireballs coming through the atmosphere and recovering meteorites on the ground, and associating those with orbits in the solar system.

A. Prof Jonathan Paxman:                      03:02    And we're now moving into satellite technology and space technology. And the aim there is to move into robotics and remote operations in space.

Danelle Cross:               03:11                Thank you, Jonathan. Eleanor.

Dr Eleanor Sandry:        03:13                Oh, yeah. Hi, I'm Eleanor. So I'm from the humanities. I guess that positions me slightly differently. I started my research into robots by doing my PhD. And I wrote a book called Robots and Communication, which probably gives you a pretty good idea of the direction I'm coming from. In my early work, I was really interested in robots that you might meet in lots of different spaces. So work within social spaces at home, maybe also in art galleries, as well.

Dr Eleanor Sandry:        03:38                And I've written work that looks at what we write about for science fiction, as well as how we think about robots that we meet in the world that we live in now. I'm particularly interested in the way that we don't actually need to make machines look like us or behave exactly like us in order to meaningfully interact with them or collaborate with them. So I like looking at lots of different forms of machines, in fact.

Dr Eleanor Sandry:        04:03                So my research has kind of morphed into something called "human-machine communication". And I've written about people's relationships with their cars, whether or not those are autonomous or completely not autonomous, okay? And I've also now spread across this idea of the robot not just being a physical thing, but also being something that we meet increasingly online in social networks. And then that introduces whole other problems to this question of what it is like to interact with a robot.

Dr Eleanor Sandry:        04:33                I think that one of the things I'm most interested in is the potential for not replacing people at work, but for actually maybe improving their lives at work, giving them a technology that they can collaborate with meaningfully, maybe making their job more interesting.

Danelle Cross:               04:47                Fabulous. Thank you, all of you. Can you set the scene a bit for us around robots? So I think we all have different ideas, potentially of what robots are, what their types are and their uses. So perhaps, Jonathan, if you could talk us through: what is a robot and what are some of the different types?

A. Prof Jonathan Paxman:                      05:01    Yeah. So there are, I guess, a number of different definitions of what a robot is or what robotics entails. Traditionally we think about robots as being machines that are capable of carrying out some sort of complex task autonomously. More recently, robotics has been more focused on machines that are capable of interacting and perceiving complex environments, making intelligent decisions, and then performing actions within those environments.

A. Prof Jonathan Paxman:                      05:36    So, I guess I'm more interested in the robots that have more intelligence and a greater capability of perceiving the environment around them, because that enables them to deal with more unpredictable situations. If I look back 15 or 20 years, robotics was already recognised, but mainly in fields like manufacturing, on factory floors and things like that. And in those situations, you have robots performing repetitive tasks in very precise ways, but often without a great deal of intelligence or perception.

A. Prof Jonathan Paxman:                      06:16    If we look at what robots are capable of now, then we have much more the case where robots are interacting in unpredictable and complex environments. So self-driving cars on the roads are interacting in an environment where there could be pedestrians crossing unexpectedly, there could be a dog running across the road. And so robots need to be designed so that they are capable of handling those unpredictable scenarios. And that opens up a much wider world of possible applications.

Danelle Cross:               06:51                And what about artificial intelligence? So can you talk us through a bit what we mean by that, and its use cases and areas of work?

A. Prof Jonathan Paxman:                      06:58    Yeah, so I guess artificial intelligence, obviously, can be applied to robotics. When I think of AI or intelligence agents, I'm thinking about the ability of a computer or a machine to perform tasks, which are traditionally associated with intelligent beings. So humans or animals, that's often what we think of, but it doesn't necessarily need to involve the physical environment. So if it does involve the physical environment, performing actions, that's when it heads into this sort of robotics kind of territory.

A. Prof Jonathan Paxman:                      07:33    But, as Eleanor pointed out, often this can be in a purely virtual or purely data-based environment. So it can be analysis of data, drawing connections between different pieces of data. AI right now, and this has been a revolution in the last 15 or 20 years, is very good at processing text and images. So we can give a photograph to a computer and it will identify trees and faces and vehicles and chairs.

A. Prof Jonathan Paxman:                      08:05    And this is really something that was considered to be a very difficult problem 20 years ago, but now it's very easy to manage.

Giverny De Boeck:         08:14                Yeah, maybe to pick up on Jonathan's point, what we look at when we talk about algorithmic management is how algorithms are actually managing employees, so where formulas are actually making decisions based on certain rules. And we see this happening in work contexts, for example in recruiting, when algorithms are basically screening applicants' CVs. So it's really the algorithm, the formula, taking over the decision making in a very concrete and impactful context, because this is really determining whether a person will get to that second round or not.

Giverny De Boeck:         08:45                So it has real-life implications. We also see it in performance management, we see it in scheduling. So a lot of people's daily work lives are being shaped by these algorithms. And then also, when we talk about robots, we talk a lot about automation, so more the mechanical and electronic devices taking over operations from people. So really specifically looking at, for example, autonomous heavy haulage systems used in the mining sector.

Giverny De Boeck:         09:14                I think we all know the trucks, and those are very specific examples where we study how that impacts people's work design. So what kind of tasks are left for people? What do the machines do? And how does it impact how people make sense of their jobs? So really, in that context, I also see robotics as a very broad thing actually.

Danelle Cross:               09:32                Giverny, when we're talking about jobs, we've obviously seen technological advancements in our work prior to now. Can you talk to us a little bit about how technology has historically impacted jobs?

Giverny De Boeck:         09:44                Yeah, technological advancement is not new, right? We have seen technological improvements over the past centuries; it is nothing new. And we see it in different domains. I think if we talk in terms of the work context, one clear turning point was the Industrial Revolution, in the 19th and 20th centuries, where we saw autonomous systems being introduced specifically in manufacturing environments. I think that's what Jonathan was talking about.

Giverny De Boeck:         10:10                So then I think immediately about the assembly line being introduced in Henry Ford's motor company. So we had this huge assembly line, and this had a tremendous impact on how people do their work. It basically changed how work was done: before, you had craftsmen who did the whole process and needed both cognitive and manual skills to do their jobs. When the assembly line was introduced, we went into what we call "scientific management".

Giverny De Boeck:         10:37                So this is the scientific management principles of Taylor, time and motion studies of the Gilbreth couple. So what you see happening here is that work becomes very much simplified. So we're going to split up work in separate tasks, and we're going to make it as simple as possible, and in how people do their work we're going to try to make that work as standardised as possible to basically have like a one size fits all solution, so we can have this ultimate efficiency. That is why they were doing those time and motion studies.

Giverny De Boeck:         11:08                So beyond the standardisation and the simplification, another important effect that this had on work is a divide between thinking and doing. This is really when management became relevant, because the thinking, the cognitive aspect, went to management, and the manual work was left for the labourer. So you have this divide. And with the divide, interestingly, you also have a power shift, because knowledge is power. So knowledge went to management, and the manual labourers were ... Yeah, their jobs basically got hollowed out from the craftsmen they were before.

Giverny De Boeck:         11:38                So we see all of these changes. Interestingly, this was not an easy transition, especially for employees. For example, when the assembly line was introduced at the Ford Motor Company, Henry Ford was witnessing a turnover rate of 378%. That was huge. Basically, everybody left, because it was so difficult to do this job. People were not motivated at all. And this is when sociologists came in and identified the problem of worker alienation: people becoming estranged from work, from the process of work and from the products of their work.

Giverny De Boeck:         12:16                And interestingly, Henry Ford actually recognised that. He said, "I would never want to do that. But there are some other people who would." He was wrong. Other people also didn't want to do that. It was boring.

Dr Eleanor Sandry:        12:27                That's one of the things that people now say about autonomous technology, or so-called "artificial intelligence". Artificial intelligence doesn't have that much meaning to me, because what is intelligence? But in those terms, I think that now that people are talking about that type of technology coming into workplaces, the impacts have suddenly become much more obvious for people who are actually doing the thinking jobs. Where in fact you could do that job by data crunching, or you can do that job by looking at big data or whatever. Then suddenly you have people who thought they were fairly safe in their jobs being ... I don't know exactly whether or not they feel threatened. But in some ways, I think they do.

Dr Eleanor Sandry:        13:12                And this is across legal systems, in hospitals, with surgeons and the idea of robotic surgery. So it's kind of changed the complexion of what people are talking about when they're thinking about technology at work. It's gone a lot further.

Giverny De Boeck:         13:27                Yeah, exactly. Whereas in the past it was mainly about manual labour, today it's also knowledge workers, it's professionals, it's surgeons, it's people managers. We see it with HR. HR managers' jobs are being replaced, or not whole jobs, because as we've discussed, it's not about the complete package, but some of the key tasks that make them who they are, also in terms of professional identity, are being replaced or done by technology.

Giverny De Boeck:         13:53                And that's a huge shift. Like Eleanor says: "You're not safe." Everybody will be impacted, there is no escaping it. Even for us as researchers, there is software that can write text. So what are we going to do if we don't write anymore?

A. Prof Jonathan Paxman:                      14:06    I mean, traditionally, when you talk about robotics and its impact on employment, you talk about what they call the 'four Ds': jobs that are 'dirty', 'dangerous', 'dull' or very 'dear'. So very, very expensive jobs to do. This is why we often see robots employed in space and across the solar system. We've had some fantastic examples this week: the Ingenuity helicopter made its first flight on Mars. And that to my mind is a perfect application for a robot, because it costs trillions of dollars to develop a program that's able to send humans to Mars.

A. Prof Jonathan Paxman:                      14:52    And there's just an enormous amount that can be accomplished with autonomous robots and tele-operated robots. So Ingenuity and Perseverance are doing a fantastic job up there. I do think that there are much more profound implications when we start looking at robots being applied in areas which are maybe not in those four Ds, where there's maybe less justification for taking a purely autonomous approach.

A. Prof Jonathan Paxman:                      15:22    And I do think that governments need to wrestle with this issue. And I think regulatory and taxation frameworks need to take into account the impact on employment of really large-scale automation. And I think if there are large companies that are automating in such a way that really drastically reduces their workforces, then there should be a kind of taxation impact for that, which could go towards employing people in another sector.

A. Prof Jonathan Paxman:                      15:56    Because the way I think about it is that our society is always going to need people to do work. But maybe it's going to be in different areas: maybe we need more people in healthcare, maybe we need more people in education. But maybe we don't need as many people laying bricks, maybe we don't need as many people driving trucks. So there are big issues there to tackle in terms of retraining. But I think we also need to be exploring which industries have that pent-up demand for more people to be working in them.

A. Prof Jonathan Paxman:                      16:29    I know that classrooms could always use more teachers. And that will always be the case.

Danelle Cross:               16:37                Yeah. And I'd like to expand a bit on that. So reports by CEDA (Committee for Economic Development of Australia) and the Australian Computer Society have outlined that 40–50% of today's local jobs have a moderate to high chance of disappearing completely by 2035. So what will be the competencies, the tasks, the activities that robots could do?

A. Prof Jonathan Paxman:                      16:55    Okay, so my view on this is that 40–50% is probably on the high side. I don't know whether 'optimistic' or 'pessimistic' is the right term to use. Yeah, so I think that's pessimistic. I think automation, autonomy, robotics is hard. It's expensive. I don't really see a scenario where it's going to become drastically cheaper to automate really complex tasks.

A. Prof Jonathan Paxman:                      17:23    There are simple, repetitive tasks where I can see there being cost savings. But I think that figure is possibly overstating it.

Dr Eleanor Sandry:        17:30                Well, I think that's the other thing, isn't it? It's the fact that artificial intelligence and robotics is very much a space where people maybe overhype what things are capable of doing. There are certainly cases where, although an industry or a workplace might want to automate something completely, there's actually a really good set of reasons why they should step back and think about how the human element is a really important part of that workplace.

Dr Eleanor Sandry:        17:59                And by trying to automate that thing, you may overlook where that automation can go wrong as well. There are so many examples of where something that is purportedly AI, trained on a particular data set, doesn't actually work as expected. We see examples throughout all sorts of things. But facial recognition is a big one that comes to mind: if you train your dataset on white male faces, it won't recognise black female faces.

Dr Eleanor Sandry:        18:29                And so there are so many unexpected things that can come about with these sorts of technologies that are machine learning. So they are partially programmed and then learning for themselves, but also the way that things are implemented. So something is designed in one space for a particular thing by people, by technologists, by roboticists and then it's moved into another space.

Dr Eleanor Sandry:        18:51                And you see people doing completely different things with it. And that often raises alarm bells for me, because I'm often wondering, what is going to be the impact of that? And how can it go wrong?

Giverny De Boeck:         19:01                And I think what you raised is really important. We feel like technology in general, it's not bad. It's not good. And it's also not neutral. It always serves people's interests, stakeholders in a company or broader. But it's really about what you do with that technology. How do you design it? For who? And how do you implement it? And what do you aim to do with it? And also, what is your definition of success?

Giverny De Boeck:         19:21                Because yes, you can implement technology to improve efficiency and effectiveness, but what is effectiveness? We're using, for example, algorithmic management in talent management; we want to identify high potentials to make promotion decisions. But what we see happening now is yes, we do use our past data to predict who will be most successful in that position. But what is happening? We're using past data. Past data that is highly biased towards males, white males of a particular age, and we're reproducing that bias, and what we're doing is basically cloning one Johnny into 100 Johnnys.

Giverny De Boeck:         19:56                So we have a company of 100 Johnnys and there is no Annabelle, there is no, I don't know, Mohammed, there is nobody else. And that is a big risk for a company too, because if you want to be adaptable, and that's the big thing, right? In business, you want to be agile, you want to be adaptable, you need more diversity. So diversity should become part of how you define effectiveness. And then that needs to translate into how you design and implement these technologies. So, yeah.

A. Prof Jonathan Paxman:                      20:23    And that suggests to me that the important thing to focus on with respect to robotics or AI more broadly, is how we interact with it. So how people interact with autonomous agents, with robots. I think Eleanor's research is absolutely critical. Understanding how people can relate to robots. There's a lot of research now in cooperative robotics.

A. Prof Jonathan Paxman:                      20:45    Take an industry like meat processing, where the work is really very, very damaging to people physically and mentally, and workers' compensation bills are higher than salary bills. But there are opportunities there for technology to work in collaboration with people to reduce those physical demands. In the purely AI space, talking about all of this machine learning and deep learning that's going on, we still need people to curate data.

A. Prof Jonathan Paxman:                      21:16    If we're going to train an AI system to learn what a person's face looks like, or what a conversation should look like, someone needs to be curating that data and making sure that we're not embedding the biases that might be present in unmoderated data.

Giverny De Boeck:         21:38                So in this context we talk about joint optimisation, so it's really about optimising how machines and people interact. And I think that's exactly what your research is about, right?

Dr Eleanor Sandry:        21:48                Yeah. Yeah, it is. It is. Although possibly from a slightly weird direction. But yeah, that's exactly what it is about.

A. Prof Jonathan Paxman:                      21:55    Weird is good.

Dr Eleanor Sandry:        21:56                Well, you know, it's quite interesting. I think that a lot of people think that it's almost a simple thing to make human-machine interaction work: we just make it like a human-human interaction. Well, let's just think about that for a minute. Like, how well do humans interact? That assumption is very difficult to break down, believe me.

Dr Eleanor Sandry:        22:16                But that's one of the reasons why I think it's worth looking across different areas, thinking about automation and algorithms, like you were saying. They're not humanoid, and they're not designed to operate as a human does. But what you need to do is work with people to see how they can still work alongside that machine. How they can see meaningfully what, say, a robotic arm is doing, in fact, just from the way it moves.

Dr Eleanor Sandry:        22:43                There are all sorts of things that nonverbal communication theory can say about that sort of stuff. It's all to do with the speed you move, the direction you move, does it move some way like a human arm articulates, or does it actually do something completely different? And also, then there are the potentials like humans are very good at reading meaning into almost anything. And as long as you help support that meaning to be actually useful in a collaborative situation, then there's tremendous potential there for something to actually do things in a new way that might be much more useful to you.

Giverny De Boeck:         23:17                Yeah, and people are meaning makers; we put meaning into everything, even non-living things, like you said. When you get a stuffed animal as a child, it means something to you, even though for some other people it's just an ugly little duck. I mean, it can be a weird thing, but we do assign meaning. And when people are required to interact with technology at work, the same technology can be seen as a huge threat to them as a worker and everything they stand for, but that exact same thing can also be seen as an opportunity, as something that actually helps them do a better job.

Giverny De Boeck:         23:52                And interestingly, here I agree, it's probably not exactly the same as human-human interaction. But, for example, there is research showing that when you allow people to physically interact with robots, with the mechanics, just touch them, that helps people to build trust. And that trust in turn enables them to give the technology a positive meaning and to actually start looking for opportunities for how it can help them do a better job.

Giverny De Boeck:         24:20                And so here we make a distinction. For example, when it comes to decision making, we make a distinction between 'automating' and 'informating'. Automating is when you're really going to replace a person, so you're basically going to cancel the person out. Whereas informating is really using the technology and the information it generates as feedback for the person, something to empower the person to do a better job. And so that distinction, I think, is really relevant.

A. Prof Jonathan Paxman:                      24:50    Yeah, it's really relevant in a lot of these machine learning and AI applications, because in some cases we're talking about an analysis of just enormous quantities of data. That would just be impossible for people to do in a manual way. So I gave the example of counting craters on Mars earlier. And so that's something that geologists have been doing manually for a long time. They'll look at an image, and they'll draw circles on the image to indicate where the craters are.

A. Prof Jonathan Paxman:                      25:23    And that information allows them to then work out the age of a surface. Because if there are lots of craters on a surface, it's old. And if there are very few craters it's young. There was a PhD student who had to do this for his PhD, in order to do his work. He counted half a million craters over the course of four or five years. We can now do tens of millions of craters in an afternoon. And this enables you to develop a just much more refined picture of what's going on.

A. Prof Jonathan Paxman:                      25:57    And there's no way we could employ people to do that manually. So we need to be thinking about the things that technology enables us to do, that we never could have done otherwise. Robots on Mars, we couldn't do that with people, at least not today.

Giverny De Boeck:         26:11                And at the same time, also look at it from a broader work design perspective, because you can bring in technology to inform people and empower them by giving them knowledge that is almost impossible to get at that speed. So for example, we also see this in the legal profession. Now, instead of working with legal assistants, lawyers have software to basically screen legal documents. So it goes much faster, they can process way more documents.

Giverny De Boeck:         26:35                So it's positive on that side. But it is also challenging in the sense that part of the status, part of the occupation of being a lawyer, was having that relational design, where you were working in a team with these legal assistants, who are now no longer there, and you become the person who has to deal with the software.

Giverny De Boeck:         26:55                So your job is changing, but also the status, or how you make sense of it. So even when one aspect is positive, it really is beneficial for a company to look at the job in its completeness and not only at an isolated part: what else does it change? And how does it affect how I experience my job and identity?

A. Prof Jonathan Paxman:                      27:16    Lawyers are not the kings of their domain anymore.

Giverny De Boeck:         27:18                No, they are dethroned in a way, which can actually be good. It can challenge power relations. We see this in hospitals with surgeons, where technology is coming in: they used to be the experts, they were the big shots. And now they cannot even read what the technology is saying, so they depend on technicians. That's difficult if you were always the one in charge, right?

Danelle Cross:               27:42                There will always be jobs replaced by technology; we've covered and explored that. But the opposite also applies. So, what are some sectors where technology and automation could lead to the creation of new jobs? [Pause.] Stumped you.

Dr Eleanor Sandry:        27:54                I think one of the things that people often note about spaces where automation, particularly physical robots, is going to be introduced is the fact that a robot is not an infallible machine. They go wrong, they need to be maintained, and their operation may also need to be improved. So even if we are thinking of technologies that can, to some extent, learn for themselves, there are still elements of improvement that could always be there. There's still space for humans to be really quite high-level technicians, as well as maintainers.

Dr Eleanor Sandry:        28:34                I saw a video from South Korea recently showing the interaction between a robot and the people around it. All it took was someone putting a tray back at a slight angle for that robot to no longer be able to sense its environment properly. And basically, one of the human servers had to come up and just push that tray, and then the robot was back on its way.

Dr Eleanor Sandry:        28:53                So there are, I think, more positive ways as well in which people might find new jobs in the workplace through this technology.

A. Prof Jonathan Paxman:                      29:02    I mean, I should say from my perspective as a mechatronic engineer, and within the School of Civil and Mechanical Engineering, we are training the next generation of mechatronic engineers. They're doing a Bachelor of Engineering, and they're also doing computer science alongside that. So this combination of engineering and computer science is creating a whole generation of workers who will be at the forefront of developing those new automation technologies, maintaining them and operating them into the future.

A. Prof Jonathan Paxman:                      29:35    Some of our graduates are working for a company called Fastbrick Robotics, which is developing machines to automate bricklaying. So that's a machine that could build a house in two days, but it's not going to totally replace a team of bricklayers, because there are going to be engineers operating the machine, there will be people rendering the bricks afterwards, and people managing the work site as well. But what it will do is mean that you can build a house in maybe two days instead of two weeks or two months.

Giverny De Boeck:         30:12                Yes, I think it's actually both: creating new jobs around automation and everything that's needed to support it. But then at the same time, I feel ... And that's a big critique of the research that predicts 50% or more of jobs are going to be replaced: it's actually more about tasks within jobs. So what we will see is a shift in tasks; those that are rule-based and easier to automate will be replaced first.

Giverny De Boeck:         30:38                So what's going to be left are tasks that are less rule-based, more non-routine tasks. And that's going to be an emphasis both in education and in companies, where you see creative elements or imagination that are very ... We say "uniquely human", but that's always ... Yeah, I wouldn't go there for the discussion. But anyway, creativity, imagination, everything that requires that kind of component will become more important in the future. So you will definitely also see shifts towards those kinds of occupations being more central, with humans in them.

A. Prof Jonathan Paxman:                      31:12    The focus for engineering has always been really on problem solving, and solving complex problems is something that's always going to require a human in the loop. They might have automated technologies assisting them, they might be working with robots, they might be designing robots. But there's always going to be people at the core of solving problems. And I think that's important.

Giverny De Boeck:         31:35                And it also comes back to how much uncertainty there is in your environment. For example, we've been studying autonomous heavy haulage systems in the mining sector, in the Pilbara. But the Pilbara is a very unpredictable environment: you have cyclones, heat waves, dust, I don't know what, things that technology doesn't like.

Giverny De Boeck:         31:53                So that's making it challenging to depend solely on technology as a company, or you're putting yourself at risk there. Because if anything goes wrong and you don't have the people, or the people have been only minimally involved with the technology, they won't have the skills or the situational awareness anymore to step in when it's necessary. So as a company, I think you really need to think about what your environment is, how much adaptability it requires and, indeed, how complex the situation is.

A. Prof Jonathan Paxman:                      32:22    I think robots are getting better at dealing with complex environments. But I'm strongly of the belief that there will always be a role for people in the loop to manage the robots, because there has to be someone giving the instruction. I don't think we're going to see robots acting completely by themselves. For complex, multifaceted work, there are going to be people at the core of it, directing robotic systems to solve particular tasks.

A. Prof Jonathan Paxman:                      32:54    There are automated dock operations around Australia and they are performing operations like moving shipping containers from shipside into stacks and from stacks onto the back of trucks. And that's all happening autonomously, but there are people in control rooms and there are people who are maintaining those machines, and who are scheduling the operations, and who are operating at shipside, and currently, there's still people driving the trucks.

Danelle Cross:               33:23                All right, at this point, we might throw it open for you guys. Do you have any questions for our panel?

Staff assistant:              33:32                We have an online question: what is currently being done to help future technology tie in with the liability landscape? There's a huge crunch at the moment where insurance underwriters have halved their willingness to provide professional indemnity insurance, anecdotally for any application that mentions automation, robotics, or control systems. Without alignment of these necessary industries, the jobs of the future are a bit hard to realise.

A. Prof Jonathan Paxman:                      34:01    That's absolutely a challenge, and it's a little bit out of my area of expertise, but legislative constraints are a big problem in the take-up of technology. Probably the main reason we don't have a lot of autonomous vehicles on our roads at the moment is not necessarily a technical problem, but a framework problem.

A. Prof Jonathan Paxman:                      34:26    I know you're thinking about ...

Dr Eleanor Sandry:        34:30                [Laughter.] There are now a lot of people, a lot of lawyers that I know, who are working on trying to help governments in different countries work out how to legislate this kind of thing. Like, who is liable? When your autonomous vehicle runs someone down, who is liable?

A. Prof Jonathan Paxman:                      34:49    That's exactly right.

Dr Eleanor Sandry:        34:50                Because basically, you've been told as a driver, potentially, that you don't have to be paying attention to the road. So is it the person who manufactured that vehicle? Is it the person who sold it to you? Is it the person who moved one of the signal things on the road, so that the car didn't know what was going on? Is it the person who crossed the road in front of you?

Dr Eleanor Sandry:        35:07                These questions have certainly become difficult again, if you like; questions that someone needs to answer, and they're certainly not simple. And I imagine that within manufacturing industries, anywhere robotics and automation are added, it's similar to what we see on the roads. I do think that autonomous vehicles in particular are not yet technically capable of driving on most Perth roads. And I'm going to go on record as saying that, because we have too many human factors on roads for that to happen easily.

A. Prof Jonathan Paxman:                      35:43    It depends a little bit on which technology we're talking about. I think Tesla's not ready. Speaking about incidents that occurred in the last few days, that was a misuse of the technology at an inappropriate juncture. But yeah, I think that's absolutely the case. We need to get a lot better than the average road toll before it's going to be acceptable for autonomous vehicles to just be let loose, because of–

Giverny De Boeck:         36:10                And to pick up on that, I think we also discussed this earlier. But if you want people, individual workers, to take or accept accountability for a process, or a task, or an activity, they also need, at least to some extent, to control that process and understand what it is. And I think that is a big challenge with automation, if it becomes so complex that we actually do nothing more than monitor it, and we have no clue what's really happening. But then they expect you to intervene when something goes wrong, which is probably the most complex situation that could possibly happen, because the technology couldn't solve it.

Giverny De Boeck:         36:46                How is that possible? How can you expect a person to take that kind of accountability when you basically keep them away from both the design of the technology and from actually using it? That's a very tricky situation. You cannot expect people to take accountability for something they don't own, for something they don't understand.

A. Prof Jonathan Paxman:                      37:05    I think we probably haven't answered the question, but except to say that it's hard.

Giverny De Boeck:         37:09                It's hard.

Audience member...:     37:12                Thank you. As we've gone through this digitisation of systems and processes, it's enabled us to analyse those data sets, then optimise those systems and processes, and then ultimately automate. And there's decision-making that happens between that optimisation and the automation, and often that's around the savings to a company's bottom line, or what's too risky to automate or not.

Audience member...:     37:35                I'm wondering if there are global ethical standards about what we should automate and what we shouldn't automate? I think, Eleanor, maybe you're best placed to start?

Dr Eleanor Sandry:        37:44                Oh dear, it's the word 'ethics' and everyone looks at me. People are increasingly seeing that there is a need to actually come up with concrete, helpful ethics guidelines that aren't just like airy-fairy things that you can't understand or implement.

Dr Eleanor Sandry:        38:02                I think that probably the most interesting attempts to do that, at the moment, are from the IEEE (Institute of Electrical and Electronics Engineers). There was a big push to write a document called Ethically Aligned Design. It's interesting to me because it was created in a way that really looked at what was happening around the world. So they involved experts, but also people who just felt they were being affected by these technologies as well, in actually reporting back and giving them ideas for what to put into these guidelines.

Dr Eleanor Sandry:        38:33                They have tended to write their guidelines about autonomous systems, rather than just merely labelling them as 'AI' or 'robotics'. And they've done that on purpose, to try to show that there's actually a whole range of technologies that are being referred to in multiple different ways that sometimes aren't that helpful to understanding what they do.

Dr Eleanor Sandry:        38:54                So I'd say that's where the biggest push is. There's also obviously stuff going on in Europe. In Europe in general, they're much more on top of, at the moment, ideas of trying to protect people's personal privacy and giving people a choice about what data is stored about them. And so there are various places around the world that you can look at which are trying to do more with this.

Dr Eleanor Sandry:        39:15                I think, in general, this maybe doesn't affect robotics so much, but in terms of the idea of big data in particular, the biggest issue at the moment is trying to work out how to legislate the large technology companies. And there are clearly beginning to be inroads made into those conversations, but there's still a lot of concern about how people are effectively held to ransom by large tech.

Dr Eleanor Sandry:        39:44                So, it's another one. There are really big problems. It's a bit like the whole legal perspective and who's liable for things: there's work being done on it, but there's a lot more work to do, definitely.

Audience member...:     39:55                I have a question. As more and more repetitive and simple tasks become automated, humans are left with increasingly complex problems that they need to solve on a daily basis. Our workdays are still seven-and-a-half to eight hours, even longer sometimes, because we can do more. What is the impact on humans of, I guess, the cognitive overload that we are experiencing? And has that really been addressed and reflected upon? Because I think the increasing mental health challenge that businesses are facing might be a result of that: we're trying to do more and more, but there's a limit to what our brains are built and designed for. So, neuroscientists: not sure to what extent they are contributing to this conversation.

Giverny De Boeck:         40:41                Yeah, I think it's a really good question; we do see that. I mean, technology can both increase and reduce demands. For example, if technology makes it easier to process administration, that can actually free up time to spend on more crucial tasks. Or, for example, with nurses: if technology is there to monitor certain biostatistics, that frees up time for them to spend with patients.

Giverny De Boeck:         41:05                So it can decrease demands, but it can also increase demands. And in terms of increasing demands, we see that a lot of the demands being studied are around monitoring and having that sustained attention, which is actually really difficult for people. We are not good at paying attention for a long time; it's already a very big challenge to have you all here for an hour.

Giverny De Boeck:         41:30                So what we see now is people having to monitor processes where basically nothing is happening for eight hours straight. Even though they're just sitting there, and some people would say, "Oh, you're not really doing anything?", it is exhausting. It is cognitively exhausting. So really thinking about how you can, for example, introduce some variety or shifts or rotations to manage those demands is definitely interesting.

Giverny De Boeck:         41:52                Another kind of demand we see happening is, for example, in call centres. For call centre agents, there is new software that basically monitors their emotions. It picks up emotional cues, and it will tell the agent to be happier, be more enthusiastic, basically fake it towards whatever client is there, depending on what that client needs. This is, again, really exhausting for people; it's exhausting to be happy all day. You do want to be miserable for at least one hour a day, or just, I don't know, not have to smile.

Giverny De Boeck:         42:24                And so not giving people that chance, not giving them any kind of discretion or control, really hampers their resources. This is where, if you talk about work design, we're always talking about a balance between demands and resources. So we talk about tolerable demands: as a company, try to make your demands tolerable. But some demands are inherent, right?

Giverny De Boeck:         42:43                For example, thinking of the community services sector: yes, there will be emotional demands, because you're working with people with disabilities, and that's not easy. But what we see is that where there are high demands, you should also think about how you can create resources to give people the energy to cope with those demands. Resources would be giving them more control, more autonomy over the work methods and the tasks they do; giving them more relational resources, letting them work together with other people to combat loneliness and isolation; skill variety; skill utilisation. There are different ways in which you as a company can actually help people, but it's definitely a challenge.

Dr Eleanor Sandry:        43:20                Thinking a bit more broadly about your question, which I think also relates to the kind of expectations that come alongside the automation of a process: you're actually going to work almost harder to keep up with the technology. This is something we see even where you're not, say, being asked to deal with harder problems. You see it in factories where people are working with cobots on production lines, because of course they are subject to a lot of monitoring of how fast they're operating and whether or not their operations are standard.

Dr Eleanor Sandry:        43:57                Now, there isn't really a simple solution to that, but the human solution, actually, is that businesses need to think about what they're doing in their workplaces. Technology isn't causing this problem. What's causing this problem is ... Well, I hesitate to say this, but clearly ... a kind of capitalism is causing this problem, because people are just interested in making more money and getting people to work harder and faster.

Dr Eleanor Sandry:        44:26                That, as we've actually discussed before, doesn't normally lead to a very healthy workplace in lots of ways. It doesn't lead to a very happy workplace; it certainly increases people's mental health issues, and it probably increases physical health issues as well, I would say. So, I think that whenever we're thinking about technology, whatever it is, we actually really have to think about the human costs that might be involved. And someone somewhere has to take responsibility for actually looking after people.

Dr Eleanor Sandry:        44:55                That's a really human thing to have to choose to do, it's something where you're relying on humans in management to make those right decisions. So it's not a direct technology issue, but it probably is becoming something that people need to think about even more now.

A. Prof Jonathan Paxman:                      45:13    I was going to comment as well that some of the things we're automating are not necessarily low cognitive demand. So if there's data analysis that's very, very repetitive, it can be quite difficult to maintain focus on it for a long period of time. And so if the introduction of automation and robotics actually means that people are able to do a more varied range of tasks through the day, that may actually improve their ability to focus and maintain it.

Giverny De Boeck:         45:43                Yeah, and actually companies would benefit from that, because if you think about effectiveness and what you want to accomplish as a company, it's not only about improving efficiency. As we all said, humans will still be involved. So if the whole of your workforce is demotivated, not engaged, no longer creative, no longer behaving proactively, that's going to have net costs for you as a company as well.

Giverny De Boeck:         46:06                So, if you want to make that assessment of these technologies, you have to take everything into account. I think that's basically what we're seeing here.

Staff assistant:              46:14                We've got two questions from online, we've got Khyati asks: how will robots assist us in building employability? And David asks: should robotics research and design consider human impact prior to implementation?

A. Prof Jonathan Paxman:                      46:28    I think the answer was yes. [Laughter.]

Giverny De Boeck:         46:34                I can answer the first one, about building employability. Although it's a slightly extreme example, there are definitely cases where robots can actually allow people who would not otherwise be able to enter a workplace at all to have a job and do something meaningful.

Giverny De Boeck:         46:51                In Japan, they have been able to allow people who are basically completely paralysed to take up meaningful employment by tele-operating robots. There are also examples where people are doing this simply through the internet, using it as a communication mechanism, which means they can then take on jobs. So there are definitely opportunities there for robotics and autonomous systems to assist with that.

Giverny De Boeck:         47:13                And the other thing, I suppose, thinking about intelligent systems as well, is that they offer the potential for people to partially control something. So there's some really interesting research around, say, wheelchairs that are partly autonomous. Basically, they assist people to get around the place, but they also allow them to be co-collaborators, co-drivers, if you like, of themselves around the world. Those sorts of examples of really close interaction with technology are interesting in terms of building completely new sorts of options for employability, for people who might otherwise have been excluded entirely from the workplace.

Staff assistant:              47:55                We have one last question from online, from Emma. Emma asks: can the theory that robotics will take over jobs really be applicable when there's so much error and unpredictable situations that occur every day, which could never be programmed into AI. Aren't we just evolving the roles within careers rather than replacing them?

Dr Eleanor Sandry:        48:15                Yeah, I think that is exactly the kind of nuanced message we want to give. I think there are pessimists and optimists. And, as we said, there are definitely advantages to automation and technology, but there are also some challenges we need to be aware of, and managing that space is going to be the main challenge. It's not that everybody's jobs are going to be replaced, because indeed, there is some complexity we might not cover, and humans will always be necessary to some extent.

A. Prof Jonathan Paxman:                      48:41    The other thing I would say about that is that there are things that humans do well and there are things that technology does well. And one of the things that technology does well is to respond very, very quickly to a certain situation.

A. Prof Jonathan Paxman:                      48:54    So, I can imagine a situation where maybe not an entirely autonomous vehicle is on the road, but one that can detect and respond to an emergency situation much faster than human reaction time. And so, we need to be thinking about those opportunities where we can get the best of both worlds, both the human ability to understand a very abstract and complex environment, but also a computer's ability to analyse enormous amounts of data and to respond to situations very, very quickly.
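[Editor's note: a rough back-of-the-envelope calculation makes the reaction-time point concrete. The speed and reaction times below are illustrative assumptions, not figures from the discussion: a commonly cited driver reaction time of around 1.5 seconds versus an assumed sensor-to-brake latency of 0.1 seconds.]

```python
# Illustrative only: distance covered during the reaction delay alone,
# before any braking begins, at a typical urban speed.
speed_kmh = 60
speed_ms = speed_kmh / 3.6          # 60 km/h ≈ 16.7 m/s

human_reaction_s = 1.5              # commonly cited driver reaction time
system_reaction_s = 0.1             # assumed sensor-to-brake latency

human_distance = speed_ms * human_reaction_s    # ≈ 25.0 m travelled
system_distance = speed_ms * system_reaction_s  # ≈ 1.7 m travelled
print(round(human_distance, 1), round(system_distance, 1))
```

On these assumed numbers, the automated system begins braking roughly 23 metres earlier, which is the kind of "best of both worlds" gain Paxman describes.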

Danelle Cross:               49:31                Well, that brings us to the end of the conversation. What a fascinating conversation. I've got another page of questions and I know that there have been more online and I'm sure you've got some more in the audience too. I wanted to thank our panelists. They have taught me a lot. I hope they have taught you a lot. This feels like the beginning of the conversation. Please join me in thanking the panel.

Live audience:               49:49                [Clapping.]

Jessica Morrison:          49:55                You've been listening to a live episode of The Future Of, a podcast powered by Curtin University. If you've got something out of this episode, please remember to share it and subscribe to our podcast. Until next time, bye for now.