AI and Ethics? Find Out More From Will Knight!

Will Knight is the Senior Editor for Artificial Intelligence at MIT Technology Review and covers topics such as machine learning, automated driving, and robotics. Before joining MIT Technology Review, Will was with New Scientist, a weekly science magazine in the UK.


Today, he’ll be talking about the impact AI will have on jobs, the difference between Intelligence Augmentation and Artificial Intelligence, and the ethics of using AI. Enjoy this feature!

Wan Wei: Hello Will, could you tell us more about yourself and what you are currently doing?

Will Knight: I am a Senior Editor for MIT Technology Review, and I cover AI and robotics. That means I’m trying to explore the latest technology and cool developments in AI and robotics, but also really focus on what the implications and impact of the technology are going to be. So that’s everything from ethical questions to how businesses are using AI, and how it’s going to affect things like manufacturing and healthcare, both economically and socially. I’m here in Asia because so much is happening in Asia, and in China especially.

On the Impact AI Will Have on Jobs

Wan Wei: What is the one thing about AI and robotics that the public thinks they know, but actually do not?

Will Knight: Well, it’s difficult to generalise to all of the public, but I think people definitely do get confused about how far along we are on the quest to solve AI. That’s understandable, because it’s very easy to see demonstrations like AlphaGo, or even simpler things. When we see a small example of something like that, we are kind of hardwired to extrapolate intelligence from it. We do that all the time.

I think people don’t realise how narrow AI tends to be. That doesn’t mean it’s not incredibly powerful and incredibly useful, but it is very, very narrow and fundamentally different from human intelligence.

So, something like AlphaGo is unbelievably good at doing one very, very specific, narrow thing, but it can’t generalise. Contrast that with a child, who can learn very, very quickly how to do one task and then, given an unfamiliar thing, can transfer that knowledge. I think people don’t realise quite how alien AI actually is as a form of intelligence.

One area that I think is exciting is how AI and robotics may come together. The combination of AI and robotics is interesting to me because those fields have gone hand in hand throughout their history, but in the current industrial workplace, robots are very dumb. It’s possible that AI could change that.

That could open up a whole new avenue within the field, because currently we have these algorithms doing very specific things on computers, in a very narrow world. Once you start to bring them out into the real world, the physical world, all sorts of interesting things can happen. I think it will show the limits of AI, but it’s also quite an exciting thing.

Wan Wei: Given how specific the tasks AI is usually set to do are, how do you think this will affect jobs? It seems clear that AI and robotics will displace many jobs. Do you think retraining will outpace the rate of job losses?

Will Knight: I think there are certainly cases where software is doing that already, through simple forms of software automation, and similarly with robotic automation. But that is not actually using AI very much at all in the workplace currently.

What that means is that displacement could certainly accelerate quite significantly as people start to use AI, because that’s a fundamentally different technology moving at a different rate. Whether retraining is going to outpace the replacement of jobs is a really difficult question to answer, because it depends on things like education policy; there’s a shortage of people to fill jobs in the US, in high-tech jobs especially. So, it will depend on government policies.

Nobody really knows quite how this will play out, because the technology is not really having an impact yet; it’s still at a very early stage. People don’t quite know how it’s going to affect work. There will be a lot more combining of human and machine work than people realise because, going back to the point about AI being very narrow, it’s not the case that you can just take AI and replace a doctor with it. It can do one very, very narrow thing that the doctor does extremely well, but you’re going to need human intelligence to knit those things together. So, I think the technical limitations are an important part of that picture for sure.

On the Difference Between Intelligence Augmentation and Artificial Intelligence

Wan Wei: As you pointed out, the uses of AI can currently be very narrow, and there will usually be a need for human intelligence to bring together what AI does. In that case, what is the difference between AI and human Intelligence Augmentation? Don’t they both require the use of Artificial Intelligence?

Will Knight: That’s a good question. We’ve been seeing basic intelligence augmentation for centuries, in certain ways; everything from writing to using computers is a kind of augmentation of our intelligence. The newer forms of this technology, which are good at learning and doing certain tasks incredibly well, seem to be doing more and more, and how that’s going to affect things is a new kind of question.

What we’ll see is a new way of working with technology. People will have to learn how to use not just passive tools but tools that are capable of learning; it becomes more like training little software algorithms rather than just programming them. It’s fundamentally a different way to work. A good way to think about this is to look at the case of chess.

Chess is a controlled environment, but we’ve seen a lot of interesting impact from technology there, preceding the current AI. We’ve had computers that have had a big impact on the game and can play as well as people. What that actually meant is that, rather than people simply losing interest in chess, a new style of chess emerged, where players offload some of the work to the computer, which allows them to focus on the creative side of the game. That’s happening more and more with these newer programs, which can do newer kinds of things in chess.

You can see in that example how people are starting to work with these systems. How that translates to different jobs will probably depend on the specific tasks involved, but I think the hybrid chess playing you see there is a good way to look at the question.

On the Ethics of Artificial Intelligence

Wan Wei: If you have to choose just one area in AI to watch this year, what area would it be?

Will Knight: I think it will be the ethical implications of AI, and that’s everything from issues of bias and diversity in the field, to uses in things like military applications obviously, but also things like self-driving cars. Healthcare raises a lot of ethical questions too, such as how we think about the impact of the technology on the people being affected by it. I have to say I think that’s really quite important; when you have a technology moving very quickly and affecting a lot of things, it’s important to think about.

So, I like to think about that. In the case of self-driving cars, for example, we’re testing them on public roads using experimental algorithms. That could have a huge benefit for society, but it’s also an interesting question whether it’s right that people are guinea pigs in this grand experiment without having some say in how the technology is being used.

There are a lot of examples where you could say that any powerful technology has a lot of potential for good, but also potentially for harm as well. So, it’s important to think about that, and I think that’s really the key. I think everybody should think about that.

Wan Wei: Thank you for your time and the insights you have provided. On a parting note, do you have anything else to add?

Will Knight: Related to the topic of agents, you know, I recently got one of these little personal robot toys, which uses deep learning.

It’s kind of a toy, but it can do voice recognition, it can recognise you, and it does very simple things in response to you. It’s quite interesting to see how people are beginning to experiment with that, and I think we will probably see quite a bit of activity in personal robots, which could potentially also serve as virtual assistants. It’s not easy to do well, but there seems to be a lot of potential for that sort of thing to take off.

Wan Wei: Thank you so much for your time today Will.

Will Knight: Thank you.