
Regulating Artificial Intelligence: Interview With Professor Simon Chesterman

Professor Simon Chesterman: Law should encourage us to think carefully about the consequences, think carefully about who bears the risk, who bears the harm, who bears the loss.

Today, we’re honoured to have Prof. Simon Chesterman, Dean of the National University of Singapore Faculty of Law and Editor of the Asian Journal of International Law. Prof. Chesterman was a speaker at IoT Asia, and today he will be talking about the challenges the law faces in regulating Artificial Intelligence systems.

Wan Wei: Hi Prof. Chesterman, thank you for accepting our interview today. Could you tell us more about yourself and what you are currently doing?

Professor Simon Chesterman: My name is Simon Chesterman. I’m Dean of the National University of Singapore Faculty of Law, and my research has long looked at public authority in challenging situations. That has included countries that are falling apart or under threat of terrorist attacks.

More recently, I’ve been looking at artificial intelligence systems that don’t fall neatly into our traditional means of governance and our tools of regulation.

Professor Simon Chesterman: On Regulating Artificial Intelligence

Wan Wei: As you mentioned, regulation sometimes struggles to keep pace with new technology. With regards to Artificial Intelligence systems, what are some of the legal challenges when it comes to regulation?

Professor Simon Chesterman: Artificial intelligence is a very broad concept. It refers to a whole bunch of technologies, but I group the challenges it poses into three categories.

The first is that AI systems can do things at a speed that departs from the way in which the law is normally able to handle activities in the public space. Whether it’s competition law or high-frequency trading, the speed with which artificial intelligence enables decisions to be made can pose regulatory challenges. However, those are mostly practical challenges — they’re not really conceptual challenges to the model of regulation.

The second challenge is autonomy. Autonomy refers to things like autonomous vehicles, the possibility of autonomous weapon systems, and government decision-making by algorithms. The idea of machines making decisions independently of humans raises real challenges, such as when we think about how we allocate losses. If an AI car, an autonomous vehicle, crashes into you, who’s responsible? It changes how criminal law operates. If an AI vehicle speeds or runs a red light, who’s responsible? What does it mean for criminal law in those contexts? These are practical consequences or practical challenges, because the issue is: how can you use the law to manage risk and allocate loss? It becomes trickier with decisions that we intuitively think should be made by a person and not by a machine.

An example of this is a recent case in the United States involving a sentencing question. A man had been found guilty of a drive-by shooting. How long should he be sentenced for? The judge, as was often the case in Wisconsin, used software called COMPAS. COMPAS made a recommendation, and the judge went along with it. Now, if the judge makes the decision, then it’s defensible, because the judge has presumably taken into account all the factors. However, if a machine makes the decision, even if it’s taking into account all of those factors, for most of us at least there is an intuitive resistance to handing over that discretion.

Consider, also, autonomous weapon systems. The idea that in a battle a machine should be entitled, should be empowered, to decide who it kills feels wrong, intuitively, to many of us. We might not necessarily want a person to be making those decisions either, but it feels odd, it feels wrong, that a machine is delegated those powers. So that, if you like, is the second area: autonomy.

The third set of challenges is what I call opacity: the inability to understand why and how a decision is being made. That becomes a problem where not just the decision, but the reasons for the decision, are important. In some practical situations, like whether a medicine works, we don’t really care why it works. Imagine a statistical trial showing that if you take this drug, then 95% of the time you’ll get better and 5% of the time you’ll get worse.

Maybe it’s a risk worth taking, and we might not really care exactly what happens at the molecular level — that’s true for a lot of medicine. When it comes to law, however, the reasons why a decision is made are almost as important as the decision itself. In law, I think we really do need to understand why and how decisions are being made when they affect the rights and obligations of individuals.

Wan Wei: As laypeople, we’re sometimes not interested in the legal debates and the technicalities. We see the law; we follow the law. What is one thing you think a layperson should know about the law regarding Artificial Intelligence systems?

Professor Simon Chesterman: The main thing is that the law, for the most part, can deal with AI systems, and that AI is a little overhyped, with the idea that robots are suddenly becoming conscious and acquiring rights. I think we’re a long way from that. It remains the realm of science fiction.

What AI really means in many cases is just highly developed software: software that operates faster and is able to operate more independently. It might be harder to understand, but for the most part it doesn’t actually change liability. Take autonomous vehicles. Even if you’re sitting there with your hands off the wheel, playing with your phone, if the car crashes into someone it’s still your fault under the current law. And that’s because the law at the moment hasn’t changed. It might change in the future, but at the moment most of the legal system deals with AI the way it deals with anything else: by trying to allocate risk and allocate loss.

Wan Wei: How do you think the law might change in the future?

Professor Simon Chesterman: What’s possible is that, in terms of criminal law, it might eventually get to the point where it’s actually unfair and unrealistic to have a notional driver. In the United Kingdom, a Law Commission report is proposing a different notion of the person in charge of a vehicle: if they’re holding the wheel, they are the driver, but when they’re merely in charge, they are the “user-in-charge”.

You might move to a situation where you have different notions of responsibility. As vehicles become more and more autonomous, moving up through SAE (Society of Automotive Engineers) levels 3, 4 and 5, you might eventually get to the point where it’s not the notional driver (the person behind the wheel) but the manufacturer, or the seller, or potentially the owner, who is responsible for any damage that’s caused.

That’s some way off, because at the moment no legal system allows fully autonomous vehicles on the open roads, but that may be no more than a couple of years away.

Wan Wei: As the Dean of NUS Law School, what do you think lawyers can do to prepare for a legal landscape that involves Artificial Intelligence systems?

Professor Simon Chesterman: I’m old enough that when I was a summer clerk during law school, I spent time at a law firm, and I remember witnessing an incident with two associates. They were being paid a couple of hundred dollars an hour, and one was reading a document out loud while the other checked a copy of that document to be sure it was a true copy.

Even then, I thought that this was a complete waste of time. Today, no one would waste time doing that. We would scan the document and compare versions.

In microcosm, that’s what lawyers need to do: adapt, and look ahead at how technology is going to introduce efficiencies into their work and sharpen the areas in which they need to deploy their skills. What we do at the National University of Singapore is not to try to teach people how to use today’s technology; they do that mostly on their own. In the same way, we don’t just teach what the lawyer’s role is today. What we are trying to do is give them skills so that, ten years from now, they can work out what the law is through legal research, but also skills that let them take advantage of the technologies that are around.

For law firms, what we are seeing is a possible shift in the business model. Law firms are being pushed by globalisation like everyone else, but also by technology. A lot of law is not rocket science, and so basic research, basic compliance, that kind of box-ticking role of lawyers, I think a lot of that will be outsourced to machines. What will remain, however, is that humans, and the machines we are working with, will still have disputes, and people will still need clever minds to help them think through a dispute, the kinds of strategies that can be taken to avoid a dispute, or possibly how to resolve one. I think it will be a long time before the machines take over all of that.

Wan Wei: Do you think that there are scenarios in which AI will not benefit the field of Law or society?

Professor Simon Chesterman: There are two things I worry about with AI.

One is that it will reduce human-to-human interaction. If an AI system is employed to optimise things, that often assumes the purpose of life is optimisation. But optimisation is a pretty narrow metric; life has to be about more than that. Life has to be about optimising relationships, friendships and fulfilment, all that kind of idealistic stuff. So that’s one way in which I worry about AI taking over many decisions in society.

The second, more concrete, worry is autonomous weapon systems. I think there is a very good argument that autonomous vehicles will save lives. At the moment, about a million people die every year in traffic accidents around the world. Autonomous vehicles are not yet safe for humans, but it seems they will be. Autonomous weapon systems are much more dangerous. Quite apart from the danger of malfunctioning, they will further lower the cost of war. We’ve seen that with drone strikes, which make it easier for countries like the United States to project power abroad without the risk of losing their own soldiers.

I do worry that autonomous AI weapon systems will lower the cost for countries, but also for non-state actors, potentially terrorist groups, to engage in violence. So that’s the concrete and immediate worry about what AI can do.

Wan Wei: Thank you for your time and your insights. On a parting note, do you have anything else to add?

Professor Simon Chesterman: I think it’s quite common for technology and law to be seen in conflict — and in many ways, that’s as it should be. Law should not be leading technology, but law should not be unnecessarily constraining it either.

What I think is happening in AI is that we see, at the moment, a real tension: autonomous vehicles are pushing the limits of the law, as is algorithmic decision-making, and that’s fine. Law should encourage us to think carefully about the consequences, think carefully about who bears the risk, who bears the harm, who bears the loss; but it shouldn’t constrain innovation.

I think one of the challenges for lawyers and for people involved in the technology space is to make sure that law doesn’t unnecessarily constrain technology and give up opportunities. But we must also be careful that technology doesn’t race so far ahead of the law that we end up regretting that we weren’t a little bit more cautious in turning machines on in the first place.

Wan Wei: Thank you, Professor Chesterman.
