Bulletin No. 1, 2021

THE NEW GOSPEL ACCORDING TO A.I.

'If the machines keep getting better and they become way more reliable than we are at certain tasks, then the users might be justified not to feel responsible for the machines' judgements. It would actually be more reasonable not to step in,' said Professor Erler. But for now, we will have to take care not to put all our trust in AI, not least because many AI systems are black boxes, as we have seen.

The fact that we do not always know the rationale behind a machine's decisions leads to an important ethical question: is it at all responsible to use something we do not fully understand? Professor Erler said it may be justified if the machine consistently delivers good results and the outcome is the only thing that matters, as in a game of chess, where all we probably care about is having a worthy opponent. In cases where procedural justice is crucial, though, such as using AI to predict recidivism, we will have to do better than blindly following the machine's advice. 'What you'd ideally expect is a list of reasons from the machine. That doesn't mean you have to know all the technical details behind it, and you might not be able to. As long as it gives you a justification, you can go on to evaluate it and decide if it's any good,' said Professor Erler, hinting again at humans' irreplaceable role in the age of AI.

Another weakness of AI we must bear in mind is the bias it inherits from the data we train it on, as discussed earlier in our story. We have seen that a machine feeding on incomplete knowledge can lead to real, lived societal harm, aside from producing uninspired art. Worst of all, the bias may go unnoticed, with machines exuding a veneer of neutrality. Speaking of the ethics of autonomous vehicles, Professor Erler brought up a particularly chilling example of such harm. 'There've been surveys on how the moral principles guiding a self-driving car may differ around the world. Some societies seem to think that people of higher social status are more important morally and deserve more protection. Do we want our cars to act upon these sorts of beliefs?'

For as long as machines continue to work under our influence, we will need to address their bias as responsible developers and users. A big and diverse data set for them to feed on will of course be imperative, but people other than those involved in building and training them (ordinary citizens like ourselves) also have a role. 'One thing we could do is to report instances of bias when we think we've encountered them, and that can contribute to a conversation,' said Professor Erler, citing the automated recruiting system at Amazon that carried forward the company's gender disparity and rejected applications from women. 'In some cases, there may be no bias after all and some groups do fit better in certain areas, but in others the bias is real. If there's a discussion and an awareness of the problem, there will be an incentive for it to be rectified, as in Amazon's case, where they stopped using the system after becoming aware of the issue.'

'I LOVE SCIENCE FICTION,' said Professor Mik, who is certainly no stranger to the trope of robots becoming human and superhuman in the films she has watched and rewatched. But when asked about the prospect of sentient machines being created in reality, and how that might change our view on AI personhood, she was quick to brush the idea off. 'When we're that far, we're going to have bigger problems than AI personhood. We'll probably not be around. In any case, you'll know what level of technological progress we've reached when you actually read the literature written by those involved in developing the technology.'

And as other AI researchers have pointed out, there is no reason why we would want to invest in making sentient machines when the very purpose of having machines in the first place is to make them serve us as we please. 'What does it give you to create a robot that feels? It only gives you trouble. You can't talk back to your Alexa anymore,' said Professor Mik.

Professor Erler agrees it is a remote prospect, but he suggested that some of these more speculative scenarios might be worth thinking about on a philosophical level, as his former colleague at Oxford, Dr. Toby Ord, does in his book The Precipice. 'The argument is that if we wait until AI reaches that level of development, it would probably be too late. We would no longer be able to place constraints on its design and prevent catastrophes.'

It is always interesting to get ahead of ourselves. In fact, it is important that we do: how else can we see beyond this narrow slice of existence we call the present? But the present has its own problems, pressing ones indeed. When it comes to our current day-to-day negotiations with AI, there is a broader truth to what Professor Mik said as a fan of sci-fi: 'Keep sci-fi away from law.'
