Bulletin No. 1, 2021

cases there are. All it needs is a good amount of training, through which it can learn the rules and exceptions from samples we provide it with.

‘Efficiency and personalization are what usually motivate the use of AI in public administration,’ said Prof. Wilson Wong, director of the Data Science and Policy Studies programme. Aside from unlocking the wealth of data around us, an intelligent automated system can help respond to the many different needs of citizens around the clock. In Japan, for example, chatbots have been employed to give more individualized and accurate information on government services. From e-government to e-governance, public goods provision to policymaking—there is much potential for AI to do the social good that many have called for lately.

BUT AI, TOO, HAS ITS LIMITATIONS. For one part of her dissertation, Shuli ended up ditching the AI model and went with the classical statistical approach, having compared how the two performed in discerning the sentiments in the Weibo stories. It could be that the model needed more data for training, Professor He suspects, but there might be no way of knowing what went wrong. Indeed, many AI models are what computer scientists call black boxes: their decision-making mechanisms are so opaque that it is virtually impossible to diagnose the errors they make. At any rate, AI has not had much of an edge over traditional methods to begin with in terms of understanding emotions. ‘The model can categorize a sentiment as positive or negative pretty decently, but it doesn’t tell you how positive or negative it is,’ said Shuli of her experience performing sentiment analysis using AI.
‘And if you ask the model to be more specific and return anything more descriptive than a label that says “positive”, “neutral” or “negative”, you’ll probably get something wildly inaccurate.’

Things get even muddier when you are dealing with circumlocutions, like the sarcasm in our opening tweet. Solutions have been proposed to give AI models an awareness of context, as Professor He noted, but for now, machines often still depend on human calibration when they run into this kind of problem. Beyond the understanding of social media parlance, this lack of tacit knowledge is a major reason why AI is not playing a more decisive role in public administration.

‘There are many misconceptions about what AI can do,’ said Professor Wong, who has been part of an Association of Pacific Rim Universities (APRU) project exploring AI’s capacity for social betterment. ‘With less controversial matters like renewing driver’s licenses and handing out consumption vouchers, which are really just matters of verifying the applicant’s eligibility, surely AI can be of help. But how much further can it go?’

Consider university admissions. On top of academic results, the board will look for certain personal qualities: being principled, willingness to communicate, honesty, and so on. These are not exactly subjective, but they are hard to define, understood only through socialization. If we are to replace human admissions officers with machines, the challenge will be for the machines to understand these qualities in mathematical and logical terms. How are we to create an algorithm for that?

‘The same goes for court trials. It’s hard to imagine a formula by which a machine can determine if the defendant is remorseful, however fair it might be to have a robot as the judge.’

And here AI hits another roadblock: it is rarely even impartial. We have seen that the rules by which an AI model makes judgments stem from the samples we choose to train it on.
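A toy sketch can make that last point concrete. The following is purely illustrative—not the model used in the research described here—but it shows how a minimal word-count sentiment classifier simply mirrors whatever associations its training samples happen to carry: because the (invented) word ‘delay’ only ever appears in negative examples, the model will brand any sentence containing it as negative, however mild the sentence actually is.

```python
from collections import Counter

def train(samples):
    """Tally how often each word appears under each label."""
    counts = {"positive": Counter(), "negative": Counter()}
    for text, label in samples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose training vocabulary overlaps the text most."""
    scores = {
        label: sum(c[w] for w in text.lower().split())
        for label, c in counts.items()
    }
    return max(scores, key=scores.get)

# A deliberately skewed (hypothetical) training set: 'delay'
# occurs only in negative samples, so the model inherits
# that association wholesale.
samples = [
    ("terrible delay today", "negative"),
    ("the delay is unacceptable", "negative"),
    ("lovely quick service", "positive"),
    ("very happy today", "positive"),
]
model = train(samples)
print(classify(model, "the delay was fine"))  # → negative
```

A mild sentence is judged negative purely because of the sample skew—the machine has no notion of ‘fine’, only of the co-occurrence patterns it was fed. Real models are vastly more sophisticated, but the dependence on the training sample is the same.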
If the samples are biased—as they

To be data-literate, ultimately, is to have the knowledge to use data in a way that improves your life while not being enslaved by technology.

RkJQdWJsaXNoZXIy NDE2NjYz