Speeches

‘Regulating AI: The Art Of The Possible, The Attainable, The Next Best’: Transcript Of Speech By President Tharman Shanmugaratnam At The Asia Tech X Summit Opening Gala At Gardens By The Bay On 29 May 2024

29 May 2024

Mrs Josephine Teo, Minister for Communications and Information

Mr Mathias Cormann, Secretary-General of the OECD

Excellencies

Distinguished guests from around the world and in Singapore

Ladies and gentlemen

 

1. Our ambition in tech, and in public policy around tech, has to start with imagining the future we want to see, and especially the public good that we want to see. The ambition must involve pulling together the best minds, entrepreneurial energies and ideas from civil society and others to give us the best chance of achieving that public good. And it must involve finding a way in which technologies – including those that are now at the frontier and are pushing it forward – will be tools for the public good, and catalysts for the whole set of social and economic changes that we need to get us to better societies and a safe international order.

 

2. We know that AI, particularly generative AI, is going to have profound consequences. It's still early days, still a lot of hype, but I think the way this plays out will bring profound change in all we do, and in the way our economies are organised.

 

3. And it's fair to say that the pace of advancement of AI and related technologies is far outstripping our public policy and regulatory responses, and even the thinking on public policy. That’s not incidental. It's because AI and the related technologies around it are moving very fast. So let me make a few broad observations and reflections.

 

4. First, we need both ambition and humility in defining our goals in regulation and how extensively we can regulate AI. We have to avoid the extremes in regulatory thinking. We can't leave it to the law of the jungle; we can't leave markets to the law of the jungle and most especially, we can't leave AI to the law of the jungle. We would otherwise be letting might make right. We would be letting whichever players emerge the largest in AI dictate the norms that shape medical practice, social media, the way democracies function, even warfare. But that clearly shouldn't happen. It mustn't happen.

 

5. We can't leave it to the law of the jungle. But what does getting regulation right mean? We cannot realistically aim to ensure that AI only delivers good. That is not achievable, and if we wanted to avoid the risk of bad outcomes altogether, it would mean putting a stop to AI innovation. And that means putting a moratorium on all the potential good that AI can bring in medical science and discovery, and a whole range of other areas.

 

6. Getting regulation right must mean avoiding a search for perfection or holding innovation back until we have achieved the perfect solution where AI only gives us what is good for people and society.

 

7. To borrow Otto von Bismarck's adage on politics, regulating AI must be the art of the possible, the attainable, the art of the next best. We must go for the next best, which is to get the most good out of AI and avoid the worst. Seeking to get the most good out of AI – better jobs for a broad base of the workforce, better and earlier treatments for diseases, tackling climate change more effectively using AI tools. And seeking to avoid the worst of AI's harms to society and to global order.

 

8. We must also seek to minimise the risks and deal with unwanted outcomes. But we must know that we cannot avoid the risks altogether. For example, we must work with those whose jobs will be displaced to ensure they get new jobs and, where possible, better ones. We must educate people, starting from a young age, to deal with misinformation.

 

9. So that has to be our frame of mind. Try to get the most good and avoid the worst outcomes, but accept that there's going to be a certain amount of bad in the system. It's intrinsic to AI innovation. Seek to minimise harm but help people deal with the unwanted outcomes. The alternative is simply a complete moratorium.

 

10. My second point is that what we do now, just like with climate change, is critical. We cannot wait until after the fact to know the consequences of this new generation of AI - whether the good will outweigh the bad. We can't wait till we find out whether the singularity has arrived, whether superintelligence has come upon us, with machines having the general intelligence to do better than humans in most tasks.

 

11. We therefore have to start shaping AI and the technologies around it now. Shape how it is developed and how it is used.

 

12. It is going to be difficult. Steering AI to deliver the most good, and to prevent the worst, together with climate change, are probably going to be the most complex and important challenges facing the global community, with the most profound consequences if we get them right or wrong.

 

13. We must get past the hype and despair over what generative AI will bring. Whether we get a world of plentiful and better jobs, a world safe for democracy, depends on what we do now, working collaboratively - between scientists and engineers, public policy makers, private corporations, labour leaders, and civil society - to shape technological evolution so it delivers the most good and we avoid the worst.

 

14. My third point is that we have to think of AI governance ultimately as an enabler for innovation itself. If we just live with anything goes, it's not going to be sustainable over the long term. Innovation will not be sustainable.

 

15. Governance is an enabler that ensures technological advances, and AI advances in particular, remain trusted and accepted by societies, and it is hence critical for innovation to be sustainable. Without human guidance and international norms around the use of AI, it will ultimately run aground. So think of governance as an enabler for sustained innovation, not as an encumbrance. It's not one versus the other.

 

16. My fourth point is that we can only achieve this through international cooperation and collaboration. And we must build broad coalitions for this, not only involving the leading countries where AI’s foundational models are being built, but smaller countries which are doing AI and related technological R&D, and which are using AI to transform their economies, like Singapore. We've got to pool resources, make the most of expertise from every source, collaborate to test large language models and to do the red teaming. And ultimately collaborate to form norms that are broadly accepted internationally, even if there are variations and differences. That has to be a very important project that occupies the policymakers amongst you especially.

 

17. My fifth and last point: we have to look for early wins, even with the complexity and enormity of the task. Get momentum around what we know is doable and attainable, move with great energy and collaborate internationally for early wins.

 

18. We know the benefits are going to be profound. In health care, for example, the ability to have much earlier and far better diagnoses, such as by identifying potential pathologies from medical images. The innovations are already happening – we see some of it in Singapore, for instance in the identification of threatening eye conditions. And we can put effort into curing diseases which have no cure today, like cancer. These are applications of AI which people will see as a good, and which are worth putting a lot of energy into.

 

19. Food security is another area. People will see the benefits - using AI to forecast risks of droughts and floods, to achieve higher agricultural yields by being able to detect fine-grained changes in soil conditions.

 

20. Using AI in education to help the broad span of learners and make possible mass personalised education. Take, for example, learners who are stuck at grade-three maths problems while the rest of the class is moving ahead – helping them with new, interactive tools that get them through.

 

21. Moving on to finance, where we know the risks are becoming more extensive – the risks of AI and other technologies creating a new generation of scams and cyber-attacks. But so too the solutions. The ability to use AI to create more sophisticated systems of fraud detection and detection of bad actors has to be a priority.

 

22. So these are areas where we can have early wins, and we have to work on these early wins to give confidence to our publics that we are on the right track.

 

23. Conversely, it also means that we address the challenges that AI brings, and start moving now to avoid the worst.

 

24. In jobs, for instance. The truth is, for all the thoughtful academic debate that's taking place, we cannot say if more jobs are going to be displaced than jobs that are going to be augmented with the use of AI and other tools. It could go either way. But it depends on what we do today. We have to avoid the risk of a large segment in the middle of our societies - hardworking people with middle skills - being displaced. It can happen, but we can avert it. We've got to double down on reskilling and development of new skills, giving people the confidence to move into new jobs. It's something Singapore is working very hard on. It can be done. Start shaping that outcome now - what happens to jobs, 10, 15, 20 years down the road depends on what you do now.

 

25. Energy consumption is another challenge, because it links up to climate change. And we do know that AI's energy consumption is growing at a rate that breaks the long-term trends in energy use. So we have to couple our AI ambitions with our net zero ambitions, and with our transitional goals for achieving net zero. Singapore is taking that very seriously.

 

26. The third risk is to social trust and cohesion. Trust is already at much lower levels than it has been for decades in a whole range of societies and democracies. And AI can amplify today's toxic forces – the way information is disseminated and absorbed in society, and the way in which social distances between people grow and you even get a cementing of enmities. The worst could happen if we simply leave it to the law of the jungle. The work on this problem globally is still in its early days. We know there is a problem, but we do not yet have the tools in the democracies around the world to address it effectively, and to keep electoral democracies safe from disinformation.

 

27. And fourth, we have to avoid the worst happening in security: the risk of unsafe AI technologies embedded in weapons systems and in national surveillance systems going rogue. Every country, including every major power, has an interest in no other country having its AI systems go rogue, because it will trigger responses and you can get catastrophic results. Mutual security dictates that we cooperate to avoid the risk of AI systems going rogue.

 

28. Let me end by just saying that it's ultimately not about the technologies. Many of you would have watched the recent Netflix series ‘The Three Body Problem’, an adaptation of a science fiction novel written in China. In ‘The Three Body Problem’, the heroes and the villains were not actually the technologies, even though the technology – the quantum mechanics – was central to the story. The heroes and the villains were the scientists, the engineers, the other humans whose actions would determine the course of humanity.

 

29. It's all of us - scientists, engineers, policy-makers, private corporations, labour leaders and unions, civil society - all of us, whose decisions, disagreements, and, hopefully, growing affinity with each other because of our common interests, will determine the course of humanity.

 

30. Thank you very much.
