The UK government’s AI Safety Summit takes place on 1-2 November. What is frontier AI, what are its risks, and how are researchers already managing them? Samuel Kaski, professor at the University of Manchester and Aalto University in Finland, can offer expert insight on these questions and more. He is also speaking at the Summit’s AI Fringe in Manchester on 31 October. Professor Kaski is a leading machine learning researcher, has provided media commentary to outlets including the BBC, and directs the Finnish Centre for Artificial Intelligence FCAI.

“Frontier AI is another name for foundation models, AI systems that have been trained on large amounts of data at scale,” explains Kaski. These systems have familiar applications in chatbots and in image and text synthesis programmes. AI is now a general-purpose technology, like electricity, says Kaski, with huge potential societal benefits when it is understood and deployed responsibly. Regulation, however, is not the only way to make AI safe.

“In AI research, we strive for technologies that have built-in ways to mitigate adverse effects, such as through robust machine learning. If we impose regulation based on current limited knowledge, we will be ruling out many of the promising uses of AI like improving healthcare and contributing to sustainability.”

“As important as this Summit’s agenda of discussing AI risks is, it’s equally important to think of the good. Regulation and constraints can stop threats and are needed, but stalling development is not the right strategy. Currently, I don’t think anyone has an answer to the question of which kinds of safeguards are necessary and sufficient, and when they would be ready for deployment,” Kaski stresses. He calls for expanded public research through universities in the interest of society, rather than relying only on corporations.

Samuel Kaski is available for interview from 26 October 2023, in the UK or virtually.

Contact details: samuel dot kaski AT aalto dot fi