The AI opportunity: What we can learn from previous revolutions to fully realise the potential of AI

The AI revolution isn’t our first rodeo, says Amanda Kelly, a public services expert at PA Consulting. So, how can we learn the lessons of the past to ensure a successful fourth industrial revolution that drives innovation and better public services in a democratic and equitable way?

Almost every contemporary conversation about innovation is really a conversation about technology. Right now, AI is bringing massive disruption and, as I see it, incredibly positive possibilities. Having personally been through successful but traumatic brain surgery in the past, I cannot fail to be impressed that AI could make such procedures even safer and more effective.

Yet away from individual success stories or promises, my focus is on how the public sector can help make AI’s possibilities more real, more regularly – getting from ideas to realities. This is a three-fold opportunity: to ensure the UK is a world leader in AI; to drive real public sector benefits from the technology; and to better address the risks and challenges to ensure equitable outcomes for all. For instance, AI with reduced algorithmic bias, shaped by greater input from marginalised communities and voices, and where innovative ideas are assessed on strength alone – not on where they originated.

AI’s rising tide can lift all ships

It’s an uncertain environment. The barriers to AI entry are low, ethical considerations are tricky, and people either know or fear their jobs will disappear – as we saw in the 148-day Hollywood writers’ strike. We know this level of change brings winners and losers, but this also isn’t our first rodeo. There’s a huge body of research on previous technology revolutions, so we can make sure the process and the outcome are more democratic this time. And we can tell the story better: remind people a rising tide lifts all ships. For instance, I see organisations where apprentices now do the coding once done by graduates. The graduates take on new responsibilities – as will those apprentices in time. I remember that one of the most hated tasks when I was a graduate was searching for invoices on a microfiche reader. I would have gladly handed that task over to a machine. So, yes, jobs will disappear, but there will be different, better jobs – and society will be better off in human and economic terms.

Our experience and research into scaling innovation suggest the AI transition will be more successful if leaders in this area, such as DSIT, define the vision and expected benefits clearly, so people can be inspired by the potential that AI has to change lives for the better.

Set specific and ambitious goals

Establishing the AI Foundation Model Taskforce is a hugely positive step towards ensuring the safety and reliability of AI, and the Bletchley Declaration is a welcome commitment at an international level to agree the safe and responsible development of frontier AI. The next step to increasing the chances of AI-enabled innovation getting off the ground is for leaders to identify and define the problems they’re looking to solve, or the changes they want to bring about. An innovation mindset starts with asking what challenge we are trying to solve. So, leaders need to adopt a clear mission for the UK, in the same vein as Estonia’s commitment to be a digital society.

If the opportunity and ambition for the UK is to be the intellectual and academic brains behind AI advancement, then thinking must turn to what’s needed to achieve it. As AI respects no national boundaries, we can get ahead by shaping those global guardrails – as the UK has with online safety. This prompts questions about how we master AI, how we benefit from AI in government and industry, and how we ensure an equitable AI revolution. For instance, thinking about how Government can make decision-making, legislating, regulating, and operating easier and better. And then creating a critical mass of scientists, designers, and engineers able to lead the development of AI globally.

Then, deciding the role that AI can play in the day-to-day jobs of Government. Here, my colleagues and I have established a set of criteria to help choose the best innovations to advance. Is there a catalytic impact? Are we unlocking multiple benefits? Does this enable further positive change?

Finally, leaders will need to remain mindful of the disadvantages for some. They’ll need to think about what they are doing to mitigate or alleviate these, and how to create AI equity.

Involve the right people in the right way

Thinking about the ‘unsolvable’ problems AI could fix or the ideal scenario it could facilitate means understanding what end-users need or want. This calls for leaders to ensure people on the frontlines have their say, contribute to decisions on the innovations to pursue, and play a part in development. This includes forging connections between and within the public sector.

Appeals for ‘inclusion’ are well-intentioned but often fail to drive real change. That’s why I advocate for radical inclusion: involving legal, cyber security, and social science colleagues in AI development. Bring in the rebels, the wildcards, and those you wouldn’t typically interact with. It’s the best way to create a framework for frictionless, safe, and equitable exploitation of public data.

There’s also a temptation to compartmentalise AI from other technologies, but the technology landscape is layered. The interdependencies between AI, biotech, and quantum, for example, mean collaboration with experts in various capabilities will be vital.

Enable sustained transformation

Leaders need to make the most of routes that pull innovation through – be that partnerships, design collectives, or advance market commitments, like the one used to drive development of COVID-19 vaccines. The UK Government has a history of being an enthusiastic investor but a reluctant customer. This needs to change. And the good news is the updated Government procurement legislation will support that. However, as far as we can tell right now, there’s nowhere businesses can go to get Government advice or support on emerging technologies. Clients tell us such advice and support are urgently needed.

While it’s evident that the existing workforce will need to be appropriately trained for AI to scale, thinking needs to go one step further. For instance, what’s needed in schools to ensure the next generation is equipped for the jobs of the future? We know what some, but not all, of those skills will be. And we can already see there might be hybrid roles. For example, a GP could work with an AI co-pilot (which would also mean a new type of training that could make it easier for people to become clinicians).

Realising the opportunity

I don’t underestimate the challenge. When it comes to AI, things are moving fast. In fact, the world is already playing catch-up. And it’s also an area that often provokes a binary reaction: it’s either all good or all bad. This type of reaction makes it all the more important for knowledgeable parties to steer a nuanced and informed debate on the topic.

The road ahead may be bumpy, but applying the principles I’ve described will enable public sector leaders to drive continuing progress across both their own teams and the economy. It will allow leaders to step beyond reactionary responses and anticipate issues as a matter of habit. And it will allow ideas to get beyond the pilot stage and ensure AI can have the best impact on society – and growth.

ABOUT THE AUTHOR

Amanda Kelly, Public Services Expert at PA Consulting, has led complex change and transformation programmes across the public sector, including central government, local government and the NHS, delivering sustainable improvements.
