Why meeting the cultural and ethical challenges of AI head-on is so critical for government

SAS considers the implications of deploying AI before embarking on implementation


By SAS

12 Nov 2018

Speak to almost any member of the public about Artificial Intelligence (AI) and you’ll likely be met either with a furrowed brow of concerned cynicism, or a wide-eyed smile of enthusiasm. AI has a highly polarising effect on citizens and organisations alike, many of whom are still, understandably, weighing how AI will affect the future of humanity.

I believe much of this has to do with misperceptions about AI’s capabilities and its role in society and the economy. What this so often boils down to is a matter of culture and ethics. Culture, in terms of how AI can be successfully deployed in the workplace for the mutual benefit of the organisation and its employees. Ethics, in terms of how the algorithms that power AI are built, how they learn, and how decisions can be made fairly and with as little innate bias as possible.

These issues, and the measures you can take to address the cultural shift and the ethical use of AI, are discussed further in a paper by SAS entitled Artificial intelligence: the science of practical ethics.

What’s the big deal with AI ethics and culture?
 
1. Understanding the possibilities 
Much of the mainstream media is still debating whether AI’s impact on the future of work, the economy, the environment and society at large will be positive. What does all this opinion mean for the public sector? That there is a significant public relations job to be done if citizens and organisations are to embrace this cultural shift in the role of AI in public life willingly. Significant buy-in must be secured if citizens are to understand and embrace the real value of AI in their relationship with government bodies.
 
2. New employment possibilities 
For obvious reasons, this is one of the biggest concerns for public sector employees. AI must not be rolled out as a simple cost-cutting measure, but as a way to generate more value. While AI will doubtless enable government departments to automate many laborious tasks and processes, it will also free people for far more interesting work that draws on the innate creativity, empathy and organisational skills that machines cannot match. Once again, this cultural shift in the nature of jobs must be clearly set out if employees, and the citizens and organisations that interact with government, are to come happily along on the AI journey.
 
3. Trust in data use
In their lives as consumers, citizens understand the nature of the value exchange between themselves and businesses when they yield their personal data. Whether they implicitly trust that their data will always be kept completely secure and used ethically is uncertain. However, the balance of power lies with them. The relationship between citizens and government organisations is different: according to research by the Information Commissioner’s Office, 51% of Britons do not trust national government departments to store their personal data.[1] It is therefore unlikely that they have any greater trust in the way that data will be used. This challenge is exacerbated by stories of AI decisions being biased against certain demographic and ethnic groups.
 
4. AI control 
Where are we going with AI, and who will control its ethical development? This is a natural question for any member of society, even those involved at the cutting edge of experimentation. While there is no single roadmap, it’s important for government organisations to remember that this question is often rooted in a fear of the unknown. That fear is fanned by stories such as that of Sophia, the humanoid robot granted citizenship in Saudi Arabia, alongside a legitimate need for ethical frameworks that guide how AIs are treated by developers and those who run them.
 

How can the public sector address these challenges? 

While the issues around ethics and cultural change are serious and complex, the good news is that many experts in the field of AI are already working on frameworks to help the public sector ensure that AI is developed ethically, and that data and algorithms are deployed with as little bias as possible.

Some of these ideas operate at a more philosophical level than others, such as those proposed by the House of Lords and the Future of Life Institute. SAS, with over 40 years of expertise in the field of AI, specifically in machine learning, has created a set of practical considerations around ethics and culture for public sector organisations. What becomes apparent is that many of the challenges around data management, governance and algorithmic bias can be managed very effectively within the analytics platform that powers AI and gives it real intelligence.

You will find these different frameworks and considerations discussed in the paper ‘Artificial Intelligence: the science of practical ethics’, available at the SAS AI for public sector site.

1. The Information Commissioner’s Office, Information Rights Strategic Plan: Trust and Confidence, August 2018. https://ico.org.uk/media/about-the-ico/documents/2259732/annual-track-2018.pdf

