CSW asked a cross-section of officials to play around with ChatGPT and establish how good it is at some of the everyday tasks they perform in their jobs. Here, Greg from the Department for Business and Trade asks for help designing net-zero policy
I was interested in testing whether ChatGPT could be used to support policy development, as a tool to help civil servants draft advice on key policy issues.
As a starting point, I was curious to see what it made of some of the Spring Budget announcements (although this isn’t my area of expertise), and whether it had captured the key arguments relating to these. I asked it about the pros and cons of the chancellor’s “rabbit out of the hat” budget announcement: the abolition of the Lifetime Allowance (LTA) for pensions. ChatGPT identified some of the key advantages and disadvantages of the measure in simplistic terms, and correctly identified that the measure would “help high earners” (which the shadow chancellor might well regard as an understatement).
However, it didn’t seem to grasp the rationale: the argument that the LTA created a perverse incentive for higher earners in senior roles – including NHS staff and a high proportion of the over-50s – to retire early rather than continue working and face a punitive tax charge. Nor did it make any link to the other, related, budget announcements on pensions. Crucially, ChatGPT discussed the policy in hypothetical terms and appeared unaware that the LTA had been repealed at the Spring Budget – important context, and a serious factual omission if an official were relying on this information.
"It tended to present complex issues in narrow, simplistic terms, missing important links to related issues or broader questions"
I was also curious to see whether ChatGPT could help design policy to tackle one of the key strategic challenges facing the country: climate change. As someone totally unfamiliar with net-zero policy, I asked the broad question of what the UK could do to meet its net-zero targets. The answer seemed sensible (if rather superficial) and provided useful background to the topic.
I decided to explore a couple of areas in more detail to see if ChatGPT could provide more specific background information: if I were writing advice on a policy in this area, I’d probably want to know more about current UK climate policies. I asked what the UK was doing to invest in carbon capture and storage. The answer seemed to provide useful background, with some key figures and other factual information (including the recent budget announcement of £20bn of support).
I would also be keen to find out about international approaches, so I asked ChatGPT what measures the US was adopting to support net zero. Again, it gave a slightly limited but informative summary of some of the steps the Biden administration has been taking to tackle climate change.
Overall, ChatGPT performed best when tasked with basic fact-finding on clearly defined topics, although I wouldn’t currently be confident using it in a work context, given the number of major factual inaccuracies and omissions. It also tended to present complex issues in narrow, simplistic terms, missing important links to related issues or broader questions. I could certainly see this form of AI having some limited applicability as a research tool in the future, but I think we’re a long way from seeing the machinery of government supplanted by actual machines!