The narrative surrounding artificial intelligence in the workplace has become remarkably consistent across boardrooms and government departments. Senior leaders in the public and private sectors speak of revolutionary efficiency gains, transformative digital strategies, and the dawn of a new era in productivity. Yet beneath this optimism lies a more complex risk that threatens to undermine the very transformation it promises to deliver.
Recent research from Indeed, encompassing 1,944 employers, 898 senior managers, and 570 HR decision-makers, reveals what Matt Burney, senior talent strategy advisor at Indeed, describes as a "pretty shocking" disconnect between executive vision and employee experience. The study, conducted last year, exposes a fundamental gap that could derail AI implementation across both public and private sectors.
The statistics reveal a stark organisational misalignment: while 60% of senior managers report feeling supported in their roles, only two in five employees share this sentiment. This disconnect represents a fundamental failure to align the pace of technological change with human capacity to adapt.
The research also reveals that workloads have increased across organisations while work-life balance has deteriorated. More tellingly, employee concerns about AI don't centre on traditional job replacement fears, but on something more troubling: the worry that roles aren't being enriched by AI but rather "streamlined a bit out of existence."
The deskilling dilemma
This disconnection manifests in what Burney describes as "rust out", borrowing terminology from presenter Rebecca Levitt to describe a more pervasive form of workplace deterioration than traditional burnout. Rather than the dramatic collapse associated with burnout, rust out represents "that slow erosion of trust and loyalty and respect, and it just winds up with people who are very, very disengaged."
The fear of deskilling takes on particular urgency when considered alongside research from the Institute for Public Policy Research, which warned that eight million jobs could be at risk from AI in a "full displacement" scenario. However, Burney argues these aren't inevitable casualties of progress but rather "the warning light on the dashboard" – a signal that organisations must act decisively to reskill and upskill their workforce.
Government data reveals that across every industry, approximately 10 to 10.6 hours per week are spent on manual tasks that could be automated. In talent acquisition specifically, this figure rises to 14 to 15 hours weekly. "If we all know that in everybody's job, there are 10 hours of wastage, we need to identify that wastage really quickly," Burney explains. "It doesn't mean that your job goes part-time. It means that we probably didn't hire you to do that. We hired you to do this thing over here."
Communication is key
Central to the disconnect between leadership vision and employee reality is a fundamental failure of communication. Leaders believe they've adequately communicated their AI strategies through town halls and email campaigns, but employees remain unclear about what these strategies mean for their individual roles.
The problem extends beyond the message to understanding and buy-in. Burney illustrates this with an anecdote about a friend who received AI-generated training materials about new AI tools.
"She read it and didn't understand a single word of what was sent to her," he recounts. When she tried to use the company's internal AI tool to explain the AI-generated content, it represented "the worst of every outcome humanly possible." This example highlights the possible dangers of using technology to communicate about technology without human oversight and explanation. Organisations may create recursive loops of confusion that could alienate the very people they need to embrace change.
The current approach to AI implementation often focuses on automating routine tasks – what Burney calls "the drudgery piece." While this creates efficiency gains, it may generate uncertainty about job security and career progression. More importantly, it represents a limited vision of AI's potential.
"Should you not also be looking at the very gratifying, strategic, in-depth stuff that takes time and effort?" Burney asks. "You should be looking at your automation tools to go and be better at that as well."
This shift in perspective could transform AI from a threat to job security into a catalyst for more meaningful work. Instead of simply removing tasks, organisations should consider how AI can enhance strategic thinking, creative problem-solving, and complex decision-making.
Addressing these challenges requires, as Burney puts it, a "skills-first approach" – moving beyond traditional job categories to focus on capabilities and development potential. This means organisations must first understand their current skills inventory before implementing automation.
"If we're not clear on what skills we've got in our business already, how do we know what skills we need?" he asks. For employers, this means quickly identifying desired skills and investing in development opportunities. For employees, it requires proactive responsibility for skill development and honest assessment of capabilities and gaps.
The communication challenges also extend to external recruitment practices. While many EU organisations are already well into ramping up towards the upcoming obligations under EU AI rules on transparency and disclosure of AI in recruitment, UK-based organisations should look to front-load and pre-empt some of these rules by adopting best practices where appropriate.
This transparency, Burney argues, isn't just about ethics but also about building trust and understanding. When candidates understand how AI is being used in the recruitment process, they can better prepare and engage with it, potentially leading to better outcomes for both parties.
A unique opportunity
While private sector businesses have rushed to implement AI solutions, often creating the very disconnection that Indeed's research identifies, the public sector's traditionally slower pace could be advantageous. "The public sector has moved slower but has an opportunity to be more strategic," Burney observes, noting this strategic advantage comes with unique responsibilities.
"Public sector transformation depends on both trust and confidence. If you've got people exhausted, excluded from decisions, they won't trust the process – that's going to spill over into service delivery," he explains, noting that the public sector has an opportunity to lead by example, demonstrating how technology can improve outcomes while maintaining the human elements that citizens value in public services.
Success in AI implementation requires new metrics beyond efficiency gains and cost savings. Burney calls on organisations to consider engagement levels, skill development, and employee confidence in using new technologies. This broader view helps avoid optimising for the wrong outcomes – appearing efficient while undermining long-term capability and morale.
The future of work will be characterised not by human versus machine competition, but by human-machine collaboration. "AI is going to be kind of the amplifier, not the replacement," Burney says. However, achieving this vision requires organisations to bridge the gap between leadership hype and workplace reality.
This means being explicit about implementation intentions, honest about potential impacts, and genuine in their investment in human development. The stakes are particularly high in the public sector, where the success of digital transformation depends on maintaining citizen trust while improving service delivery.
Visit Indeed's Public Sector Talent Hub to learn more about how you can level up your workforce strategy.
The information in this article is provided as a courtesy and for informational purposes only. Indeed is not a legal advisor.