Jobs
AI and jobs – the critical decision that could mean success or economic disaster
With Artificial Intelligence now a strategic imperative for many organizations, there are rising concerns over AI’s potential impact on employment, recruitment, skills, and – by extension – the education system’s readiness for this new world.
So, will AI decimate the jobs market? Or will it do the opposite and create new jobs, services, and companies, upskilling workers rather than pushing them towards the exit? These were among the questions for a recent Westminster policy eForum on AI and employment.
Giving the keynote was Professor Gina Neff, Executive Director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, and Co-Chair of the Special Advisory Committee at the Trades Union Congress (TUC), the UK’s union federation. In both roles, she looks at the real-world impact of technologies on society, and has been a key player in proposing legislation to protect workers in an AI-infused world.
In April, the TUC launched the draft Artificial Intelligence (Regulation and Employment Rights) Bill, a ‘ready to go’ solution, with the help of the Minderoo Centre and the Special Advisory Committee.
The Bill is not the same as the UK Government’s proposed legislation on the safe development of AI, which is being debated in the British Parliament this month, in advance of the second AI Safety Summit, which will be co-hosted by the UK and South Korea in Seoul on 21-22 May.
So, what were the main factors driving the TUC’s Bill, which seeks to protect both workers and jobseekers? Giving the keynote, Neff set out a pragmatic vision of AI’s rapidly developing role:
There are three key things that we think about with the challenges of AI and employment. The first is what we might term ‘algorithmic management systems’ – automated systems within the workplace that manage, that make decisions about schedules, and choose who will be working and how they will be working. These systems are already being used in performance management, and in hiring and firing.
The second is how people are integrating these tools into their workflows. How are companies changing? And how is the work at hand changing? This is much of what we’re already seeing professional workers talk about in terms of how they approach using tools like Copilot in their everyday jobs. And that, of course, leads us [third] to thinking about efficiencies and job replacement.
On that point, Neff’s prognosis is:
The best economic estimates that we have from the OECD, from the International Labour Organization, and others, suggest that a net job gain, or a neutral outcome, is an extremely likely result of this wave of AI development.
This echoes findings of the World Economic Forum (WEF) on Industry 4.0 technologies in general, with the WEF also suggesting that, without inclusive AI development – AI that protects the interests of humans – sustainable development will be impossible.
But of course, none of this means that millions of jobs won’t be lost or changed dramatically. It merely suggests that new jobs, services, and companies may appear to balance those losses – as has happened with cloud, apps, mobility, e-commerce, and more. Professor Neff added:
The headlines say that AI is taking all the jobs, but that is not a given. The IPPR [think tank the Institute for Public Policy Research] has just released a study on what AI could cost in terms of UK jobs overall. The high number was eight million, and the low number was zero, or slightly net positive. So, the jury is still out in terms of job replacement.
What’s going on?
Eight million jobs would be one-quarter of the current UK workforce, so a cynic might observe that a spectrum that stretches from AI taking 25% of all jobs to a marginal net-gain is tantamount to saying, “Nobody knows what the future holds”. But is that fair?
Not entirely. The IPPR’s March 2024 report presented a more complex scenario than Neff’s summary implied – one that puts the onus on bosses to pick one of two options that may decide the UK’s economic future.
The think tank looked specifically at generative AI and its impact on knowledge workers. It warned that 11% of such tasks are already ‘exposed’ to the technology – barely 18 months into the ‘AI Spring’ – and that this figure could increase to 55% if AI becomes more deeply embedded in organizational processes.
In these early, experimental phases of deployment, the IPPR said that administrators, marketers, authors, copywriters, translators, and medium- to low-earning support workers will be most affected – a scenario that, in the think tank’s view, will disproportionately affect women.
However, if and when AI becomes more deeply integrated into organizations, then professional jobs in areas such as finance, taxation, IT, design, and middle management would be affected. Ultimately, the report suggests that – if society accepts the change – teachers, doctors, hospitality workers, and others would also be deeply impacted, as more and more work is built around AI.
In other words, an AI-first work environment would be very different to a humans-first culture, and that needs careful management and consideration.
At this point it is worth reminding ourselves of the promise that is constantly made by Industry 4.0 tech vendors: that their tools will sweep away boring tasks and free up workers to be creative. As previously reported (see diginomica, passim), much generative AI deployment is doing the exact opposite: automating creativity itself, and devaluing it in financial terms.
By that crunch point in AI deployment, public and organizational policy would be critical, says the think tank. In the IPPR’s view, an AI-augmented economy could spur economic growth of 13% (2023 UK growth was 0.1%, according to the Office for National Statistics), while a “full displacement” outcome could see up to eight million jobs lost with zero economic gain.
In other words, the future is squarely in business and political leaders’ purview to map out. Adopt AI strategically to make your human workers smarter, and the economy wins. But use AI tactically to slash costs and replace humans with machines, and the net result would be zero gain for millions of jobs lost.
To any veteran tech industry observer, this is both the critical issue and the point at which it is hard to feel optimistic. This century, survey after survey of incoming technologies that promise to augment human skills has typically found that business leaders’ priorities are to slash costs and do more with less, rather than make their organizations smarter.
Exceptions and rules
Will AI be the exception? The obvious answer is: why would it be? One way to ensure the most beneficial and productive outcome would be to ringfence many jobs so that humans remain in the loop, says the IPPR. But whether that would nurture high-quality jobs for the long term or relegate workers to low-level support tasks is an open question.
There is a newspaper that is already using AI to generate its news stories – the very core of a long-established business – and employing ‘reporters’ to make their output sound more human: a strategy that, in light of the IPPR’s report, seems witless and short-sighted. Where is the investment in people, in training the next generation of experienced professionals?
In services-based economies, consider too the percentage of the population that currently works in call centers – in many cases, in former industrial heartlands where other work is scarce. Such jobs are typically insecure, stressful, repetitive, exploitative, target-driven, and have high churn rates. But with AI and chatbots sweeping in, how many of those jobs will even exist by 2030? And what will they be replaced with?
And as noted last month, digital poverty leads to real poverty, and vice versa, and is a far more widespread and damaging problem than digital exclusion. So, the danger of AI reinforcing the divisions between digital-haves and have-nots is significant.
At the Westminster eForum, Professor Neff observed that other aspects of the AI age make putting humans first an urgent objective:
Tackling the kind of bias that is built on the data that’s training and building these systems is paramount for making sure that we have good, positive outcomes.
The second issue is trust. We face a broad set of concerns around how we are going to have trust in society. How are we going to ensure that our technologies are building and reflecting positive trust, not only on social democratic levels, but also within industrial relations?
And third, another challenge will be around efficiency and efficacy. The point and hope for AI-powered systems is that they will make our lives easier, that they offer the promise of jumpstarting productivity.
But if that’s not working, then where is the burden of these systems going to fall? Will it, disproportionately, be on some people more than others? Are these technologies truly representing an efficiency gain, or perhaps simply a market gain?
Excellent questions, with the subtext being where the needle points on strategy versus tactics.
Humans
Neff then touched on other risks, such as the loss of human agency in decision-making, and the ability of AI to monitor workers and gather data about them – and, implicitly, perhaps begin predicting their behavior:
There are uses of technology that we would want, as a society, to see redlined. We would want to make sure that these technologies are used for the benefit of employees and employers. For example, data collected by an employer being used by a health insurance or life insurance company would feel like an overreach.
All of these issues and more would be covered by the draft Artificial Intelligence (Regulation and Employment Rights) Bill. So, Neff emerges as a key figure in mapping out the future – assuming, of course, that the Bill is adopted and succeeds.
So, does Neff have any concerns about the Bill’s effectiveness in ensuring a fair and balanced social contract between employers and workers? She said:
This is something for everyone to have a conversation about. In the absence of some of the definitions and guardrails that are proposed in this Bill, you would have a new set of imbalances in that relationship. Employers would be the ones to decide unilaterally on the use of these systems. The right to query, audit, examine, or review the decisions of such systems would not be guaranteed for the people who work under them.
Some of the questions that I, as a social scientist of tech, have about these hiring and management systems, in particular, concern the extent to which they meet the legal requirements for unbiased and fair treatment of employees under existing regulations – on equality, for example.
So, with this legislation we are saying, ‘Let’s ensure that there are more eyes on that question, not fewer’.
My take
Good stuff, and I commend Professor Neff for her principled stance.
However, another issue is also critical – do organizations have the skills to seize AI’s transformative potential in the first place, to ensure that we reach the best and most strategic outcomes?
On that question, the answer from survey after survey has been ‘no’ [see diginomica, passim] – or at least, not at the moment. A majority of organizations lack the requisite skills, not just in technical terms, but also in areas that have compliance implications, such as security, privacy, copyright, and data protection.
Skills will be the real battleground, therefore. Not just among future workers – where the ability to think critically, move sideways, and be excellent communicators will be at a premium – but also among employers.
So, for business leaders the mantra must be: Don’t be dumb. Don’t see AI as a silver bullet that will solve all of your organization’s problems. Find out what your problems are first, then deploy AI to help humans solve them.
The full text of the proposed Bill is here: https://www.tuc.org.uk/research-analysis/reports/artificial-intelligence-regulation-and-employment-rights-bill