Navigating the Impact of AI on Work
20 March 2024

Story by
Patrick Brodie, Partner, RPC

Patrick Brodie, Partner at international law firm RPC, on the challenges, opportunities, and the human touch in the time of AI.
Our fear of job losses because of the influence of technology and automation, including AI, has loomed large since the 1960s. Over the last year its intensity and focus have increased; the media has echoed and amplified those fears. Tech entrepreneurs fuel our angst. Elon Musk tweets that AI will be ‘smarter than any single human next year’. And, if our anxiety responses are not heightened enough, we have the observation that ‘AI will probably be smarter than all humans combined’ before this century’s third decade has closed. If the technology is more capable than us, what roles will survive and, to thrive, what responsibilities will be required?
Academics have warned of the decline of routine, rules-based and process-driven roles. We have read the literature and heard the soundbites, but concerns have sat in the background. The voices remained muffled, in part, because the answers to these questions remain uncertain and uncharted. And for many of us uncertainty is a place best avoided. However, over the last year or so, the call that tasks and processes will be replaced by AI has sounded more clearly. And it is because of this increasing clamour, echoing off the hard edges of the technological evolution that is with us, that we are being forced to concentrate on and address these profound changes. To survive or, more optimistically, to thrive, organisations must pay heed to the academic studies that tell us that, with the right combination of technologies, most tasks and roles are susceptible to automation. What work means will change.
Routine and repetitive
Our thoughts typically turn to the routine and repetitive tasks that technology might absorb, freeing humans for more challenging and rewarding work; work that gives purpose and meaning. The image is a warm one, advancing a new working utopia. We can put to one side the drudgery of repetition. The aim is that, subject to training, testing, reinforcement review and intervention, repetitive and routine data-dependent tasks will require decreasing levels of human engagement. However, it will not simply be the routine and discrete tasks that are capable of automation. Each of us has played with generative pretrained transformers – ChatGPT being the poster child. We are now increasingly aware that, with the rise of generative AI and increasingly sophisticated unsupervised learning, many non-routine, creative and knowledge-based tasks – which, until recently, were seen as the preserve of humans and out of reach of the machines – will also be capable of AI replication.
Artificial neural networks and machine learning, with their capacity to mine and synthesise huge datasets, will accelerate automation, increasing product and service efficiency and, ultimately, quality. Tasks will be replicated by technology. But even as automation advances in relation to data-dependent tasks (setting to one side, momentarily, the classic bottlenecks for AI intervention – social intelligence, creativity and human perception), there remains the need for human intervention, even for mundane tasks, to remove misdescriptions, errors or biases.
Social, political and economic challenges may emerge if roles and tasks are removed without being replaced. Indeed, this risk has catalysed a return to the debate on universal basic income. But for now, as a society we are, broadly and subject to improved regulatory safeguards, accepting of AI delivering tasks previously undertaken by people.
High-risk AI
However, we are far less comfortable with AI that decides who works or to whom an opportunity is offered, whether via recognition technology – facial or voice – or emotional interpretation. Our apprehension becomes increasingly stark when we reflect on the potential use of AI models that seek to make these personal decisions, especially if the model’s development has not been shaped by social and behavioural scientists.
The adoption of any high-risk AI must have the benefit of a clear impact assessment to consider its risks and potential adverse consequences, including the consequences of failure. At the very least the model must be ethical, including being explainable, fair and robust. Has the AI model been reviewed and assessed to remove systemic bias? One means to achieve this is to know that the data and variables on which the AI was trained, tested and verified are complete, representative of all people and debiased.
Any high-risk AI model should always have human intervention and oversight to ensure its efficacy and reliability; it should augment human decision-making not replace it.
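One widely used statistical screen for the systemic bias mentioned above is the ‘four-fifths rule’ drawn from US employment-selection guidance: a group whose selection rate falls below 80% of the most-favoured group’s rate is flagged for human review. A minimal sketch of such a check follows; the group names and figures are invented for illustration and the rule is a screening heuristic, not a legal determination.

```python
# Illustrative four-fifths (80%) rule check for adverse impact in an
# AI-assisted shortlisting process. All data here is hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total applicants)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag any group whose selection rate is below `threshold` (80%)
    of the highest group's selection rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical screening outcomes from a shortlisting model
outcomes = {
    "group_a": (40, 100),  # 40% selected (highest rate)
    "group_b": (25, 100),  # 25% selected; 0.25/0.40 = 0.625 < 0.8
}
print(adverse_impact_flags(outcomes))  # {'group_a': False, 'group_b': True}
```

A flag here does not prove bias; it marks the model’s outputs for exactly the kind of human intervention and oversight described above.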
Changing lives, taking jobs
Against this backdrop of rapid, faceless technological change, the absence of regulation, economic uncertainty and the apparent pursuit of profit, the fear of many (especially if a positive counter-vision is not provided) is that AI is all-consuming in its ability to change lives and take jobs. The language of existential risk is prevalent. Andrew Bailey, the Governor of the Bank of England, has looked to change this narrative by advancing a more optimistic outlook, observing that throughout history economies have adapted and created new roles. He might have had in mind that over the course of the second industrial revolution new jobs emerged to replace those lost to mass production: in 1900 over 40% of the workforce was employed in agriculture; now it is 2%.
Impact on employees
As AI becomes an increasing feature of a company’s operational capabilities, workers will want to know what this means for their future. If employees don’t understand this (especially if they don’t have control over adoption and effects), then anxiety about long-term employment and economic insecurity increases.
In turn, leadership teams will be worried about the mental health of their people. There will be many reasons for this, including the following three.
First, if the impact of AI on an organisation is not understood by its workforce, this risks building communal vulnerability with all its very human negative side effects – anxiety, fear, distraction, anger.
Second, if AI removes the routine tasks (with an opportunity, dare it be whispered, to slow down) with the consequence that roles become more complex and ever more challenging, when does a person reflect and rest? And without that rest, how do employees keep going at this increasing pace?
Third, if companies maintain their hybrid and flexible working arrangements, supported by AI and technology – and there are very good reasons why they should, but that is a discussion for another day – there is a risk of further isolation for some.
However, there is hope. The solution is within us. Our unique human capacity for empathy, sympathy and kindness (even directness) will become more important, especially for leaders. Leaders will increasingly need to rely on their emotional intelligence to communicate a clear vision of the future, emphasising ambition while appreciating the concerns of their workforces.
A way forward – inquisitive and not fearful
All of us will look to navigate the opportunities and challenges, both personal and professional, posed by new ways of working. We cannot deny a reality that will force our organisations to alter their structures, operational capabilities and processes. To do otherwise is to sit alongside King Cnut with the tide coming in. Companies will look to build and promote a culture that encourages their people to embrace the potential offered by AI. The following three steps might help:
First, when teams are exploring developmental AI projects, before any adoption, it helps that they recognise that both they and the AI technology may fail. That is okay. Indeed, it’s the very key to experimentation leading to innovation. If sensible guardrails are put in place, the risks are mitigated.
Second, AI has the power to make work and the experience of work better. We can only achieve that as individuals by adopting the view that we want to learn and immerse ourselves in its opportunities and possibilities. This takes time: the acquisition of AI knowledge is a continuous exercise not a single event.
Finally, there’s the element of embracing our inner child – be inquisitive rather than fearful. Or put differently – learn to climb trees, to paint, to communicate. So, we should all test, challenge and experiment with AI. This will help us understand the art of the possible and in turn discover better ways of working with AI.