Q&A: AI in HR – The Perfect Balance
26 May 2023 HR AI
This month The HR World’s webinar, sponsored by Cognisess, took careful aim at the rise of Artificial Intelligence.
With three specialists coming together to discuss developments and challenges this was a chance for HR to get to grips with this fast emerging technology. The jury is still out on precisely how AI will impact businesses, jobs and indeed life in general, but amid the speculation the HR function must be informed and prepared to take action.
You can view the entire webinar here – and you’ll find the discussion is back-to-back with learning points and considerations for HR. Our panellists were Sue Turner OBE, Founder and Director – AI Governance Limited, Jared Skey, Chief Growth Officer – Cognisess and global learning and development leader Lloyd Dean. Below they tackle three further questions raised by our audience on the day that we didn’t have time to address:
1: How can HR and business move beyond a fear-based level of engagement into a world where we fully embrace AI as a beneficial technology?
Lloyd Dean: I love to ‘AIM’: Acquire insights, Integrate technology and Monitor outcomes. It doesn’t matter what the technology is; AIM becomes a framework to innovate and try new things in this area. Start with ‘acquire’: this could include team members doing research, finding ways AI could improve processes or even forecasting change. It’s a good place to start, and it begins to build an understanding of the technology and the benefits it can bring to the organisation.
Sue Turner: Fear is often rooted in lack of confidence, so the first step is to invest in your own development so you understand what AI can (and can’t) do. Secondly, I advise clients to look at the data you have in the organisation and, concurrently, think about what business problems you might be able to solve if only you could predict, personalise or automate some aspect of what you do. From this you can draw up a list of potential projects and think about the risks, ethics and governance issues they may bring with them. Finally, pick one of your ideas – I suggest starting with something not too complex – and try using AI. We know that many projects fail, so set your goal for this project to be learning rather than demanding it be an out-and-out success.
Jared Skey: It comes back to transparency. And this is broader than AI adoption. So many business processes are shrouded in secrecy: How do I get promoted? How do I change roles? How do I share ideas? How do I create initiatives? Employees are often invited not to rock the boat – to just trust ‘the management’. Organisations that adopt a command-and-control style have embedded fear as an intrinsic part of the model.
All businesses establish a psychological contract with their employees. The nature of this contract, for better or worse, sets the tone for the relationship between employer and employee. If it is not based on mutual trust and respect, employees will be fearful of change. The fear that prevents wholesale, enterprise adoption of these technologies is just a proxy for a deeper mistrust.
All AI technologies are reliant on data. To get past the trust blockage, to get employees to willingly share their data, organisations must answer the question: what’s in it for me as an employee? Organisations need to show where the value exchange is. Only then can we really set AI to work.
2: You mentioned the need for an ethical and moral discussion around the use of AI by HR. Do you think HR is ready to have – or lead – these kinds of discussions?
Jared Skey: We are in uncharted territory.
Because we are talking about people decisions being influenced by AI, the natural home for this is in HR. But the conversation is far broader than HR. This is a board level conversation. The question is: How do we want to treat our people? The answer must be defined by business leaders and supported by HR. The difficulty comes when business leaders are unaware of the technologies that are available and the moral/ethical implications of using, or not using, them.
HR are the pioneers, the explorers, the people who will have first contact with these technologies. HR must be prepared to bring the technologies and the implications to the table. It is then a business decision about whether or not to adopt them, and consequently how this decision defines what you stand for as a business.
This is too big a conversation for HR alone, and it emphasises the need for greater integration between HR and the business, resulting in deeper alignment of the corporate and HR strategies.
Lloyd Dean: The challenge is going to be HR’s confidence to talk competently about AI technology. I think – long-term – there’s an argument to blend aspects of the IT department with HR. This would help IT understand and consider HR perspectives, and vice versa.
Sue Turner: It’s a good question! I don’t come across many organisations that are totally ready to talk and think about ethics, which is not surprising: the education leaders have historically received very rarely featured training in business ethics (though this is changing for young people currently undergoing leadership training). That’s one of the motivations for creating AI Governance – to help fill the gap in leaders’ education and practices. Having said that, I do think HR leaders are often in a better position than other leaders to raise issues over the potential impacts of AI. One of the core principles of AI governance is to identify instances of ethical choice. Many of these occur when AI impacts people, internally and externally, so for leaders who are well versed in thinking about people – DE&I, cultural impacts and so on – it is not such a big step to initiate discussions with colleagues that explore how what you might do with AI could affect people (employees, customers, wider stakeholders and society at large).
There’s a real example from Xerox, where data analysis found that employees with long commutes tended to end their employment sooner than employees with shorter commutes. Management could have decided not to recruit people with longer commutes, but they were wise enough to spot that, because the company was in an affluent area, not hiring people who lived further away would amount to discrimination against lower-income people. They wanted to give lower-income people opportunities, so even though they wanted to reduce their attrition rate, they did not include commute distance in their hiring criteria. These new AI tools are powerful; we need to use them with wisdom and integrity.
If you are interested in receiving training about AI in HR there are two bite-sized courses on the AI Governance website.