Time to Act
11 March 2024 HR AI
Story by
Michelle Moody & Merve Gozukucuk-Ugurlu / Managing Director & Senior Manager at Protiviti
Michelle Moody, Managing Director of Data & AI, and Merve Ugurlu, Senior Manager of Data & AI at Protiviti Ltd, discuss the impact of the new EU AI Act on HR policies and procedures.
The EU AI Act was initially proposed in 2021 and has since moved through various stages of drafting and discussion across EU member states. In December 2023, after a marathon set of talks, rules were agreed as part of the Act to govern AI systems, including safeguards on usage and, in some cases, limitations on adoption. Complaints can be raised, and fines can be imposed on organisations that violate the rules. The legislation is not expected to take full effect until 2025, which gives organisations time to study the rules and decide how they intend to adopt AI safely.
We see similar regulatory attempts in the US to secure the safe development and adoption of AI. In 2021, the National AI Initiative Act was enacted at the federal level to coordinate AI research and development across federal departments and agencies and to ensure the ethical and trustworthy development and use of AI in the public and private sectors. A further step toward regulation came in 2023, when President Biden published an Executive Order on the safe, secure, and trustworthy development and use of artificial intelligence. This Order tasks federal agencies with identifying and managing the risks of AI initiatives and technologies as well as promoting their benefits. These agencies will develop new standards and rulesets to address the implications of AI in the private and public sectors whilst boosting research and development, and the Order also calls on Congress to pass a federal privacy bill. In addition, the National Institute of Standards and Technology (NIST) published its holistic ‘AI Risk Management Framework’, introducing four core functions – Govern, Map, Measure, and Manage – as the building blocks of the AI lifecycle. This is expected to help create the guardrails organisations need to innovate safely.
EU and US similarities
A closer look reveals similar principles and considerations embedded in the legislation and artefacts produced by policymakers in both the EU and the US. Key pillars – transparency about the scope and aim of AI, the privacy rights of individuals, explainability of the logic behind the algorithm, prevention of bias in the training data and/or in outcomes, information security, and ongoing monitoring and controls – are promoted and prescribed on both sides of the Atlantic. As AI takes hold of the market, it will be interesting to see how regulation develops over time.
Humans still required
In the context of the Act and HR use cases, there are areas where AI is becoming more prevalent, for instance in recruitment, where CVs are vetted or initial candidate screenings are automated. Generally, there will be rules around transparency and around ensuring that the tooling, and the machine learning rules applied over time, are fair and explainable. Companies investing in these types of tools will therefore still need humans responsible for reviewing and monitoring the datasets used to train the models, and for checking the outcomes of the vetting or screenings to ensure they are accurate, fair, explainable, and non-discriminatory.
Privacy matters
AI tools could be used for monitoring employee performance or behaviours, which could potentially violate an individual’s data privacy rights. Processes and procedures would need to be introduced to ensure decisions and measurements are ethical. With this in mind, HR policies would need to be updated to provide guidelines on how those tools are used, so that monitoring is transparent to the employee. Individuals might also be able to ‘opt in’ or ‘opt out’ of the monitoring, depending on what is being measured. Equally, an individual may be able to request the outputs to understand their personal measurements and how the rules were applied to them. There are still grey areas to be ironed out in this space.
The same logic will apply to areas like employee training and development, or to automated decisions about promotions or terminations, for example. Organisations adopting AI tooling will need to update policies to ensure that the use of AI models is transparent, fair, and ethical. There should be clear processes in place for raising concerns or requesting details on how the models are applied to an individual. Accountability for model accuracy sits with the organisation and with the human employees who are ultimately responsible for managing the data inputs and outputs. With this in mind, HR business units adopting AI tooling should undergo regular training on the models and enhance their data literacy and critical thinking skills.
Change and evolution
As AI models are adopted over time, roles in HR, as well as in other business units deploying models, will become more data focused, with data ownership and stewardship at the heart of those roles. Clear policies, processes, and procedures will need to be put in place across HR to make sure that employees and candidates are aware of how their data will be processed, and that there is a clear route for raising concerns and for opting in or out where applicable. HR functions will need to provide ongoing data literacy training covering data ownership, privacy, and retention, as well as the ethics and regulation of the AI models being deployed, so that outcomes are transparent, explainable, and fair.
This is a fast-moving topic, with AI adoption at an all-time high, new technologies emerging, and a regulatory landscape that is still evolving. It will continue to develop as AI becomes more commonplace across organisations. A key area for organisations to focus on is their data estates. If an organisation has issues with its data in terms of quality, completeness, and so on, the best thing it can do is invest in data governance, data ownership, and a data literacy programme. AI models are trained on datasets and the patterns within them, and people at all levels of the organisation will need to understand how they work in order to innovate safely and enjoy the benefits of AI.