Risky Business or High Reward?
05 May 2023 HR AI
Story by
Sophia Zand, Associate, Wilsons Solicitors LLP
A form of artificial intelligence (AI), ChatGPT, has recently been making headlines. ChatGPT is an AI chatbot that mimics human conversation and can answer questions, write essays, act as a personal assistant and even write code – all within seconds. It was released to the public free of charge in late 2022, which has bolstered its meteoric rise.
But how are ChatGPT and other forms of AI being used within the workplace, and what are the risks?
How can AI be used in the workplace?
Many large-scale organisations are using AI to help develop their businesses, save time and maximise efficiency. Your organisation may already use or plan to use AI to respond to customer enquiries, track customer data or for research and processing purposes.
Even if your organisation does not use AI, your staff members may not be taking the same approach. Given its wide and possibly endless capabilities, ChatGPT can be used in almost all industries. It can draft letters to customers, prepare lesson plans for teachers and produce contractual terms for suppliers. To test its capabilities, I asked it to write this article – and it did a fair job of it.
What are the risks in using AI/ChatGPT?
The six most pressing risks are as follows:
- Inaccuracy. AI can make mistakes in the information it produces. ChatGPT cannot reliably distinguish fact from fiction, and because it does not provide its sources, its answers can be difficult to verify.
- Out-of-date information. ChatGPT currently draws only on data gathered up to September 2021. It does not know of information, events, news, laws or regulations that came after this date, which may result in gaps in the chatbot’s “knowledge”.
- Copyright issues. As ChatGPT does not reveal its sources, there is a possibility that its responses may include copyrighted information.
- Data protection. There are concerns in Europe as to whether ChatGPT complies with the General Data Protection Regulation (known as GDPR), and both Italy and Spain are investigating. Whilst the UK is employing a “pro-innovation approach to AI”, the ICO has reminded organisations using AI that they must ensure they comply with their data protection obligations. The ICO has published guidance on AI and data protection, along with a blogpost on the subject.
- Inadvertent bias and discrimination. In some instances, AI can give biased responses which, if relied upon, could result in discrimination. Organisations should ensure that any decisions and actions made by AI are thoroughly checked.
- Confidentiality. ChatGPT learns from each conversation, and it is unclear whether these conversations are “secure”. This could give rise to significant legal risks if a user has disclosed confidential information to ChatGPT.
What can employers do to reduce the risks?
ChatGPT is an extraordinary tool, and employers of all sizes will need to consider their stance on its usage as the technology continues to develop.
- Carry out a risk management review of the AI tools in use. How do they work within your organisation’s operations? Do they meet your objectives? Are there clear guidelines in place? Do the tools comply with data protection laws?
- Decide if staff are permitted to use AI tools or ChatGPT in the workplace. Carry out a risk assessment and evaluate:
- if staff are accessing ChatGPT at work;
- if its usage complies with your data protection and privacy policies, or in any other way might compromise your commercial (or other) activities; and
- if it raises any cyber security risks.
- Tell staff your decision. Some employers (reportedly including JPMorgan and Accenture) have decided to ban ChatGPT entirely. If you have decided to allow staff to access and use ChatGPT, draft a policy setting out clear guidelines on:
- when it can be used, e.g. for planning purposes only;
- what information can be shared with ChatGPT, making clear that the sharing of confidential information, trade secrets and personal data is prohibited;
- the need to ‘fact check’ ChatGPT’s responses; and
- the requirement to comply with your existing data protection and privacy policies.