AI-Based Cyber Attacks Target HR Teams
23 October 2023 HR AI
Story by
Robert O’Brien Chief Evangelist and Founder of MetaCompliance
For Cyber Security Awareness Month, Robert O’Brien, Chief Evangelist and Founder of MetaCompliance, discusses safeguarding your HR department in an age of incessant threats.
Imagine you had the opportunity to hire a Talent Management professional whose qualifications were unparalleled. They possess not only an in-depth understanding of HR systems, people management processes, and recruitment strategies but also an extensive knowledge of sociology, behavioural economics, and a myriad of other skills. Enter the era of AI, the epitome of this visionary professional.
Whether you embrace it or resist it, AI is an indomitable force that is here to stay, and its impact is set to dwarf the transformative influence of the internet in the workplace. The world of tomorrow, shaped by AI, will make our current interactions with technology seem as rudimentary as child’s play.
Most organisations are currently grappling with the challenges posed by employees incorporating AI tools, such as chatbots and GPTs, into their daily work routines. Among the wide array of issues these tools raise, two critical concerns rise to the surface as organisations navigate AI adoption: PR vulnerability and the fallibility of AI responses.
AI in the hands of cybercriminals
On the flip side, cybercriminals exhibit no hesitation in embracing AI and are eagerly leveraging this technology to amplify their assaults on organisations, motivated by both mischief and financial gain.
For specific departments in the organisation, this enthusiasm for AI-related cybercrime has led to a surge in targeted attacks, with the Human Resources department a particular focus. Cybercriminals are harnessing the wealth of knowledge provided by AI to impersonate HR personnel, their trusted suppliers, and other high-ranking executive functions. This deception enables them to infiltrate confidential data stores and exploit the authority of the HR department, often manipulating privileged interactions within the organisation for their deceitful ends.
Traditionally, cyber threats involved remote hackers employing social engineering techniques or leveraging vulnerabilities in outdated software systems. While these methods are still prevalent, the advent of AI technology has opened a Pandora’s box of possibilities for cybercriminals. This evolution is driven by the increased sophistication of AI, allowing it to automate and enhance the effectiveness of various cyberattack vectors.
The potential consequences of AI-driven attacks are nothing short of alarming. We’re no longer dealing solely with stolen passwords or isolated cyber incidents. Instead, we face a multifaceted threat landscape that can have devastating repercussions for organisations and individuals alike. Among these consequences, three aspects loom large: data breaches, reputational damage, and legal implications. Each poses a unique set of challenges for HR teams and the organisations they serve.
In an August 2023 survey of 205 IT security decision-makers, conducted by a prominent pan-European cyber security organisation, it became evident that mounting concerns surround the use of AI, with deepfakes taking centre stage. A staggering 68% of respondents expressed apprehension about cybercriminals exploiting deepfake technology to breach their organisations, skilfully circumventing people’s natural defences.
But here’s the stark reality: hackers now have their own AI arsenal, and it goes by the name of WormGPT. Drawing from a vast corpus of human-generated text, WormGPT crafts content that is remarkably convincing, enabling it to masquerade as a trusted figure within a business email system.
Remarkably, hackers can gain access to WormGPT by subscribing through the dark web, granting them entry to a web interface where they can input prompts and receive responses that closely mimic human communication. Primarily designed for phishing emails and business email compromise (BEC) attacks, the chatbot, in tests conducted by researchers, proved able to draft a persuasive email, seemingly from a company’s top executive, pressuring an employee into paying a fraudulent invoice.
Protecting HR teams
Confronted with these ever-evolving threats, the question is: what can HR leadership do to shield their teams?
First and foremost, it’s crucial to acknowledge that HR departments stand as prime targets for cybercriminals. These departments manage personal data and hold confidential information that is immensely valuable to malicious actors. Moreover, other parts of the organisation often take their cues from HR, making it a tempting gateway for cybercriminals to exploit their access to the broader network.
The first step in fortifying your HR team against these threats is to initiate a dialogue with HR team members. Educate them on how they are being specifically targeted and empower them with the knowledge needed to thwart these scams. It’s vital that this training is tailored to your organisation and, ideally, to the HR department itself, highlighting the unique threats the team faces and what they can do to avoid them.
To make this training even more impactful, ensure it’s delivered in their native language. By doing so, you reduce resistance and enhance engagement, making it a vital component of their cyber security awareness. Ultimately, this personalised approach to security awareness is your best defence in safeguarding your HR department from the relentless tide of AI-driven cyber attacks.