AI, LLM, and Ethical HR

Quinto Content Team
September 26, 2023

How do we protect the “human” in human resources as our discipline is flooded with machine intelligence?

Since large language models (LLMs) and other types of generative artificial intelligence (AI) burst onto the scene, the headlines have alternated between awed praise and apocalyptic warnings. Given the unprecedented power of these technologies, both reactions are equally warranted.

For HR (the only profession whose name contains the word “human”), the prospect of assigning not only repetitive tasks but also cognitive ones to machine intelligence rightly calls for caution. After all, HR is directly responsible for people’s careers and livelihoods.

In this article, we’ll examine the key ethical risks and considerations for HR professionals who plan to integrate AI into their practice. 

AI is already changing HR

AI—and generative AI in particular—is still fairly new, but it has already made a big impact on HR. The World Economic Forum estimates that there are over 250 different AI-based HR tools on the market, and HR managers are eager to implement them. According to the Human Resources Professionals Association (HRPA), 12% of HR professionals are currently using AI to complete HR functions, while another 13% plan to do so. Of that group, 63% are using or plan to use generative AI to write job descriptions.

Despite the potential risks, HR managers are generally upbeat about AI. In a recent Conference Board survey, 65% of CHROs reported that they believe AI will have a positive impact on the human capital function within the next two years.

If you're a talent professional who already uses AI in your processes, or if you plan to adopt AI technology in the future, here is what you need to know.

Bias in the machine

AI technologies are already helping HR teams save time and increase efficiency. AI can sift through hundreds or even thousands of resumes in seconds to identify high-quality candidates, for example. It can reduce the impact of personal preferences and biases in hiring, assessment, and promotion. And it can analyze and spot patterns in workplace data that humans would almost certainly miss.

But in other cases, AI has actually reinforced human bias. One of the earliest initiatives, an Amazon program that used machine learning to evaluate candidate resumes, was quickly scrapped because it discriminated against women. The problem was that the machine had been trained on a decade’s worth of resumes from the company’s male-dominated workforce.

HR technologies have also been found to discriminate against disabled workers by labeling their performance "not-standard," and to exclude qualified but disadvantaged candidates (such as immigrants, veterans, or neurodiverse applicants).

Unfortunately, whether a biased process originates with a human or with a machine, the company is liable for any harmful effects.
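To make this risk concrete, here is a minimal sketch of one common screening check: the "four-fifths rule" for adverse impact, a rule of thumb drawn from U.S. employment guidelines. The groups, counts, and threshold below are purely illustrative, not drawn from any real system or from the cases above.

```python
# Illustrative adverse-impact check using the "four-fifths rule": if any
# group's selection rate falls below 80% of the highest group's rate, the
# screening process is flagged for human review. All figures are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

# Hypothetical pass-through rates from an AI resume screener
groups = {
    "group_a": selection_rate(selected=48, applicants=100),
    "group_b": selection_rate(selected=30, applicants=100),
}

highest_rate = max(groups.values())
for name, rate in groups.items():
    impact_ratio = rate / highest_rate
    status = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{name}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} [{status}]")
```

A check like this doesn't prove or rule out discrimination, but it gives HR teams an early, auditable signal to investigate.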

Combating the potential for bias in AI 

Ultimately, the enthusiasm about AI needs to be balanced by an awareness of the harm it can cause when its use is not governed by ethical principles. HR leaders must protect and further the best interests of the employees they manage, and to do so, they need to closely examine what AI should do as well as what it can do. 

Here are some of the ways to bring greater transparency and accountability to this rapidly evolving technology. 

1. Explainable AI

AI can deliver impressive results, but it also has a reputation for being a "black box." In other words, it's not always clear exactly how those results were achieved. What sources were consulted? What validation processes were applied? If you can’t see the journey, it’s hard to fully trust the destination. 

“Explainable AI” is a term used to describe a set of tools and frameworks that helps developers and users break open that black box so that they can see and understand the processes, defend the outcomes, and troubleshoot problematic areas. 

As AI becomes more sophisticated and its calculations become more complex, the need for transparency increases, especially when the results are used to make decisions about people’s careers.  
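To illustrate the idea in miniature (a generic sketch, not how Quinto or any particular vendor implements explainability), a transparent model can report how much each input contributed to a score. Every feature name and weight below is hypothetical.

```python
# Minimal sketch of one explainability technique: per-feature contributions
# in a linear screening score. All names and values are hypothetical.

candidate = {"years_experience": 6.0, "skills_match": 0.8, "certifications": 2.0}
weights   = {"years_experience": 0.05, "skills_match": 1.2, "certifications": 0.1}

# Each contribution is visible, so the final score can be explained and audited
contributions = {name: weights[name] * value for name, value in candidate.items()}
score = sum(contributions.values())

print(f"screening score: {score:.2f}")
for name, contrib in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {contrib:+.2f}")
```

Real explainable AI toolkits handle far more complex models, but the goal is the same: make the path from input to output visible.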

HERE’S AN EXAMPLE: Quinto uses big data and AI to generate job description content, a process similar to the one used by LLMs such as ChatGPT. Like ChatGPT, Quinto can analyze, contextualize, and interpret content at a granular, word-by-word level rather than sentence by sentence. This ensures greater precision, but it also requires more complex calculations. To monitor this process and ensure it produces outcomes that we and our customers can feel confident about, we document every step, including how the algorithm was trained, the model it creates, and the results it returns.

2. Human mediation

By scanning and analyzing large volumes of data automatically, AI enables HR teams to save time and resources. But AI still needs humans to guide the learning process, monitor the quality of the output, and adjust the process as needed. This "human-in-the-loop" process is essential to the quality of the data today and the quality and sophistication of the AI engine over time.
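As a simple illustration of the pattern (a generic sketch, not Quinto's implementation), a human-in-the-loop system can route low-confidence AI output to a reviewer instead of publishing it automatically. The threshold and confidence values here are hypothetical.

```python
# Minimal sketch of a human-in-the-loop gate: AI-generated drafts below a
# confidence threshold are queued for expert review rather than auto-approved.

from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float  # the model's self-reported confidence, 0..1 (hypothetical)

REVIEW_THRESHOLD = 0.85  # hypothetical cutoff, tuned by the team over time

def route(draft: Draft) -> str:
    if draft.confidence >= REVIEW_THRESHOLD:
        return "auto-approve"  # still sampled periodically for audit
    return "human-review"      # sent to a subject matter expert

print(route(Draft("Job description draft...", confidence=0.91)))  # auto-approve
print(route(Draft("Job description draft...", confidence=0.62)))  # human-review
```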

HERE’S AN EXAMPLE: The algorithms used in Quinto are carefully trained and extensively reviewed by senior subject matter experts. This ensures that the outputs of machine processes are guided and monitored by specialized human expertise. Quinto is also designed to support internal collaboration, enabling users to validate and fine-tune AI-generated content with input from job incumbents, managers, and experts.

3. Data quality

AI actively learns from the data it ingests. If that data is of poor quality, incomplete, or taken from biased sources, the output will be similarly compromised. And the quality will deteriorate further over time as the AI continues to build future analyses on a faulty data foundation. To ensure the quality of the output and of the learning process, AI needs access to the most complete, inclusive, and representative data sets available. This is sometimes referred to as "big data": data sets so large, varied, and dynamic that traditional data processing methods can't manage them.
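One simple way to operationalize this (a generic sketch with made-up categories and targets, not any vendor's actual pipeline) is to compare the composition of a training corpus against a target mix and flag gaps for augmentation:

```python
# Minimal sketch of a representativeness check: compare each category's share
# of the training corpus against a target distribution. All figures are made up.

from collections import Counter

corpus_labels = ["engineering"] * 700 + ["healthcare"] * 200 + ["trades"] * 100
target_share  = {"engineering": 0.4, "healthcare": 0.3, "trades": 0.3}
TOLERANCE = 0.05  # acceptable deviation from the target mix

counts = Counter(corpus_labels)
total = sum(counts.values())
for category, target in target_share.items():
    actual = counts.get(category, 0) / total
    if abs(actual - target) > TOLERANCE:
        print(f"{category}: {actual:.0%} vs target {target:.0%} -> rebalance or augment")
```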

HERE’S AN EXAMPLE: Quinto integrates job data directly from one of the largest job-posting sites in the world to create a robust data set for the algorithm to learn from. The system ingests approximately 30,000 job posts a day (roughly 10 million a year). In addition, the Quinto team reviews these sources to ensure the data is diverse and representative, and augments them manually where necessary.

4. Governance frameworks

When AI is poorly managed, it has the potential to harm the humans it was intended to help, especially when it is entrusted with vital functions such as recruiting, promoting, and compensating talent. To minimize the risk of harm, HR leaders and technology vendors are establishing formal governance frameworks to guide the development and oversight of the AI technologies they create and use.

There are many national and international standard-setting bodies that have developed frameworks, guidelines, and best practices for AI, and the Quinto team consults several of them as we refine our own AI guidelines.

5. Continual oversight

AI is like a living organism. It learns and evolves over time, and the data sets it analyzes also change. Because of this, the quality of the outcomes can drift (degrade) over time if the system isn't checked regularly. When it comes to HR, many influential factors can change rapidly, from the economy to the job market to role requirements. AI is by no means a “set it and forget it” technology, and it’s crucial for humans to monitor the outcomes, collect feedback, and adjust the algorithms on an ongoing basis.
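In practice, that monitoring can be as simple as comparing recent output-quality scores against a frozen baseline and alerting when they diverge. Here is a minimal sketch; the scores and threshold are hypothetical, and real systems typically use more robust statistical tests.

```python
# Minimal sketch of drift monitoring: compare the mean of recent quality
# ratings against a baseline window captured at launch. Figures are hypothetical.

from statistics import mean

baseline_scores = [0.86, 0.84, 0.88, 0.85, 0.87]  # reviewer ratings at launch
recent_scores   = [0.79, 0.76, 0.81, 0.74, 0.78]  # ratings from the latest cycle
DRIFT_THRESHOLD = 0.05  # maximum tolerated drop in average quality

drop = mean(baseline_scores) - mean(recent_scores)
if drop > DRIFT_THRESHOLD:
    print(f"average quality dropped by {drop:.2f}: retrain, re-review, or roll back")
else:
    print("output quality within tolerance")
```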

HERE’S AN EXAMPLE: The Quinto technical team regularly collects feedback from customers as well as customer support, customer success, and implementation staff to monitor the quality of job descriptions, competency profiles, and career paths generated by AI. System outputs are also reviewed by senior HR consultants at HRSG, our parent company.

Ask AI tech vendors these questions

If you currently use AI in your HR processes or plan to do so, make sure your organization is investing in technology that is ethically designed and maintained. By asking vendors these questions, you can flag potential AI risks and protect yourself, your employees, and your organization from the effects of unethical AI.

  • Have you ever identified evidence of bias in the results generated by your AI? If so, how did you address it?
  • How is the data used to train your AI generated? Is it created by experts in the field? What steps do you take to ensure it doesn’t promote personal biases?
  • How do you ensure that the results your AI generates can be interpreted by and explained to developers and users?
  • How are human review and monitoring processes built into the management of your AI?
  • How do you ensure that the data sets ingested by your AI are representative, complete, and reliable? Are there plans to review or expand the data sets at any point?
  • Do you have an AI governance framework in place? How and when was this framework developed? 
  • How frequently do you check the quality of the outputs generated by your AI? What does this review process look like?

Creating a virtuous circle with AI 

AI brings exciting capabilities to HR, especially when it comes to one of the most time-consuming and fundamental processes: creating effective job descriptions. 

However, it’s easy for AI to veer off course unless it’s carefully managed and monitored. AI is a powerful tool, which means its impact can be powerfully beneficial or powerfully destructive, depending on how it’s used. Yet according to the HRPA, while 33% of HR organizations have plans to use generative AI, a mere 0.5% have a formal policy addressing its use in the workplace.

As HR teams rely more frequently on AI—and especially on generative AI and LLMs—HR leaders need to understand and implement the checkpoints and processes that make AI more transparent, accurate, and accountable.

See how easy it is to create validated, inclusive, impactful job descriptions.

Request a Demo