What is Ethical AI and Why It Matters in HR Tech

April 3, 2023

by Anna Wang

There are many different definitions of ethical AI, but they can be summarized as “a system of moral principles and techniques to inform responsible development and use of AI technologies.” For example, Google’s AI principles include “Be socially beneficial” and “Avoid creating or reinforcing unfair bias.” We should all agree that this is important. So how exactly does a vendor or system avoid creating or reinforcing unfair bias?

The stakes for this are very high in the HR technology space. AI in HR tech is already popular and growing fast; in a 2021 survey of HR managers at large U.S. companies, 60% said their organizations were already using AI for talent management and nearly 82% planned to adopt more AI tools over the next five years. Organizations using AI tools need to be confident that those tools won’t introduce bias or create adverse impact. Adverse impact, also known as “disparate impact,” refers to employment practices that appear neutral but have a discriminatory effect on a protected group. Organizations have legal and ethical obligations to avoid discrimination in HR, so any AI tools used for these purposes must meet very high ethical standards.

So how can this be done? There are several ways that HR tech vendors, users, and the community at large can help ensure that AI is being used ethically. I’d like to recommend three things that are both valuable and realistic for HR tech vendors and users to implement.

But first, let me walk you through the three pillars of AI:

Modeling -

Modeling takes messy real-world problems and packages them into neat, formal mathematical objects called models, which can be subjected to rigorous analysis and operated on by computers. However, building a model always loses some information, because not all of the richness of the real world can be captured. The art of building models is knowing what data to keep and what to ignore, and this is one place where bias can emerge.

Inference -

Inference is about answering real-world questions based on a model. The focus is usually on efficient algorithms that can answer useful questions.

Learning -

Machine learning (ML) is the process of discovering patterns in data (patterns that can be more complex than what humans could detect) and then making predictions based on those patterns. ML can answer business questions, detect and analyze trends, and help solve problems. What makes machine learning so powerful is that the machine builds its own models without being told exactly what the parameters and rules are, and can still learn to make highly accurate predictions. While AI scientists can tune the parameters of machine learning algorithms, the machine is left to make its own generalizations about the world. The input data is therefore absolutely essential: an inaccurate or unrepresentative training dataset can lead to poor and unfair outcomes when the model is applied to new data. For example, imagine that a machine learning algorithm is trained on a dataset of animals containing only small dogs and big cats, and is then asked to predict the size of an animal when told only whether it is a dog or a cat. The machine will predict that every dog is smaller than every cat.
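
To make that concrete, here is a minimal sketch (hypothetical data, not any vendor’s code) of how a skewed training set bakes that assumption into a model:

```python
# A skewed training set: every dog is small, every cat is large.
from sklearn.tree import DecisionTreeRegressor

# Species encoded as 0 = dog, 1 = cat; the target is weight in pounds.
X_train = [[0], [0], [0], [1], [1], [1]]
y_train = [8, 10, 12, 40, 45, 50]

model = DecisionTreeRegressor().fit(X_train, y_train)

# The model now "believes" dogs weigh ~10 lb and cats ~45 lb,
# so it will badly mispredict a Great Dane or a kitten.
print(model.predict([[0], [1]]))  # -> [10. 45.]
```

The model is not wrong about its training data; it is wrong about the world, because the training data never showed it a big dog or a small cat.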

While AI can provide huge benefits in terms of efficiency and effectiveness on many tasks, I hope that this explanation of the three AI pillars has illuminated the possibilities for bias that should be actively mitigated. We believe there are three things that HR tech vendors and companies should do to promote a world with more ethical AI.

Look to the science to build better models

The first way vendors can ensure they are building ethical AI is to do their homework when building AI models. For over 100 years, researchers in organizational psychology have investigated how to conceptualize and measure organizationally relevant attributes such as job performance. The richness of research in Industrial/Organizational Psychology and Organizational Behavior helps teams understand what data is important to capture. Models that focus on the most significant factors identified by this science are more likely to produce useful results with fewer bias issues. We cannot expect to build accurate, ethical models without a rich understanding of the academic literature that’s core to what our tools are doing.

Use corroborated, sanitized, and relevant input data

Having enough high-quality data is essential to training ML models (the third pillar). This is a challenge because different organizations capture different data about their employee base and measure and store it in different ways; there is very little standardization. This becomes an issue when vendors providing AI solutions train their models on datasets whose foundational measurements are not the same. Take attrition, for example. If you asked 100 SaaS companies, I would bet you’d find at least 20 different ways of measuring attrition rates. Therefore, I would not recommend applying a vendor’s attrition-prediction algorithm to your organization until you’ve verified that how the model collects and transforms data aligns with your internal definitions. Furthermore, trying to run ML without enough input data is another way bias can be introduced into a system. In practice, this might mean that a vendor needs to hold off on building an ML capability, or a user needs to hold off on using it, until there is enough data to get statistically significant results.
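
As a hedged illustration of why definitions matter, here is a toy example (hypothetical data and definitions, not any vendor’s actual logic) showing two reasonable attrition formulas producing different rates from the same records:

```python
from dataclasses import dataclass

@dataclass
class Departure:
    voluntary: bool    # resigned rather than terminated
    regrettable: bool  # would the company have preferred to keep them?

headcount_start = 200
departures = [
    Departure(voluntary=True,  regrettable=True),
    Departure(voluntary=True,  regrettable=False),
    Departure(voluntary=False, regrettable=False),
    Departure(voluntary=True,  regrettable=True),
]

# Definition A: every departure counts toward attrition.
attrition_all = len(departures) / headcount_start

# Definition B: only voluntary, regrettable departures count.
attrition_regrettable = sum(
    d.voluntary and d.regrettable for d in departures
) / headcount_start

print(f"all departures:          {attrition_all:.1%}")         # 2.0%
print(f"voluntary + regrettable: {attrition_regrettable:.1%}")  # 1.0%
```

A model trained against Definition A will systematically misread an organization that reports attrition under Definition B, which is exactly the kind of mismatch to check for before adopting a vendor’s predictor.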

Proactively audit and cross-validate

Even with our best efforts to model well and train with high-quality data, AI systems sometimes become biased when deployed in the field. Regular audits (like the Adverse Impact Analysis explained below) and cross-validation of results with external measures (like company performance reviews) allow vendors to detect bias, uncover the cause, and address the problem before customers are significantly affected. 

For example, at Searchlight we audit our platform quarterly and perform continuous cross-validation in the background. Our most recent adverse impact audit tested whether the Searchlight Score (the rating the Searchlight platform calculates to predict future on-the-job effectiveness) differed, on average, across gender and demographic groups. Essentially, we checked for bias.
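
For readers curious what such a check involves, here is a simplified sketch with made-up scores (it is not Searchlight’s actual audit methodology): it compares mean scores across two groups and applies the EEOC “four-fifths rule” to pass rates above a hypothetical cutoff.

```python
import statistics

scores = {
    "group_a": [72, 80, 65, 90, 77, 84, 69, 73],
    "group_b": [75, 68, 88, 71, 79, 82, 66, 74],
}
cutoff = 70  # hypothetical pass threshold

pass_rates = {}
for group, vals in scores.items():
    pass_rates[group] = sum(v >= cutoff for v in vals) / len(vals)
    print(f"{group}: mean={statistics.mean(vals):.1f}, pass rate={pass_rates[group]:.0%}")

# Under the four-fifths rule, a ratio below 0.8 flags potential adverse impact.
impact_ratio = min(pass_rates.values()) / max(pass_rates.values())
print(f"impact ratio: {impact_ratio:.2f}")
```

A real audit would use far larger samples, statistical significance tests, and the demographic categories relevant to the jurisdiction, but the underlying question is the same: do outcomes differ by group more than chance would explain?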

Our latest analysis found no adverse impact in the Searchlight Score based on gender or ethnicity. We’re happy to see that the safeguards we’ve put in place against bias in our platform are working as intended. The results showed that Searchlight does not unfairly favor certain groups over others, and that our prediction of candidate performance remains valid.

Figures: Searchlight Score results by ethnicity and by gender.

Building ethical AI and controlling for bias is crucial as this technology becomes more widely used in different parts of the HR tech stack. The steps explained above will help ensure that the HR tech community is building solutions that help reduce bias and unfairness in the world and help our technology live up to our ideals.

Anna Wang

CTO

Anna, our co-founder & CTO, merges engineering and AI with a passion for fiction and history. Leads teams for faster, right-fit hiring insights.

