What Is Machine Learning Fairness? What You Need to Know

Written by Coursera Staff

Here’s what you need to know about machine learning fairness and incorporating ethics into an increasingly automated world.


Key takeaways

  • Machine learning fairness is an important consideration when developing machine learning models because it helps prevent biased outcomes.

  • By avoiding bias in the data you use to train your model and implementing machine learning fairness tools, you can develop fairer machine learning models.

  • Fairness in machine learning models can help protect groups of people from prejudice and unfair outcomes.

Learn how machine learning fairness can help your model produce ethical outcomes. If you’re ready to start developing in-demand machine learning skills, consider the IBM Machine Learning with Python and Scikit-learn Professional Certificate, where you can practice training neural networks, work with popular machine learning libraries, and code your own projects.

What is fairness in machine learning?

Machine learning fairness is the process of detecting and correcting algorithmic bias (for example, bias related to race and ethnicity, gender, sexual orientation, disability, and class) in machine learning models. Machine learning is a branch of artificial intelligence (AI) built on the idea that computers can learn from collected data to identify patterns and make human-like decisions with minimal human intervention.
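One common way to make “fairness” concrete is a group metric such as demographic parity: the model’s positive-prediction rate should be similar across groups. The sketch below, using purely hypothetical predictions and group labels, shows how that gap can be measured:

```python
import numpy as np

# Hypothetical model outputs (1 = favorable decision) and group labels
# for ten people; the values are illustrative only.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# Selection (positive-prediction) rate per group
rate_a = y_pred[group == "A"].mean()  # 0.6
rate_b = y_pred[group == "B"].mean()  # 0.4

# Demographic parity difference: 0 would mean equal rates
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2f}")  # 0.20
```

Demographic parity is only one of several competing fairness definitions; others compare error rates or calibration across groups.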


Why is it important to address fairness in machine learning?

Unintentional discrimination in machine learning algorithms is just one reason why addressing fairness and AI ethics is important.

Machine learning is enmeshed in the systems and applications we use to help us buy furniture, find jobs, recruit new hires, apply for universities, listen to music, get loans, find news, search on Google, target ads, and so much more. It enhances our ability to streamline information and provide recommendations, but it can have serious consequences if it is trained on the wrong information and fails to promote fair and equal practices.

To remove these potential biases, data scientists and machine learning experts must look out for them in algorithmic models and correct them. Because machine learning, by definition, learns by example, it can also "learn" to avoid bias as long as it is given the right data. And because machine learning is used in areas as varied as criminal justice, corporate human resources, and credit lending, it's important that it adopt fair and ethical processes.
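For example, if one group is underrepresented in your training data, one simple corrective is to reweight examples so that each group contributes equally to the model’s loss. This is a minimal sketch assuming scikit-learn and made-up data; it is one mitigation among many, not the definitive fix:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: group B is heavily underrepresented.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = rng.integers(0, 2, size=200)
group = np.array(["A"] * 160 + ["B"] * 40)

# Weight each example inversely to its group's frequency so both
# groups carry equal total weight during training.
weights = np.where(group == "A", 1.0 / 160, 1.0 / 40)

model = LogisticRegression()
model.fit(X, y, sample_weight=weights)  # scikit-learn accepts per-sample weights
```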

Read more: Understanding AI Bias

Case study: The COMPAS controversy

COMPAS is a decision support tool developed by Northpointe and used in the US court system to assess the likelihood that a defendant will re-offend (recidivate). In 2016, ProPublica reported that the tool's errors were unevenly distributed: Black defendants who did not re-offend were flagged as high risk at roughly twice the rate of white defendants who did not re-offend. Northpointe countered that the scores were equally well calibrated across groups. The dispute showed that reasonable quantitative definitions of fairness can conflict with one another.
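The numbers below are stylized, not actual COMPAS figures, but they illustrate the heart of the dispute: when two groups have different underlying re-offense rates, a score can be equally precise ("calibrated") in both groups and still flag people who never re-offend at very different rates.

```python
# Stylized confusion-matrix counts (TP, FP, FN, TN) for two groups
# with different base rates; illustrative numbers, not real COMPAS data.
groups = {
    "Group A (50% re-offend)": (300, 200, 200, 300),
    "Group B (30% re-offend)": (180, 120, 120, 580),
}

for name, (tp, fp, fn, tn) in groups.items():
    precision = tp / (tp + fp)  # equal in both groups: 0.60
    fpr = fp / (fp + tn)        # false positive rate diverges
    print(f"{name}: precision={precision:.2f}, FPR={fpr:.2f}")

# Group A (50% re-offend): precision=0.60, FPR=0.40
# Group B (30% re-offend): precision=0.60, FPR=0.17
```

Results like this are why researchers concluded that several intuitive fairness criteria cannot all hold at once when base rates differ between groups.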

3 ways to promote fairness in machine learning

For those working with algorithms in data science and artificial intelligence, there are a few ways to help ensure that machine learning is fair and ethical. You can:

  • Examine an algorithm’s ability to influence human behavior and assess whether it is biased; then design algorithmic methods that avoid predictive bias (see the audit sketch after this list).

  • Identify any vulnerabilities or inconsistencies in public data sets and assess whether there is a privacy violation.

  • Utilize tools that can help prevent and eliminate bias in machine learning.
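As a concrete starting point for the first item above, a minimal audit might compare a model’s selection rate and true positive rate across groups. The function below uses hypothetical test arrays; how large a gap counts as “biased” is a policy judgment, not something code can decide:

```python
import numpy as np

def audit_by_group(y_true, y_pred, group):
    """Print selection rate and true positive rate for each group."""
    for g in np.unique(group):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        positives = yt == 1
        tpr = yp[positives].mean() if positives.any() else float("nan")
        print(f"group={g}: selection_rate={yp.mean():.2f}, TPR={tpr:.2f}")

# Hypothetical test labels, predictions, and group memberships:
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
audit_by_group(y_true, y_pred, group)
```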

You can learn these technical skills and more with the Ethics in the Age of AI specialization from LearnQuest.

Tools for machine learning fairness

There are plenty of courses, tools, and processes available to help you integrate machine learning fairness into your organization’s workflow and prevent machine learning malpractice. Here are a few you can check out:

  • IBM’s AI Fairness 360: A Python and R toolkit of fairness metrics and bias-mitigation algorithms that helps users and researchers detect, evaluate, and reduce discrimination and bias in machine learning models.

  • Google’s What-If Tool: A visualization tool that explores a model’s performance on a data set and assesses it against predefined fairness constraints. It supports binary classification, multi-class classification, and regression tasks.

  • Google’s Model Cards: A documentation framework that records a model’s intended use cases and performance characteristics, helping users understand the conditions under which a model is safe and appropriate to deploy.

  • Fairlearn: An open-source Python toolkit that assesses and improves fairness in machine learning. With an interactive visualization dashboard and unfairness-mitigation algorithms, this tool helps users analyze the trade-offs between fairness and model performance (see the sketch after this list).

  • Deon: An ethics checklist that facilitates responsible data science by evaluating and systematically reviewing applications for potential ethical implications from the early stages of data collection to implementation. 
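To give a flavor of how these toolkits work, here is a minimal Fairlearn sketch on made-up data: it reports per-group accuracy and selection rate with `MetricFrame`, then retrains under a demographic parity constraint using the `ExponentiatedGradient` reduction. The class and argument names reflect Fairlearn’s documented API, but check the current documentation before relying on them:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import DemographicParity, ExponentiatedGradient

# Hypothetical data: features X, labels y, and a sensitive feature s.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = rng.integers(0, 2, size=300)
s = rng.choice(["A", "B"], size=300)

# 1) Assess a baseline model's behavior per group.
baseline = LogisticRegression().fit(X, y)
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y,
    y_pred=baseline.predict(X),
    sensitive_features=s,
)
print(mf.by_group)

# 2) Mitigate by retraining under a demographic parity constraint.
mitigator = ExponentiatedGradient(
    LogisticRegression(), constraints=DemographicParity()
)
mitigator.fit(X, y, sensitive_features=s)
y_mitigated = mitigator.predict(X)
```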

Explore our free resources for machine learning

Interested in learning more about machine learning skills? Check out our free resources to stay up to date with the latest industry developments, like our LinkedIn newsletter, Career Chat.

Whether you want to get comfortable with an in-demand technology or advance your abilities, keep growing with a Coursera Plus subscription. You’ll get access to over 10,000 flexible courses. 


This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.