Algorithms have taken a lot of heat recently for producing biased decisions. For example, people were outraged over a recruiting algorithm that overlooked female applicants.

Should we be alarmed by these biases? Yes, of course. But, ultimately, it’s the ways organizations respond to their own algorithms that will determine whether they make strides in de-biasing their decisions.

So far, the typical response when these biases surface is for the media to scapegoat the algorithm while the company reverts to human decision-making. But this is the wrong approach to addressing bias. Rather, organizations should use statistical algorithms as the magnifying glasses they are: Algorithms aggregate individual data points in order to unearth patterns that people have difficulty detecting on their own. When algorithms surface biases, companies should seize on these “failures” as opportunities to learn when and how bias occurs. If they do, they’ll be better equipped to de-bias their practices and improve their overall decision-making.

When algorithms surface biases, companies learn about their past decision processes: what drives biases, and which irrelevant information distracts the organization from useful information. Companies can apply this magnifying-glass strategy to any important decision process that involves predictions, from hiring to promotions.

Leveraging algorithms as magnifying glasses can also save organizations time. For instance, if a department hires two people each year, it may take a while before the organization realizes that this department of 10 consistently includes only one woman. An algorithm analyzing such infrequent decision-making could figure this out much faster. Making these biases glaringly obvious gives organizations the opportunity to address the problems behind them. The alternative is continuing with business as usual, letting bias seep into virtually every hiring and promotion decision.
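To make the aggregation point concrete, here is a minimal Python sketch. The hiring records below are invented for illustration: two hires per year, each of which looks unremarkable on its own, yet aggregating them surfaces the pattern immediately.

```python
from collections import Counter

# Hypothetical hiring records for one department of 10, two hires per year.
# Each record is (year, gender_of_hire); the data is illustrative only.
hires = [
    (2016, "M"), (2016, "M"),
    (2017, "M"), (2017, "F"),
    (2018, "M"), (2018, "M"),
    (2019, "M"), (2019, "M"),
    (2020, "M"), (2020, "M"),
]

# Viewed two decisions at a time, no single year stands out;
# aggregated across years, the imbalance is stark.
counts = Counter(gender for _, gender in hires)
share_female = counts["F"] / len(hires)
print(counts)         # aggregate counts across all years
print(share_female)   # 1 woman out of 10 hires
```

The same aggregation that a statistical algorithm performs as a matter of course is exactly what a human reviewer, seeing only two decisions a year, tends to miss.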

Once biases are detected, organizations can correct biased decisions in three main ways. The first may be the most difficult. It involves creating better input data for the algorithm, which starts with changing hiring practices. Second, organizations can continue to use the same historical data but create new rules for the algorithm, such as adding a variable or constraint that explicitly accounts for diversity. Third, organizations can examine how existing input variables may introduce bias or consider new, more appropriate input variables.
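The second approach above can be sketched in a few lines of Python. This is a toy illustration, not a recommendation of any specific fairness rule: the candidate pool, the scores, and the `shortlist` function are all hypothetical, and the rule shown simply guarantees that each group's strongest candidate reaches the shortlist before the remaining slots are filled by score alone.

```python
# A minimal sketch of "same historical data, new rule": candidate scores are
# left untouched, but a shortlist rule is added at the output stage.
def shortlist(candidates, k):
    """candidates: list of (name, group, score). Returns k finalists,
    guaranteeing the top-scoring candidate from each group a slot first."""
    picked = []
    for g in sorted({c[1] for c in candidates}):
        best = max((c for c in candidates if c[1] == g), key=lambda c: c[2])
        picked.append(best)
    # Fill any remaining slots purely by score.
    rest = sorted((c for c in candidates if c not in picked),
                  key=lambda c: c[2], reverse=True)
    picked += rest[: max(0, k - len(picked))]
    return picked[:k]

# Hypothetical pool: by score alone, the top three would all be men.
pool = [("A", "F", 0.85), ("B", "M", 0.95), ("C", "M", 0.93),
        ("D", "F", 0.80), ("E", "M", 0.90)]
print(shortlist(pool, k=3))
```

With scores alone, the shortlist here would be B, C, and E; the added rule changes the outcome without touching the historical data the scores were built from, which is the trade-off this approach accepts.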

In the end, algorithms are tools. People build them, determine if their output is accurate and decide when and how to act on that output. Data can provide insights, but people are responsible for the decisions made based on them.

(Written by Jennifer M. Logg, an assistant professor at Georgetown University.)