You may not know it, but computer programs are making decisions about you every day – and not just about which news articles you should read or which ads you see online. These programs can deny you a loan or reject your job application. They can put you in jail or trigger a visit from child protective services.

These programs promise advantages over human judgment: lower costs, fewer errors and greater transparency. But there is overwhelming evidence that they can also reinforce racial and gender discrimination and widen the economic gap.

The program itself is not the problem; the problem is the historical data the program learns from. If a company has a historical gender bias in hiring, any program trained on that data may propagate the bias, as Amazon recently learned. If a criminal-justice system has a historical racial bias, any program that uses that data may propagate that bias, as we have all recently come to realize. The list goes on.
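
To make that mechanism concrete, here is a minimal sketch – synthetic data, hypothetical feature names, illustrative only – of how a model trained on biased hiring records reproduces the bias it was fed:

```python
# A minimal sketch (synthetic data, illustrative only) of how a model
# trained on historically biased hiring outcomes reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)       # 0 = female, 1 = male (hypothetical coding)
skill = rng.normal(0, 1, n)          # true qualification, identical across groups

# Historical decisions: equally skilled women were hired less often.
hired = (skill + 0.8 * gender + rng.normal(0, 1, n)) > 0.5

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# The trained model now rates identical candidates differently by gender.
candidates = np.array([[1.0, 0], [1.0, 1]])   # same skill, different gender
print(model.predict_proba(candidates)[:, 1])  # hire probability: female vs. male
```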

State Senate Bill 5527 defines an automated decision system (ADS) as “any algorithm, including one incorporating machine learning or other artificial intelligence techniques, that uses data-based analytics to make or support government decisions, judgments or conclusions.”

The authors of the bill correctly capture the issues: ADS can amplify, operationalize and legitimize discrimination, and their opacity invites unquestioning deference – “If the computer said so, it must be right.”

For example, the criminal-justice system nationwide is increasingly adopting risk-assessment tools to predict recidivism and bail violations, and judges use these predictions to inform sentencing. But these tools are often built on fundamentally flawed assumptions, such as equating arrest patterns with crime patterns, or assuming that bail violations signal flight risk rather than a lack of access to transportation or other socioeconomic barriers.
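
The arrests-equal-crime assumption is easy to see in a toy simulation. In the sketch below (synthetic numbers, purely illustrative), two groups offend at identical rates, but one is policed more heavily, so any “risk” estimate fit to arrest records diverges from reality:

```python
# A minimal sketch (synthetic numbers, illustrative only) of the
# arrests-as-crime fallacy: two groups offend at the same rate, but one
# is policed more heavily, so arrest-based "risk" estimates diverge.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)            # two hypothetical neighborhoods
offended = rng.random(n) < 0.10          # identical true offense rate: 10%

# Unequal enforcement: offenses in group 1 are caught 3x as often.
catch_rate = np.where(group == 1, 0.30, 0.10)
arrested = offended & (rng.random(n) < catch_rate)

for g in (0, 1):
    print(f"group {g}: true offense rate {offended[group == g].mean():.3f}, "
          f"arrest-based estimate {arrested[group == g].mean():.3f}")
# A tool trained on the arrest records would score group 1 as roughly
# 3x "riskier" despite identical underlying behavior.
```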

Moreover, these tools have often been deployed without independent external review, without community involvement and sometimes even without rigorous evidence that their predictions are accurate.

In response, SB 5527 requires ADS deployed in the public sector to demonstrate that they do not discriminate, that they adhere to third-party recommendations and that they provide simple explanations of decisions. These are good ideas being actively discussed in the academic research community and in industry, with leadership from tech companies that have a strong presence in the state.
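
The bill does not prescribe how non-discrimination should be demonstrated, but one widely used audit statistic is the disparate-impact ratio, often compared against the “four-fifths rule” from U.S. employment law. A minimal sketch, with hypothetical audit data:

```python
# One common audit check (not specified by the bill): the disparate-impact
# ratio of favorable-outcome rates across groups, often compared against
# the four-fifths (0.8) rule of thumb.
def disparate_impact(decisions, groups, favorable=1):
    """Ratio of the lowest group's favorable rate to the highest group's."""
    rates = {}
    for g in set(groups):
        members = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(d == favorable for d in members) / len(members)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical audit data: 1 = benefit granted, 0 = denied.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0]
groups    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

ratio, rates = disparate_impact(decisions, groups)
print(rates)                   # favorable rate per group
print(f"ratio = {ratio:.2f}")  # below 0.8 flags potential disparate impact
```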

Washington is a national and global leader in technology. But tech leadership in 2019 means innovative policy that moves as fast as innovative technology. The goals of SB 5527 are sound: reduce uncertainty about adoption of ADS in the public sector, and create economic opportunities for products and services that can certify compliance for these systems.

The cost to technology developers to meet these conditions will be mitigated by the significant investments already being made to meet the European Union’s General Data Protection Regulation and to respond to increasing customer awareness about algorithmic fairness and transparency. Passage of a bill along these lines will set an example for other states and cities to similarly encourage public-private partnerships in the use of ADS, while protecting civic interests.

Despite these strengths, the definitions of ADS used in the bill can be construed too broadly: any decision system that involves any kind of calculation could potentially be included, whether or not it involves the technologies targeted by the bill. At the University of Washington, decision systems spanning procurement, admissions and even grading could be considered ADS.

While some of these systems certainly can and should be reviewed for transparency and discrimination, that is not why this bill was written. The definitions of ADS should be reframed to focus on cases where algorithms are replacing human review.

It is this automatic decision-making at scale, with limited opportunity for human oversight, that makes a biased ADS dangerous.

I am confident that a bill like SB 5527 can be passed in Washington that will focus resources on preventing risks in these cases, without imposing prohibitive costs to review existing decision systems that involve human oversight.
