Seattle workshop explores potential dangers of artificial intelligence.
A lot of tech-smart people think artificial intelligence (AI) might pose a threat to the existence of humankind. Others think it’s not something to worry about, or not for a very long time, if ever.
Operating on the reasonable assumption that what happens tomorrow depends on what we do today, the White House Office of Science and Technology Policy is holding public workshops in four cities to examine the state of AI and look ahead at potential benefits and problems.
The University of Washington School of Law and the UW Tech Policy Lab co-hosted the first meeting Tuesday with an emphasis on public policy. What I heard is that what happens with AI is about us — how we design it, manage it and use it. We have to be accountable.
Panelist Jack Balkin, a law professor at Yale University, said the real question is not whether machines might rise up, but how humans might misuse the technology to dominate or oppress other humans.
Artificial intelligence already is having an impact on people’s lives. Programming allows robots to do some of the jobs humans used to do, and more jobs are being automated all the time. Software is being used to pull together news reports and screen job applicants.
A ProPublica story about software used around the country to predict future criminal behavior kept coming up during discussions. ProPublica found that the risk-assessment algorithm was biased against black Americans. One finding was that, in cases where the predictions proved inaccurate, the tools had assigned higher risk scores to black people and lower ones to white people.
One panelist mentioned an AI tool that helps people find jobs but sent listings of higher-paying jobs mostly to men.
A computer scientist in the audience said he didn’t believe either case represented bias. He allowed that a programmer could write a program that includes his biases, but said that what’s called a learning algorithm would be different. In the latter, humans input examples, then the algorithm learns on its own, rather than simply following programming instructions.
None of the panelists suggested the bias in the systems was intentional or hostile, but rather a matter of not considering how the information that human programmers put into a system could affect results. It’s all about the humans.
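The distinction the audience member drew can be illustrated with a small sketch: even when no programmer writes a biased rule, a learning algorithm can absorb bias from the historical decisions it is trained on. The data below is entirely hypothetical, a toy version of a hiring screener.

```python
# Toy sketch (hypothetical data): a learning algorithm is never told
# about group membership, yet it reproduces the bias baked into past
# human decisions it learns from.

# Historical records: (years_of_experience, zip_code_group, hired?)
# Suppose past decisions favored zip group 0 at equal experience.
training = [
    (5, 0, True), (3, 0, True), (2, 0, True),
    (5, 1, False), (3, 1, False), (6, 1, True),
]

def learn_threshold(data, zip_group):
    """Learn, per zip group, the lowest experience level that was hired."""
    hired = [exp for exp, z, label in data if z == zip_group and label]
    return min(hired) if hired else float("inf")

thresholds = {z: learn_threshold(training, z) for z in (0, 1)}

def predict(experience, zip_group):
    """Screen an applicant using the learned per-group thresholds."""
    return experience >= thresholds[zip_group]

# Two equally qualified applicants get different outcomes because the
# algorithm learned the pattern in the skewed labels:
print(predict(4, 0))  # True
print(predict(4, 1))  # False
```

No line of this code mentions race or any protected attribute; the unequal treatment comes entirely from the training examples, which is the panelists' point that the humans supplying the data remain accountable.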
Kate Crawford, a principal researcher at Microsoft Research New York City and senior researcher at NYU Information Law Institute, gave an example in which a learning algorithm designed to recognize faces labeled black people’s faces as gorillas because it had been trained using mostly white faces.
Panelists agreed that sometimes having a broader spectrum of people in the room when choices are being made might help avoid inadvertent bias.
Then there’s the question of ethics and morality. If more jobs that affect people’s lives are going to be done by programs, the people who create those programs need to have a new kind of professionalism, Balkin said. They need to be trained to recognize the moral dimensions of their work.
That would apply whether the AI was offering legal advice or determining whether to fire a missile.
Bill Gates, Stephen Hawking, Elon Musk and other tech and science luminaries have expressed concerns about where we’re headed, while others, like Oren Etzioni, don’t see AI as a threat. Etzioni, chief executive of the Allen Institute for Artificial Intelligence and a UW professor of computer science and engineering, spoke at the workshop.
He said AI that does one thing well is everywhere. But the kind of AI we see in science fiction, that can act on its own over a broad range of tasks, is a long way off. He said his children far outclass AI on tasks that aren’t black and white. AI accomplishments that seem amazing are still the work of people.
“We’re just getting started,” he said. “We have time to think these things through.”
I hope we do just that, but we don’t always. How long did it take us to understand the impact we were having on the planet by burning fossil fuels, let alone to begin doing something about it?
We need to keep an eye on ourselves and on what we're building, so that we reap the benefits of AI while avoiding its dangers.