Kate Crawford: “The rich fear the machine rebellion; they have nothing else to worry about.”

The Microsoft researcher fights against the social inequalities generated by algorithms and artificial intelligence.

Microsoft researcher Kate Crawford in Madrid.

Kate Crawford (Sydney) doesn't say the year she was born. Any company could use that information to try to sell her a product or even influence her voting intention. "Where you live, your age, your gender, or even your friends... it seems like trivial information, but you have to be aware of what can be done with it," she explains. Her fight isn't to make tech companies pay for the use of personal data, but to bring to light the social problems arising from technology. Crawford studies how algorithms marginalize minorities. In addition to her work as a researcher at Microsoft, in 2017 she founded the AI Now Institute with colleagues from New York University, an independent institute that aims to help governments correct the inequality biases of their algorithms.

Her goal is to put an end to so-called black boxes: automated, completely opaque systems used by governments to decide fundamental issues affecting people's lives, such as who receives long-term care benefits. "No one knows how they work or what criteria were used to train these machines," says the expert, who was commissioned by the Obama Administration to organize a conference on the social implications of artificial intelligence in 2016.

Crawford participated last week in the Conversation on Artificial Intelligence and its Impact on Society, organized by the Ministry of Energy and Digital Agenda in Madrid, where she presented the conclusions of her report Algorithmic Impact Assessment, a guide to detecting injustices in, and improving, the algorithms used by public authorities.

Question: The digital world is reproducing the inequalities of the real world. From what sources is data extracted for training algorithms?

Answer: You have to understand how artificial intelligence systems work. To teach them to distinguish a dog from a cat, we give them millions of images of each animal and train them to learn to tell the two apart. The problem is that the same kind of system, the same software, is being used by police in the United States to predict crimes. They train the algorithm with photos of defendants and with data from the neighborhoods with the most crimes or the most arrests. Those patterns are biased; they reproduce stereotypes, and the artificial intelligence system takes them as the only truth. We are injecting it with our limitations, our way of marginalizing.
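
A minimal sketch of the dynamic she describes, using synthetic data and the open-source scikit-learn library (none of this comes from a real police system, and all variable names are hypothetical): when the training labels reflect where arrests are recorded rather than what people actually do, the model's "risk" scores simply reproduce that pattern.

```python
# Illustrative only: synthetic data, not any real police system or dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One binary feature: whether a person lives in a heavily patrolled neighborhood.
heavily_patrolled = rng.integers(0, 2, size=n)

# Assume the underlying behavior is identical across neighborhoods...
offence = rng.random(n) < 0.10
# ...but recorded arrests are far more likely where patrols are concentrated.
recorded_arrest = offence & (rng.random(n) < np.where(heavily_patrolled, 0.9, 0.3))

# Train on the recorded arrests, the only "truth" the system ever sees.
model = LogisticRegression().fit(heavily_patrolled.reshape(-1, 1), recorded_arrest)

# The learned "risk" tracks policing intensity, not behavior.
print(model.predict_proba([[1]])[0, 1])  # roughly 0.09 for patrolled neighborhoods
print(model.predict_proba([[0]])[0, 1])  # roughly 0.03 elsewhere
```

The point of the sketch is that nothing in the code is malicious; the skew lives entirely in the training labels, which is exactly where it is hardest to see from the outside.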

Q. Is this data collected randomly from the Internet?

A. Databases are used. One of the most popular and most widely used by technology companies is ImageNet, which contains 13,000 images. 78% of them feature men and 84% feature white people. These are the benchmarks for any system trained with that tool. The way we label images is closely related to our culture and our social constructs. ImageNet was created by compiling photographs from Yahoo News between 2002 and 2004. The face that appears most frequently is that of George W. Bush, who was the president of the United States at the time. Today, it's still one of the most widely used databases. Artificial intelligence systems appear neutral and objective, but they aren't. They tell you a very particular version of the story.
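
Percentages like the ones she cites come from simple composition counts over a dataset's annotations. A minimal sketch of that kind of audit, assuming a hypothetical metadata file (labels.csv) with hand-annotated demographic columns; benchmark datasets do not normally ship with such annotations, so auditors usually have to add them.

```python
# Sketch of a dataset composition audit; labels.csv and its columns are hypothetical.
import pandas as pd

labels = pd.read_csv("labels.csv")  # assumed columns: image_id, gender, skin_tone, person_name

# Share of images per category: figures such as "78% men" are exactly this
# kind of count over whatever the dataset happens to contain.
print(labels["gender"].value_counts(normalize=True))
print(labels["skin_tone"].value_counts(normalize=True))

# Most frequently pictured individuals, if the metadata identifies them.
if "person_name" in labels.columns:
    print(labels["person_name"].value_counts().head(10))
```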

Q. Which companies are interested in allocating resources to analyze these biases?

A. We've done it at Microsoft. In our study "Man is to Computer Programmer as Woman is to Homemaker?" we found that men tend to be associated with professions like politician or programmer, and women with model, housewife, mother... By analyzing hundreds of texts, we extract those patterns, the social stereotypes that the algorithms then replicate. That's why if you search Google Images for the word "doctor," you see photos of men in white coats, and if you type "nurse," you see only women in hospitals. When people see that, the most basic forms of bias are automatically reinforced. We need to start questioning how these systems are built.
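
The study she refers to probed word embeddings with analogy queries. A rough illustration of the same kind of probe, using the open-source gensim library and the pretrained Google News vectors rather than the authors' own code or data:

```python
# Analogy probe over pretrained word embeddings (gensim + Google News vectors).
# Not the study's code; just the style of query its title refers to.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # large, multi-gigabyte download

# Vector arithmetic: programmer - man + woman ≈ ?
print(vectors.most_similar(positive=["programmer", "woman"],
                           negative=["man"], topn=5))

# The same probe for the occupations mentioned in the interview.
print(vectors.most_similar(positive=["doctor", "woman"],
                           negative=["man"], topn=5))
```

Whatever terms such a query returns, they are read directly out of statistical associations in the training text, which is the mechanism by which the stereotypes in that text end up in downstream systems.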

Q. In Europe, it's not yet common for governments to use AI for decision-making. What impact is it having in the United States?

A. Last March, the media reported on how the government is using an algorithm to decide when a person should receive home care. Suddenly, many of those benefits were cut off, and elderly people who had been receiving home care for years were left without it. What had changed? The algorithm didn't take context into account and made bad decisions. No one had evaluated the system to see how many people had been left out. It was a scandal in the United States. It's an example of a system implemented without sufficient research. People with fewer economic resources and lower educational levels are the ones who are suffering first.

Q. Should governments make these algorithms public?

A. In one of the reports we published last year at the AI Now Institute, we made a crucial recommendation: that governments stop using closed algorithmic systems. They should allow independent experts to audit those formulas to detect weaknesses and biases. That is very important for ensuring equal opportunities. We realized that until then no one had published any research on the topic; there was no guidance. So we formed a team of experts in law, engineering, computer science, and sociology, and we developed a mechanism to help governments build transparent systems that let citizens know the details, including whether their data has been processed correctly. Otherwise, they will never know how a decision that directly affects their lives, their daily lives, was made.

Q. Have you already tested your anti-bias method with any administration?

A. We're testing it with the New York City Council; it's the first city in the United States to implement it. We're measuring how algorithms affect citizens. We've also presented it to the European Commission and to Spain, where the first report on the consequences of AI, commissioned by the Ministry from a committee of experts, will be published in a month. I hope it goes ahead if the change of government finally happens (this interview was conducted before the motion of censure against Mariano Rajoy). Europe has arrived late to the game, and that's why it needs to learn from the mistakes of the United States and China, countries where the application of AI to public decision-making is more advanced.

Q. And should companies like Facebook be required to make their algorithms public?

A. Looking at Facebook or Google's algorithms wouldn't help us. They're massive and complex systems with hundreds of thousands of algorithms operating simultaneously, and they're protected by trade secrets. Governments aren't going to use those algorithms; they're going to create public systems, and that's why they must be open and transparent. Maybe not for the general public, but certainly for independent commissions of experts.

Q. Artificial intelligence is increasingly present in companies' recruitment processes. What types of profiles does this technology affect?

A. In the United States there's a new company, HireVue, that screens job candidates for companies like Goldman Sachs and Unilever using artificial intelligence. During the interview, they record and track 250,000 points on your face and then analyze your expressions. With this data they determine whether you'll be a good leader or whether you'll be honest. They also study the tone of your voice and identify behavioral patterns. We can't assume we know what someone is like from their expressions; there's no scientific basis for it. Phrenology, which claimed to decipher aspects of personality through facial analysis, became popular in the nineteenth century. Another danger is that companies look for people who resemble their current employees, and the impact this has on diversity is tremendous. They're creating monocultures.

Q. Do you think the time has come to debunk some of the beliefs about artificial intelligence, such as the idea that machines will be able to become conscious? How much harm are some gurus doing?

A. It's a terrible distraction from the real problems that AI is creating today. Typically, it's the richest and most powerful men in Silicon Valley who most fear the Singularity, the hypothetical machine rebellion, because they have nothing else to worry about, nothing else to be afraid of. For the rest of us, the fears are about how to get a job, how to make ends meet and pay the rent, how to pay for health insurance. To think that machines will have feelings is a misunderstanding; it means having no idea how human consciousness works, which is impossible for a machine to replicate. We have bodies and very complex connections that are not just brain impulses. We are bodies in a space, living in a community and in a culture. People see the term "artificial intelligence" and think we're creating human intelligence, when what we're actually doing is designing patterns for recognition and automation. If we called it "artificial automation," the debate would change completely.

Ana Torres Menárguez

El País, Spain