
Machine Learning, Artificial Intelligence and Bigotry?


An unintentional shock

In June 2015, Google’s Photos app identified photos of black people as “gorillas.” Brooklyn-based programmer Jacky Alcine shared a screenshot on Twitter in which the Photos app had labeled a photo of him and a friend, both of whom are black, as “gorillas.”

“What kind of sample image data you collected that would result in this son?” Alcine tweeted along with the screenshot.

Not an isolated incident

The application, launched in May 2015, used image recognition and machine learning “to recognize people, places and events on its own,” according to USA Today. Google’s chief Google+ architect responded swiftly to Alcine, saying “This is 100% not OK” and promising that the problem would be corrected.

Similar situations have arisen in which other marginalized communities are confronted with stereotyping and discriminatory computer programs. On September 12th, the lede of a New York Post article read: “A viral study that revealed artificial intelligence could accurately guess whether a person is gay or straight based on their face is receiving harsh backlash from LGBTQ rights groups.”

Stanford University researchers claimed that their study found an artificial intelligence program “correctly distinguished gay men from straight men 81 percent of the time and 74 percent of the time for women.”

A personal perspective

I remember seeing that article in my Twitter recommendations when it came out and shaking my head, scrolling past it without opening it. At the time, I couldn’t articulate to myself why I was reluctant to read it.

But now, I’m going to try.

As a queer woman, it just felt like one more situation where I was being “othered”—marveled at like a zoo animal, “identified” like a criminal or a malignant tumor.

It’s not the worst treatment I’ve had as a lesbian or as a woman, but it belongs in the same category of experiences, ideas and treatment that keep marginalized communities marginalized.

What does it mean that I can be “picked out of a lineup” by a computer program and identified as queer? Does it mean that my government, now helmed by some pretty severe homophobes and misogynists, could tag me as some sort of undesirable in an extreme, hopefully only science-fiction dystopian story?

Hopefully, it’s nothing that extreme; but, that’s why I’m writing this article.

These two cases, and the many others like them, are perfect examples of why we must be extremely cautious as we go forward and develop these marvelous technologies.

The culprit

These situations are among the clearest reminders that AI and ML technologies, however robotic and automated they appear, are still developed by humans and therefore subject to human biases: they make it computationally feasible to judge a book by its cover, and in human-supervised machine learning the act of picking training data is itself shaped by human choices.
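To make that concrete, here is a minimal sketch, using synthetic data and hypothetical group labels rather than any of the systems discussed above, of how a lopsided, human-chosen training sample quietly becomes a lopsided model:

```python
# Toy sketch: a supervised classifier trained almost entirely on "group A"
# examples performs noticeably worse on "group B", whose feature distribution
# is shifted. The only thing a human chose here was the training sample.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two classes separated along the first feature; `shift` moves the whole group."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, 2))
    X[:, 0] += np.where(y == 1, 1.5, -1.5) + shift
    return X, y

# Human-chosen training set: 95% group A, only 5% group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=3.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out data from each group.
for name, shift in [("group A", 0.0), ("group B", 3.0)]:
    X_test, y_test = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

On data like this, the model scores well on the group it mostly saw during training and far worse on the group it barely saw; nothing in the algorithm is malicious, the skew comes entirely from the data it was handed.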

The ability of a computer to “identify” a “gay person” is interesting and points to the possibilities this technology could offer our world; but we do not yet live in a world where being gay, lesbian, transgender, a woman, or a person of color is merely a neutral category to be identified.

In many parts of the world, and still in many parts of our country, being outed as queer puts people at risk, “especially in brutal regimes that view homosexuality as a punishable offense.” That particular study was further problematic because its study pool consisted only of Caucasian people from dating websites. One critic wrote: “Technology cannot identify someone’s sexual orientation. This research isn’t science or news, but it’s a description of beauty standards on dating sites that ignores huge segments of the LGBTQ community.”

Possible social impact

The study, and therefore the technology, was exclusionary: it “identified” only a narrow subset of gay people. Beyond that, the question has to be asked: what parameters was the technology taught that led it to the conclusions it reached? Machine learning does not occur in a vacuum, meaning that those who teach, or rather program, these applications and machines intrinsically imbue them with their human biases in some way. Could this be a result of bad training data?
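One practical response, sketched below with made-up values and hypothetical group labels, is to disaggregate evaluation by group, so a disparity shows up as a number instead of hiding inside a single headline accuracy figure:

```python
# Minimal audit sketch: given a model's predictions, the true labels, and a
# group label for each example, report accuracy and false-positive rate per
# group so disparities between groups become visible.
from collections import defaultdict

def audit_by_group(y_true, y_pred, groups):
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "fp": 0, "neg": 0})
    for t, p, g in zip(y_true, y_pred, groups):
        s = stats[g]
        s["n"] += 1
        s["correct"] += int(t == p)
        if t == 0:
            s["neg"] += 1
            s["fp"] += int(p == 1)
    for g, s in sorted(stats.items()):
        acc = s["correct"] / s["n"]
        fpr = s["fp"] / s["neg"] if s["neg"] else float("nan")
        print(f"{g}: n={s['n']}, accuracy={acc:.2f}, false-positive rate={fpr:.2f}")

# Toy usage with invented numbers:
audit_by_group(
    y_true=[1, 0, 1, 0, 0, 1, 0, 0],
    y_pred=[1, 0, 1, 1, 1, 1, 0, 1],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
```

A check like this would not fix the Stanford study’s narrow sample, but it would at least make the narrowness measurable.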

This is what Oren Etzioni argues in a New York Times article about the balance between danger and benefit when it comes to Artificial Intelligence specifically. As these technologies develop, we have to be prepared to regulate them as we regulate human beings and the other machines we use, like cars. “An A.I. system must be subject to the full gamut of laws that apply to its human operator…Simply put, ‘My A.I. did it’ should not excuse” behavior that is illegal or, in this case, problematic and potentially damaging.

An appalling example

There was something in the programming of these applications that caused them to identify black people as gorillas and to allegedly distinguish non-heterosexual people on sight. That “thing” did not get there by itself because these are human creations learning from human ideas.

One of the ways marginalized communities, like people of color, women, differently abled people, and people in the LGBTQ community, are kept down and disadvantaged is through the kinds of stereotypes the programs mentioned above appear to be based in. These stereotypes can also target whites and other individuals who constitute the sociological majority. “People perform poorly in situations where they feel they are being stereotyped,” according to researchers in Toronto. Imagine that not only your country or local community views you as some stereotype, or a number of them, but that machines and robots, the non-human programs that are supposed to be unbiased, view you that way too.

For many, the Internet is an escape, a place where it is easier to find like-minded people and get away from the stereotypes and biases that can do everything from garnering scowling looks to literally threatening your life.

Measuring the gender gap

In 2016, Richard Sharp wrote about the gender gap in the technology industry. “Women represent around 20 percent of engineering graduates, but just 11 percent of practicing software engineers,” he wrote. He went on: “Unconscious bias is one of the primary drivers of this disparity,” noting that many companies had been actively working to minimize and reverse potentially biased hiring practices. “It’s fair to say,” he argues, “that [their] machine learning algorithms need [the training] more.”

Sharp uses an example that shows how biases are already affecting theoretically unbiased Machine Learning algorithms: one study last year revealed that more men than women were shown advertisements promising to help them make more than $200,000 per year. “A gender bias was clear.” Ads associated with the word “arrest” were shown to more black than white users as well. These programs were not “trained,” “taught,” or even programmed to be biased. They may have been trained on an unrepresentative population set, which means programmers have to be extremely particular about the data sets they give these programs to learn from.
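One common mitigation, sketched below with made-up group counts rather than data from the studies cited, is to weight examples so an underrepresented group is not simply drowned out during training:

```python
# Sketch of rebalancing an unrepresentative sample: compute per-example
# weights so each group contributes equally to training, using the same
# heuristic scikit-learn applies for "balanced" class weights.
from collections import Counter

def balancing_weights(groups):
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

groups = ["men"] * 900 + ["women"] * 100      # hypothetical 90/10 split
weights = balancing_weights(groups)
print(weights[0], weights[-1])                # ~0.56 for the majority, 5.0 for the minority
# These weights could then be passed to a learner through its sample_weight argument.
```

Reweighting is not a cure-all; if a group is missing from the data entirely, no weighting scheme can conjure it back, which is why the choice of data set matters so much in the first place.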

Not all doom and gloom

The thing is, Machine Learning and Artificial Intelligence are not intrinsically bad technological advancements. They offer not only the opportunity to keep up with the rest of the world, benefiting our economy among other things, but also conveniences and access in ways we are just starting to be able to imagine.

Amazon Go is essentially a grab-and-go retail store: you do your shopping in the app ahead of time, walk into the store, grab your items, and walk out. Amazon is currently working on bringing this concept into the real world, fresh off the heels of purchasing Whole Foods earlier this year. It is not yet clear how the integrity of such a system will be monitored, but this type of technology is just one example of the revolutions we are going to see in our world.

The new iPhone is going further in security measures by utilizing facial recognition software that will make it even more difficult for someone who is not the phone’s owner to access the phone.

In Japan, taxi companies are elevating their drivers’ business by utilizing AI/ML in a way similar to, but exponentially more powerful than, a seasoned driver’s intuition. After years on the job, a driver has an intuitive understanding of where and when they can pick up the most riders. With Artificial Intelligence and Machine Learning, the “system predicts future taxi demand and directs drivers to high potential locations.”
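As a rough illustration only, with made-up trip records rather than anything from the actual Japanese systems, the core idea can be reduced to estimating historical demand per zone and hour and ranking the zones:

```python
# Toy sketch of demand-based dispatch: count historical pickups by zone for a
# given hour and rank the zones, so a driver can be pointed toward the
# locations with the highest expected demand.
from collections import Counter

# (zone, hour_of_day) for past pickups -- hypothetical records
past_pickups = [
    ("station", 8), ("station", 8), ("station", 9),
    ("harbor", 8), ("downtown", 8), ("downtown", 8), ("downtown", 9),
]

def best_zones(pickups, hour, top_k=2):
    """Rank zones by how many pickups they historically see at this hour."""
    demand = Counter(zone for zone, h in pickups if h == hour)
    return demand.most_common(top_k)

print(best_zones(past_pickups, hour=8))   # [('station', 2), ('downtown', 2)]
```

A real system learns from far richer signals than a simple count, but the principle of directing drivers toward predicted demand is the same.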

These advancements have the potential to benefit every aspect of our lives. These programs will add not just convenience but also access, increased security, increased earnings, and more.

That is why this is not an article calling for a cease and desist when it comes to developing AI and ML. We just need to make sure there is strong regularization going on as we test and improve these technologies; we have to make sure we are not over-fitting facial recognition programs, for example.
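For readers who want the technical sense of those terms, here is a minimal sketch on synthetic data, not a real facial-recognition system, of how a regularization penalty keeps a model from memorizing the quirks of a small training set:

```python
# Sketch of over-fitting and regularization: with few examples, many features,
# and labels that are pure noise, a weakly penalized model can "memorize" its
# training set; a strong L2 penalty (small C in scikit-learn) prevents that.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_train = rng.normal(size=(40, 100))
y_train = rng.integers(0, 2, size=40)
X_test = rng.normal(size=(1000, 100))
y_test = rng.integers(0, 2, size=1000)

for C in (1000.0, 0.01):              # weak penalty first, then a strong one
    m = LogisticRegression(C=C, max_iter=5000).fit(X_train, y_train)
    print(f"C={C}: train accuracy {m.score(X_train, y_train):.2f}, "
          f"test accuracy {m.score(X_test, y_test):.2f}")
```

Because the labels here are noise, the weakly penalized model scores near-perfectly on its own training data while doing no better than chance on held-out data; that gap is exactly the memorization worth guarding against.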

A technological solution: Deep Unsupervised Learning

There has been a recent breakthrough in the field of machine learning whereby software can infer, without guidance, the underlying structures in the data it sees. Unsupervised machine learning is not new; in fact, it is a newer name for statistical density estimation. The breakthrough comes from the “deep learning” aspect. There are key differences inherent to this technology that could eliminate the role of human-caused bias (a minimal sketch follows the list below):

  • The information learned from data is not shallow; deep structures unseen to the human eye may be found.
  • The learning does not occur at the hand of any human. The model is randomly initialized into some arbitrary state of knowledge and then learns on its own.
  • It is highly effective. It has made commercial robots a scientific possibility.
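Here is the minimal sketch promised above, assuming PyTorch and synthetic data rather than any system mentioned in this article: a tiny autoencoder starts from random weights and learns structure purely from reconstruction error, with no human-supplied labels in the loop.

```python
# Sketch of deep unsupervised learning: a small autoencoder discovers that
# 10-dimensional inputs actually live on a 2-dimensional surface, using only
# reconstruction error as its training signal (no labels, no human guidance).
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic data: points that truly lie on a 2-D plane, embedded in 10-D space.
data = torch.randn(512, 2) @ torch.randn(2, 10)

model = nn.Sequential(            # encoder 10 -> 2, decoder 2 -> 10
    nn.Linear(10, 4), nn.ReLU(),
    nn.Linear(4, 2),              # the learned low-dimensional "code"
    nn.Linear(2, 4), nn.ReLU(),
    nn.Linear(4, 10),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(1000):
    recon = model(data)
    loss = ((recon - data) ** 2).mean()   # reconstruction error is the only signal
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final reconstruction error:", round(loss.item(), 4))
```

The random initialization and the label-free training loop are what the second bullet above refers to.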

We are already seeing biases negatively affect marginalized communities as Machine Learning and Artificial Intelligence become bigger parts of our world. Amazon, for one, has faced backlash for not extending its Prime same-day delivery feature to predominantly minority neighborhoods, among other examples.