Sundance and SXSW 2020: Coded Bias: Big Brother is Everywhere
Algorithms. They sell your information to large corporations. They add your biometrics to a wide-scale surveillance system. They sell you more of what you like based on what you search on the internet. They use facial recognition to track what you buy and to control your speech. Think this is something out of a sci-fi film? Think again.
With its premiere at Sundance 2020 and its acceptance into SXSW 2020, writer/director Shalini Kantayya boldly, with an effortless feel, walks us through the consequences of the white, male gaze on the lives of women and minorities.
A Simple Experiment that Changed Big AI
What started out as a fun, fictional college project for Joy Buolamwini became a critical finding in coding. While working on a mirror that could overlay the faces of inspiring celebrities onto the faces of others, Buolamwini found that the programs used for this application couldn’t detect her face. When she put a generic white mask over her face, the algorithm found her immediately. She later discovered that the algorithms used to create Artificial Intelligence (AI) programs couldn’t accurately detect faces across race or classify gender.
This led Buolamwini down a path to change coding and algorithms permanently. AI was founded as a field in 1956 at Dartmouth College and, as usual, the only ones working on it were white males. No surprise, then, since humans program computers and AI, that white male bias was coded into it as well.
Buolamwini set out to research the major companies that create facial recognition software, from Amazon to Google, and what she found in all of them was disturbing. None of them could recognize faces with anything close to 100% accuracy, with the exception of white, male faces. Recognition scores for women, and especially for women of color, were markedly lower across all of them.
After Buolamwini’s research was released, IBM reached out to her and brought her in for internal research. They found that they did, in fact, have implicit bias in their algorithms, and they set about fixing it to make the software more accurate. This inspired Buolamwini to create the Algorithmic Justice League, her own group of tech specialists and mathematicians who work to keep AI developers accountable and their products equitable.
So How Does This Affect Me?
Algorithms are in almost everything today. That Google search button is a wonderful thing: it connects us to the world, and people use it to ask questions about everything. However, Google’s algorithms learn from everything you search. They send information back to you about your interests and sell products to you based on anything you’ve clicked. These algorithms hand over enormous amounts of our information to a myriad of companies, and to the government as well.
Big companies use algorithms in the hiring process. According to the film, Amazon attempted to use an algorithm in its hiring, and it was found to have excluded résumés containing anything female; as a result, only men were hired. Men also make up the majority of the company’s high-profile positions.
Mortgage-lending algorithms also have bias, so the majority of people of color and those with lower incomes face widespread discrimination. Only those who meet the criteria programmed into the algorithms get mortgages.
Algorithms also determine how much you pay for insurance and how much banks are willing to lend you in loans and credit cards. In 2019 it came out that the newly launched Apple Card had used discriminatory methods in deciding who would get the card and at what limit. Husbands were found to be getting higher limits than their wives with equal credit.
The United Kingdom has been using CCTV cameras all around its cities. According to the film, it recently started implementing facial recognition software and cameras in order to identify citizens and visitors under the guise of protecting the public from known criminals. However, with no regulation of any kind on AI, and given that no AI system to date is 100% accurate, this steps over the line of human rights and the right to privacy. Anyone who covered their face from the cameras was stopped by police and fined.
Dr. Cathy O’Neil, a mathematician and author of the book Weapons of Math Destruction, claims to have found that math is being used “as a shield for corrupt practices”. Algorithms use historical data to make predictions. She argues that the powerful own the codes and can manipulate them to suit themselves rather than to help the unknowing general population, which can neither protect itself nor turn those algorithms against the powerful. There is no appeal system and no accountability.
AI is also being used in the “justice” system. Courts have used algorithms to “predict” the behavior of inmates, and whether they get released or how long they stay on parole can depend on their “risk of re-offending”. But the questions asked in the data for these programs discriminate against inmates through factors like race, family involvement in the system and zip code. Research by ProPublica found that the algorithms used in these risk assessments were racially biased.
Not to mention the algorithms that social media data-analysis companies use, as in the Facebook/Cambridge Analytica scandal, which used improperly harvested information to try to change the way people voted in the 2016 U.S. Presidential Election. Manipulation by other countries to sway the 2020 U.S. election remains a great concern.
China
In China, a country known as “communist”, though it seems to lean more toward a dictatorship, mass surveillance is an everyday thing. One has to agree to information sharing just to get internet service.
Wang Jia Jia, a resident of Hangzhou, the capital of China’s Zhejiang Province, lives practically her entire life by her face. She can buy groceries, pay for almost everything she shops for and even buy a soda at a vending machine with her face.
China uses a “social score” based on behavior to keep people in line. It discourages people from speaking badly about the government and also determines one’s ability to use trains or planes. Your score also affects your family members’ scores, giving everyone pause about what they do and say.
Jia Jia claims that it makes everything more “convenient”. She talks about how, in deciding whether or not she wants to be friends with someone, the social score makes it easier to trust the person than relying on instinct would, and it also ensures that everyone behaves better. This kind of rationale is inherently dangerous, with a myriad of consequences, but what else would she say?
The mass protests in Hong Kong last year began in response to mainland China trying to implement more mass surveillance there under the guise of looking out for criminals, but the people weren’t having it. Hong Kong is run independently of mainland Chinese rules, and the people there wanted to keep it that way.
They used laser pointers to confuse the surveillance cameras, broke some and spray-painted others black. The government, of course, pushed back, sometimes violently, but protests are still happening there to this day.
The film points out that many people feel better that they don’t live in China, but emphasizes that mass surveillance is happening everywhere, the US included. It states that the only difference between China and other countries is that China is “transparent” about it.
In All Honesty…
The founding AI creators were all white men, and those ruling and controlling the codes today probably don’t look much different. This is some scary shit, though I’m not at ALL surprised it’s happening. If a one-sided coding system doesn’t bother you, it’s time to ask yourself why.
The truth is, privacy is a thing of the past. We can’t expect to have a global communication system with infinite options without some major flaws in its design. That said, it doesn’t make it right. Our basic civil liberties are being exploited and trampled on for money and power.
Kantayya effortlessly packs an enormous amount of information into 90 minutes at an engaging, fast-moving pace. When I say effortlessly, I don’t mean that she didn’t put in the effort; she clearly did, but she put it together so meticulously that it looks and feels effortless.
From the diversity of the people she interviews to the number of different ways she shows that Artificial Intelligence is used, she makes a strong and compelling argument that everyone needs to pay attention to. Not only do we need to be talking about this, but we, the people, need to be doing more about it. We cannot afford to live our lives in apathy and allow governments and the wealthy to control everything. It is literally a matter of life and death.
The story moves deftly from dilemma to dilemma without being overbearing or too judgmental. It presents information from well-credentialed experts in the field and lets the audience decide for itself what to do with it. The cinematography, visual effects and crisp editing make this documentary feel more like a television drama, which makes it easy and fun to watch.
But make no mistake, this information is real. It is compelling and it is frightening. The lack of control we have over our information nowadays is mind-blowing. Thanks to the groups of individuals who are fighting on our behalf for some oversight and accountability, we may have a chance before our lives turn into an Orwellian novel.
However, they can’t do it alone. There is strength in numbers, as evidenced by the Hong Kong protests, and governments know it. They depend on our apathy and compliance to continue their surveillance, and their silence about what they are truly doing is deafening.
People are going to jail because of algorithms. People are denied housing and insurance because of biased algorithms. Innocent people are being arrested because of inaccurate and bigoted algorithms. Elections all over the world are being bought through algorithms. The wealthy are staying wealthy because of the power of these algorithms. The film industry is signing on to using AI to choose which films do and do not get made. Women and other minorities already have a huge problem getting a film made in the racist and sexist film industry, let alone in one that depends on implicitly biased code.
The use of AI is taking away our right to think. It allows computers, which inherently flawed humans create, to think for us, yet code has no way to decipher what is ethical or just. Maybe that doesn’t matter to us when technology dangles in our faces the shiny new smartphones or the hot Alexa products we simply can’t live without. But we must take a hard look at what we are actually trading for this futuristic automation.
It is imperative that we see past what we want in order to keep a vigilant watch over those who control our technology and, with it, our very lives.