CRASHCOURSE EIGHT. INCLUSIVITY

About the crashcourses  

There are 10 online crashcourses. These crashcourses are linked to the Technology Impact Cycle Tool (tict.io). This free online tool, powered by Fontys University, helps you design, invent, deploy or use technology that makes a (positive) impact on society. The tool offers quick scans, improvement scans and full questionnaires. The tool consists of 10 different categories.

You get the most out of the tool when you are informed about the different categories and, even better, inspired by them. That is the goal of this crashcourse: to inform and inspire you on INCLUSIVITY so you are better equipped to assess the impact of technology. All crashcourses take one hour to complete.

About this crashcourse
This online crashcourse is number eight: inclusivity. The Technology Impact Cycle Tool contains a category on inclusivity. In this crashcourse we are going to explore why it is important for technology to be inclusive. We do that by exploring the relationship between technology and accessibility, gender and race. This course, like every course, has one mandatory assignment to help you understand. Throughout the course we offer all kinds of optional suggestions for further reading, watching and assignments for those who crave more!

The goal of this course is to educate you! To inform you! To inspire you! To entertain you! To dazzle you! To make you think! That is why we did not design a boring crashcourse for beginners, or a basic course with just theory. We cherry-picked our way through the topics. We do not strive to be complete, we strive to interest, inform and inspire you! If you think we failed you, no problem, we are open to improvements. Just send us an e-mail at: info@technofilosofie.com.

Some time management directions 
Again: it will take you approximately one hour to complete this course. This course consists of text, articles, videos and assignments. Every section lists reading time, viewing time and assignment time, so you can plan accordingly. If it takes longer than one hour, maybe that means you're slow, maybe it means we calculated poorly. You figure it out yourself.

ONLINE CRASHCOURSE EIGHT. INCLUSIVITY

This 60 minute online crashcourse consists of the following sections:

  1. Accessibility & living forever (13 minutes)
  2. Some theory on bias in technology (21 minutes)
  3. Racial Bias in technology (5 minutes)
  4. Gender Bias in technology (16 minutes)
  5. Diversity in your team (5 minutes)

ACCESSIBILITY & LIVING FOREVER

Reading time: 4 minutes / viewing time: 9 minutes

Do you want to be the Six Million Dollar Man? Do you even know who he is? Do you want to become a better human? Live longer? Be stronger? Be smarter? Let’s start with a 5 minute video on human enhancement.

And follow it up with a 4-minute video on transhumanism and – watch for it in the video – a very cool idea called utility fog.

Technology will enhance humans. The really interesting question, which is not asked in these videos, is: will everyone get access to it?

Suppose that in a few decades it becomes possible to use (bio)technology to become smarter, stronger and healthier, and to live a lot longer. Chances are that in the beginning this will be very expensive and only available to the very rich. But if the rich get stronger, older, healthier and smarter, maybe they no longer need the rest of us. This means that access to technology will be an important topic in the future.

Today accessibility is also very important. Technology has a tendency to make things very affordable and accessible. There are more people with access to a smartphone than there are people with access to a clean toilet. Technology is often cheap and it often becomes cheaper over time. However, it is still important to assess new technologies by their accessibility. There are a number of questions you can ask:

  • Does everyone have access to the technology? Does inequality between people increase because of your technology? For example, you build an app that is really helpful, but it is only available on expensive types of smartphones. Do old people have a smartphone? Or you start home schooling, but a lot of people do not have access to a laptop or a decent internet connection. It helps to think about the people who do not have access to your technology and how to mitigate or accept that;
  • Some people have disabilities that make it impossible for them to use a certain technology. Luckily, technology is very good at offering solutions for people with disabilities: from software that can read a website out loud, to prosthetic arms and brain-computer interfaces.

Further reading:

  • A video from biohacker Peter Joosten on longevity (living longer);
  • The book by Ray Kurzweil (a director of engineering at Google): Live Long Enough to Live Forever;

Key takeaways:

  • If you assess a technology it is important to look at the accessibility;
  • Unfair accessibility can increase inequality;
  • Technology can also make certain functionality more accessible;
  • When biotechnology and information technology keep merging and developing, accessibility could become one of the topics of the future.

BIAS IN TECHNOLOGY

Reading time: 2 minutes / Viewing time: 19 minutes

Okay, we all know humans are biased. Humans do not make rational decisions. Humans are the heroes of their own stories. Their memory is distorted. Humans will bend facts so they fit the way they view the world. Humans cannot be trusted. Not even really smart, objective humans such as judges. There have been studies that seem to prove that judges are more lenient just after lunch.

Hungry judges give harsher sentences.

So, you would think there is a lot of potential for objective computers. What if we let AI make decisions? What if we have artificial intelligence sentencing people? What if we have AI hiring people? Or deciding whether people get a loan? An insurance? Then things would become fairer. Right? Or not?

Watch this talk in which documentary maker Robin Hauser asks these questions (12 minutes).

In her talk Robin Hauser shows that the problem with artificial intelligence is often the data with which it is trained. She shows that the machines are not programmed with prejudice, but that a lack of diversity in the dataset leads to unwanted results. If you train a system with biased data, you get biased output.
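To make that point concrete, here is a minimal sketch in Python (all names and numbers are invented for illustration, and it assumes the numpy and scikit-learn libraries are installed) of how a model trained on biased historical hiring decisions simply reproduces that bias:

    # A model trained on biased historical data learns the bias.
    # All numbers and column meanings are invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Two inputs per applicant: a "skill" score and a group membership (0 or 1).
    skill = rng.normal(size=n)
    group = rng.integers(0, 2, size=n)

    # Historical hiring decisions: skill matters, but group 1 was also
    # systematically favoured, independent of skill (the bias in the data).
    hired = (skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

    # Train on the biased labels.
    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, hired)

    # Two applicants with identical skill get very different predictions.
    for g in (0, 1):
        p = model.predict_proba([[0.5, g]])[0, 1]
        print(f"group {g}: predicted probability of being hired = {p:.2f}")

Nobody programmed prejudice into this model; it simply learned the pattern that was already in the data.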

The next animated video (7 minutes) explains in more depth, and (beware) on a more technical level, how bias in an artificial intelligence network works:

Bias in AI is a big issue, now and in the future. That is why we think it is important to draw attention to it in this crashcourse. But there is also bias in other technologies. For example, a large pick-up truck can be biased because small people (often women) cannot drive it, because they cannot reach the pedals. We will see some examples in the next sections.


Key takeaways:

  • In a biased world, every AI you train is biased too;
  • Bias is hard to avoid in AI;
  • Bias is also present in other technologies.

RACIAL BIAS IN TECHNOLOGY

Reading time: 3 minutes / viewing time: 2 minutes

The best way, we think, to make you aware of racial bias in technology is to present you with a series of examples. There are different categories.

Technologies that are racially biased because the designers did not take different races into consideration
For example, an automatic soap dispenser that does not recognise black skin (video, 1 minute). Testing with a person with dark skin was obviously not considered by the design team.

Or the Nikon Coolpix S630, a camera that, when taking pictures of Asian faces, often asked: did someone blink?

Facial recognition software is increasingly being used in law enforcement – and is another potential source of both race and gender bias. In February 2018, Joy Buolamwini at the Massachusetts Institute of Technology found that three of the latest gender-recognition AIs, from IBM, Microsoft and the Chinese company Megvii, could correctly identify a person’s gender from a photograph 99 per cent of the time – but only for white men. For dark-skinned women, accuracy dropped to just 35 per cent. That increases the risk of false identification of women and minorities. Again, it is probably down to the data on which the algorithms are trained: if it contains far more white men than black women, it will be better at identifying white men. IBM quickly announced that it had retrained its system on a new data set, and Microsoft said it had taken steps to improve accuracy. Algorithms that cannot correctly identify people of different races are especially troubling when you think about, for example, self-driving cars making split-second decisions.
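One practical way to make this kind of gap visible is to never report a single overall accuracy number, but to break performance down per subgroup, as Buolamwini's study did. A minimal sketch in Python (the data, the group labels and the helper function are all made up for illustration):

    import numpy as np

    def accuracy_by_group(y_true, y_pred, groups):
        # Return a {group: accuracy} dict so gaps between groups become visible.
        y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
        return {
            g: float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)
        }

    # Tiny fake evaluation set: true labels, model predictions, group attribute.
    y_true = [1, 0, 1, 1, 0, 1, 0, 1]
    y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
    groups = ["lighter", "lighter", "lighter", "lighter",
              "darker", "darker", "darker", "darker"]

    print(accuracy_by_group(y_true, y_pred, groups))
    # Overall accuracy is 50%, but that single number hides a 100% vs 0% gap
    # between the two groups.

A single headline accuracy figure can look impressive while still failing one group almost completely.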

Technologies that use algorithms that are racially biased
This is a very complicated field. Sometimes the AI is racially biased because of the programming, sometimes it is because of the data on which the AI is trained. We give some prominent examples:

COMPAS is an algorithm widely used in the US to guide sentencing by predicting the likelihood of a criminal reoffending. In perhaps the most notorious case of AI prejudice, in May 2016 the US news organisation ProPublica reported that COMPAS is racially biased. According to the analysis, the system predicts that black defendants pose a higher risk of recidivism than they actually do, and the reverse for white defendants. Equivant, the company that developed the software, disputes that. It is hard to discern the truth, or where any bias might come from, because the algorithm is proprietary and so not open to scrutiny. But in any case, if a study published in January 2018 is anything to go by, when it comes to accurately predicting who is likely to reoffend, it is no better than random, untrained people on the internet.

And then we have PredPol. Already in use in several US states, PredPol is an algorithm designed to predict when and where crimes will take place, with the aim of helping to reduce human bias in policing. But in 2016, the Human Rights Data Analysis Group found that the software could lead police to unfairly target certain neighbourhoods. When researchers applied a simulation of PredPol’s algorithm to drug offences in Oakland, California, it repeatedly sent officers to neighbourhoods with a high proportion of people from racial minorities, regardless of the true crime rate in those areas. In response, PredPol’s CEO pointed out that drug-crime data does not meet the company’s objectivity threshold, and so in the real world the software is not used to predict drug crime, in order to avoid bias. Even so, in 2017 Suresh Venkatasubramanian of the University of Utah and his colleagues demonstrated that because the software learns from reports recorded by the police rather than actual crime rates, PredPol creates a “feedback loop” that can exacerbate racial biases.
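That feedback loop is easy to illustrate with a toy simulation. Here is a deliberately simplified sketch in Python (all numbers are invented, and it caricatures the policy as "send every patrol to the predicted hot spot"; it is not the real PredPol algorithm):

    # Toy simulation of a predictive-policing feedback loop.
    # All numbers are invented; this is NOT the real PredPol algorithm.

    true_crime_rate = [0.10, 0.10]   # two neighbourhoods with identical true rates
    recorded = [1.0, 2.0]            # a slight historical imbalance to start with
    patrols_per_day = 10

    for day in range(200):
        # Caricature of "patrol the predicted hot spot": all patrols go to the
        # neighbourhood with the most *recorded* crime so far.
        hot = 0 if recorded[0] >= recorded[1] else 1
        # Crime only enters the dataset where a patrol is present to record it.
        recorded[hot] += patrols_per_day * true_crime_rate[hot]

    print("recorded crime per neighbourhood:", recorded)
    # Output: [1.0, 202.0] -- the data now "shows" that almost all crime happens
    # in neighbourhood 1, even though the true crime rates never differed.

Because the algorithm only ever sees what the patrols record, the early imbalance in the data keeps reinforcing itself.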

Some other prominent examples are Google identifying black people as gorillas after an image search, or what about this 30-second video about Google's idea of three black versus three white teenagers:

Technologies that are coded in such a way that they can lead to racism
In crashcourse five (stakeholders & platforms) we talked about the fact that platforms are opinions or worldviews translated into code. So, for example, if Airbnb shows the ethnicity of someone who wants to rent a room, it could become harder for certain people to rent a room. There is a lot of research showing that it is harder for an African American person to rent a (popular) Airbnb. There is even a Noirbnb. This does not automatically mean that Airbnb is racist, but the way it is designed can have these results. Does an Uber driver see the ethnicity of their next customer? On kickstarter.com, do the best ideas get the most money? Or do the white men with ideas get the most money? On GoFundMe, does the best charity get the most money? Or does the money go to the little white girls with suburban parents?

These choices in technology, made by platforms that are used by millions or billions of people, cannot be taken lightly.

Oh yeah, and did you know that there were far fewer PokéStops in minority neighbourhoods? That is strange, because Holocaust memorial sites were complaining all the time about Pokémon popping up.

Technology Mistakes
And of course, there can be honest mistakes, but with a big impact.

For example, Google identified black people as gorillas when doing an image search, which was mostly painful. However, in October 2017, police in Israel arrested a Palestinian worker who had posted a picture of himself on Facebook posing by a bulldozer with the caption “attack them” in Hebrew. Only he hadn’t: the Arabic for “good morning” and “attack them” are very similar, and Facebook’s automatic translation software chose the wrong one. The man was questioned for several hours before someone spotted the mistake. Facebook was quick to apologise.

In the United States there is a term from the 1960s called redlining. It refers to the systematic denial of various services by federal government agencies, local governments and the private sector to residents of specific, most notably black, neighborhoods or communities. There is also something called digital redlining: the practice of creating and perpetuating inequities between already marginalized groups specifically through the use of digital technologies, digital content, and the internet.

Further suggestions:

  • Read about COMPAS and criminal reoffending;
  • Read more about PREDPOL and predictive policing;
  • Watch this Ted Talk from Joy Buolamwini, in which she talks about algorithms not recognizing different races (15 minutes).

Key takeaways:

  • Technology is often racially biased;
  • This can be by mistake, by ignorance or by design;
  • Large technology platforms can fuel racism even when they do not intend to do so;
  • Therefore being aware of the potential racial bias in a technology is very important.

GENDER BIAS IN TECHNOLOGY

Reading time: 1 minute / viewing time: 5 minutes / mandatory assignment time: 10 minutes

When Viagra was first researched, there were promising early results in treating period pain. However, when it became clear that there were other ways to use Viagra, the funding shifted. The world is not designed for women. The world is designed by men, for men. This is also true for a lot of technology.

Take this example on cars (5 minutes):

Mandatory assignment (10 minutes): Do a Google search and find 8 compelling examples of technology that was NOT designed for women and can have really serious consequences. The results can be entered in this PowerPoint template.

Just like racial bias, gender bias is best explained through examples. So go look for them yourself, do the assignment, and I will provide you with some examples of my own.

Further suggestions:

  • The book Invisible Women by Caroline Criado Perez;
  • A more elaborate video by Caroline Criado Perez (12 minutes);
  • A piece by Cathy O’Neil on Bloomberg about gender-biased algorithms.

Key takeaways:

  • Technology is often gender biased;
  • This can be by mistake, by ignorance or by design;
  • AI in particular can be gender biased, because it is trained with data from a gender-biased time;
  • Therefore being aware of the potential gender bias in a technology is very important.

DIVERSITY IN TEAMS

Reading time: 2 minutes / viewing time: 3 minutes

Remember the video I made in Crashcourse One? The video about Nerds in Paradise? This video had a simple message: programmers are making software that solves their own problems! They are trying to change the world into a world they want to live in! And if these programmers, designers and inventors are mainly white. And male. And geeky. Then chances are that we get a world that is designed for men like that!

That is why we need more diverse teams to design technology. This is still a big problem if you look at the numbers. Technology is still a white, male-dominated industry. However, more and more companies are aware that having a diverse team also means that you can drive innovation, find more customers and therefore make a larger profit. This video explains this with some examples (3 minutes):


Key takeaways:

  • To make diverse, inclusive technology you need diverse, inclusive teams!

A VERY SHORT SUMMARY OF THIS CRASHCOURSE

Congratulations. You have completed crashcourse number eight, so you got a very small taste of thinking about technology and the importance of inclusivity, especially in relation to advanced AI. An appetizer, if you want. Maybe you did some further reading, so you started on the soup. Good for you. Remember: inclusivity is important. We like our companies and technology to be inclusive. Technology often plays a very positive role: it makes services cheaper and more accessible. On the other hand, technology often has undesired effects. Design teams are often not diverse enough to be aware of racial or gender bias. Advanced AI is often trained on data from a biased past, and a lot of structures in society are still biased. Technology can, when not carefully designed, fuel racism or enlarge differences. That is why being aware of the impact of a technology on different kinds of people is so important.