CRASHCOURSE SEVEN. TRANSPARENCY & AI

About the crashcourses  

There are 10 online crashcourses. These crashcourses are linked to the Technology Impact Cycle Tool (tict.io). This free online tool, powered by Fontys University, helps you design, invent, deploy or use technology that makes a (positive) impact on society. The tool offers quick scans, improvement scans and full questionnaires. The tool consists of 10 different categories.

You get the most out of the tool when you are informed about the different categories and, even better, inspired by them. That is the goal of this crashcourse: to inform and inspire you on TRANSPARENCY, so you are better equipped to assess the impact of technology. All crashcourses take exactly one hour to complete.

About this crashcourse
This online crashcourse is number seven: transparency & AI. The Technology Impact Cycle Tool contains a category on transparency. In this crashcourse we are going to explore why it is important for technology to be transparent. We do that by concentrating on AI: Artificial Intelligence. The reason for this is partly that we like to talk about AI (AI is cool right now), but also that new AI technology is often opaque (a black box) and its results are hard to explain. So AI makes a perfect case for the importance of transparency. This course, like every course, has one mandatory assignment to help you understand. During the course we will offer all kinds of optional suggestions for further reading, watching and assignments for those who crave more!

The goal of this course is to educate you! To inform you! To inspire you! To entertain you! To dazzle you! To make you think! That is why we did not design a boring crashcourse for beginners, or a basic course with just theory. We cherry-picked our way through the topics. We do not want to be complete, we want to interest, inform and inspire you! If you think we failed you, no problem, we are open to improvements. Just send us an e-mail at: info@technofilosofie.com.

Some time management directions 
Again: it will take you approximately one hour to complete this course. This course consists of text, articles, videos and assignments. Every section lists reading time, viewing time and assignment time, so you can plan accordingly. If it takes longer than one hour, maybe that means you're slow, maybe it means we calculated poorly. You figure it out yourself.

ONLINE CRASHCOURSE SEVEN. TRANSPARENCY (& AI)

This 60 minute online crashcourse consists of the following sections:

  1. About transparency in technology (3 minutes);
  2. An introduction to AI (16 minutes);
  3. The Black Box of AI (14 minutes);
  4. Towards transparency (26 minutes);
  5. In God we Trust (a mindblowing 1 minute read).

ABOUT TRANSPARENCY IN TECHNOLOGY

Reading time: 3 minutes

We like our technologies and the companies that make our technologies to be transparent. We like it if it is clear how things work. There are a lot of ways that technology and the companies that make our technology can be transparent. Let’s look at some examples:

Shortcomings. We like to know beforehand if a technology has shortcomings. Of course, if we read manuals, we read about the shortcomings of technology. This often has a legal reason. For example, if you buy a floating device (let's say a pink flamingo) the manual will probably say that it is not a life-saving device and that you should not use the inflatable pink flamingo to cross the Atlantic. It's Instagrammable, but not advisable. But do we see the same 'honesty' with our digital technology? Does Fitbit tell you that its sleep tracking is not really accurate? Or the way it counts steps? Does your smart brush tell you that it is not optimized for your teeth? Is a weather app honest about the trustworthiness of its rain radar?

Do you think digital technology is transparent about its shortcomings?

Business model. We like to know beforehand which business model a company uses. How does the company make money? If you buy a smart vacuum cleaner, you may think that the company makes a profit on the 500 dollars you paid for the product. True. But wouldn't you also like to know that the company makes money by collecting data on your living room layout, your furniture and your usage of the vacuum cleaner? Do you want to know when or why you are seeing ads, or newsfeeds, or posts? And who is paying for them?

Do you think technology companies are transparent about their business model?

Communication. We like it if it is easy to communicate with the company that makes a technology. Can you ask questions? Is it easy to find more information? Can you file a complaint? Maybe call someone? A helpdesk? A lot of Big Tech companies are unreachable even for governments, let alone for private individuals. If you are banned from Twitter, who are you going to call?

How does it work? We like it if the company is transparent about how the technology works or is programmed. Why do you see that certain post on Facebook? Why is Tinder showing you that girl or boy? Why is Uber sending you that driver? And so on. In crashcourse five we saw that platforms are often a worldview translated into code. Do we use our technology or does our technology use us? Why does Booking.com tell me that 6 people are watching the same room? Is that really true? We like our technology to be honest and to be transparent about the way it operates. Often that is not the case.

The last question – how does it work – is especially important in artificial intelligence and will only become more prominent in the future. Does Google still know why YouTube is recommending certain videos to you? Does Facebook understand its own algorithms that select your newsfeed? What if advanced artificial intelligence becomes more and more a black box, too complicated to understand? What if in a few years AI decides that you are not fit for a job? Or for an education? Or for an insurance? What if the computer says NO or YES, but nobody knows why? What if new AI systems need to be opaque to be accurate? What if transparency leads to worse results?

Those questions are very important and we will come back to them in the next sections of this course, but first we turn to an introduction to AI and machine learning.

AN INTRODUCTION TO AI

Reading time: 4 minutes / viewing time: 11 minutes

Artificial Intelligence. AI. Neural networks. Deep learning. Machine learning. You hear people talking about AI all the time, but AI has been around since the early 1950s. First, let's look at the history of AI in a short, animated video (3 minutes).

So, artificial intelligence has been around since the 1950s. That is a long time. A calculator is artificial intelligence. The chess computer that defeated Garry Kasparov in 1997 (Deep Blue) is artificial intelligence. However, most people do not consider their calculator artificial intelligence. That is probably because a calculator can't learn. The same goes for a chess computer from 1997 that cleverly calculates its next moves, but cannot learn from its mistakes. These computers can be very powerful and smart, but they basically run a traditional algorithm.

Watch this video explaining algorithms (1 minute).

The important development of recent years is therefore that we (I say 'we' but of course I had nothing to do with it) have figured out how to program using neural networks. We have learned how to program machines that can learn for themselves. That is why a lot of people enthusiastically state that the AI winter is over.

Let’s watch a short video on the reasons the winter is over (2 minutes).

It is important that you understand how machine learning works, because that is key to the wave of the future. The video below gives a great explanation of the concept of machine learning.

Machine learning is of course way more complicated than that, but that is not very important to a technophilosopher. The central idea is that computers can learn. If you define a certain desired output and make sure there is enough input (training data), the computer starts to learn from its mistakes and gets better.
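
To make that central idea concrete, below is a minimal sketch of the learn-from-your-mistakes loop in Python. It is a toy we made up for this course (one adjustable number, four made-up training examples), not how real systems are built, but the loop is the same in spirit: guess, measure the error, adjust, repeat.

```python
# A toy model that learns the rule y = 2x from examples alone.
# Training data: inputs paired with the desired output.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]

weight = 0.0          # the model's single adjustable parameter
learning_rate = 0.01  # how big each correction step is

for step in range(1000):          # many passes over the data
    for x, desired in examples:
        prediction = weight * x               # the model's guess
        error = prediction - desired          # how wrong the guess was
        weight -= learning_rate * error * x   # learn from the mistake

print(f"learned weight: {weight:.3f}")  # ends up close to 2.0
```

Nobody told the program that the answer was 'times two'; it got there purely by being corrected, which is all 'learning' means here.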

To get an impression of how impressive this is, let's take a closer look at the aforementioned example from the chess world. In 1997, Deep Blue, IBM's chess computer, beat chess grandmaster Garry Kasparov. But that was the only thing Deep Blue could do. You could have easily beaten it at tic-tac-toe. 1997 was followed by a short period in which artificial and human intelligence worked together and played against each other. We called those teams centaurs. It is often used as an example of how robots and humans can work together; often the term cobots is used. However, what is often not told is that centaurs hardly exist anymore, because the computers have become so good that people have become irrelevant. So the chess computers play against each other and people hardly play a role anymore (a bit like Formula 1). A few years ago the best chess computer in the world was Stockfish 8. It was very powerful, but still classically programmed by humans. Just like Deep Blue, it was mainly a computer with a lot of computing power, so it could calculate a lot of scenarios very quickly and choose the best move.

But then this computer played against a system that was neurally programmed, namely AlphaZero. This system, a machine learning system, had never played chess before, but knew the desired output, namely: winning. So the system was trained by playing millions of chess games against itself, and within 4 hours of training it managed to beat Stockfish 8 in a 100-game match without losing a single game (28 wins, 72 draws).

That’s impressive.

Below is a short example (1 minute) of an AI system playing Breakout and finding a solution the programmers of the system had never thought of.

In these exponential times, AI gets more powerful really fast. There are two types of artificial intelligence: weak AI, which is good at one thing, like driving a car or playing chess or recommending things on Netflix or recognizing your voice commands, and strong AI, which can do many things. Weak AI is everywhere; strong AI is nowhere to be found. Yes, there are some AI systems that can play a multitude of arcade games, but that is still a long way from the humanlike robots we know from the movies.

Still, there are people who believe that one day (soon) AI will become more intelligent than humans and will then build its own AI, and then we are truly fucked or blessed, depending on what you believe. On the other hand, we can also find enough reasons to believe another AI winter is coming. If you are interested and want to know more, check the further suggestions section.

Further suggestions:

Key Takeaways:

  • Since machine learning has taken off in the field of AI, progress has been impressive;
  • Especially weak AI, which can do one thing really well, is popping up everywhere;
  • Strong AI is rare and not so strong;
  • Predicting is hard, especially the future.

THE BLACK BOX OF AI

Reading time: 5 minutes / viewing time: 9 minutes

In this section we take a closer look at how machine learning works, and what it means for transparency.

As we have already stated, AI is in a lot of things. All around us are already (advanced) forms of artificial intelligence (plain old algorithms, machine learning, deep learning, neural networks). Think of everyday applications such as Google Search (and Maps), the newsfeed on Facebook, the trending topics on Twitter, the recommendations on YouTube and the surveillance cameras at Schiphol.

A good example is Tinder. It uses – among other things – a technique called clustering. Tinder used to work with an elo score (nowadays they are a bit more fuzzy about how the algorithm works). Elo score is a term that originates from the chess world. A little bit nerdy, but Tinder was not invented by people who did well in the pub. If you swipe someone with a higher score and you are swiped back, your score goes up, and vice versa. In this way you are slowly assigned to the people with whom you have the greatest chance of a match. However, it also means that if you only see ugly people on Tinder, those are the people who find you attractive!
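
For the curious, here is a minimal sketch of an Elo-style update in Python. The formula is the standard one from chess; the ratings and the K-factor are illustrative assumptions, not Tinder's actual parameters.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Chance that A 'wins' (gets swiped right), given both ratings."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, a_won: bool, k: float = 32) -> float:
    """Return A's new rating after one 'match' against B."""
    actual = 1.0 if a_won else 0.0
    return rating_a + k * (actual - expected_score(rating_a, rating_b))

# You (rated 1200) are swiped right by someone rated 1600: that beats
# expectations, so your rating jumps considerably.
print(update(1200, 1600, a_won=True))   # ~1229
```

Being liked by someone far above you moves your score a lot; being liked by someone below you barely moves it. That is the mechanism behind slowly being assigned to the people most likely to match with you.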

The most interesting development, as we have said a few times before, concerns so-called neural networks. They consist of three kinds of layers:

  • An input layer;
  • One or more hidden layers;
  • An output layer.

You call such a network an artificial neural network because it is (somewhat) built like our brains. You have synapses and neurons and feedback and training. There are many variants and there is a lot of technology behind it, but we will try to explain it as simply as possible. Suppose you want to "train" such a network to recognize cats and dogs. Then you label millions of pictures with 'dog' or 'cat' and feed them to the network. The network will then "look" at the pixels. Do I see whiskers? With what probability? Do I see a coat? Green eyes? And thousands of other factors, and the network comes up with all kinds of factors itself, and then a conclusion (yes, a cat!). This conclusion is then validated, and that is how the network learns. After some time and lots of pictures, it is almost perfect at recognizing cats and dogs.
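
To make the three kinds of layers concrete, here is a structural sketch in Python: a single forward pass through a tiny network with random, untrained weights. The sizes are made up; real networks have millions of weights, learned from the labelled pictures described above.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_hidden, n_outputs = 4, 3, 2       # e.g. 4 pixel features -> cat/dog

W1 = rng.normal(size=(n_inputs, n_hidden))    # input layer -> hidden layer
W2 = rng.normal(size=(n_hidden, n_outputs))   # hidden layer -> output layer

def forward(x: np.ndarray) -> np.ndarray:
    hidden = np.tanh(x @ W1)        # the hidden layer: intermediate 'factors'
    logits = hidden @ W2            # the output layer: one score per class
    return np.exp(logits) / np.exp(logits).sum()  # softmax -> probabilities

x = np.array([0.9, 0.1, 0.4, 0.7])  # pretend these are pixel features
print(forward(x))                   # e.g. [P(cat), P(dog)], nonsense until trained
```

Training means nudging the numbers in W1 and W2 after every labelled picture until the output stops being nonsense. All the "reasoning" lives in those numbers.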

And it can do it really fast (check this short video below!)

There are a few issues to note. First, thank God, I didn't have to show my daughter millions of pictures of dogs and cats. She could recognize cats and dogs much faster. So does this kind of AI really work like our brain? Second, you need input. In other words, training data. The disadvantage of this data is that it often comes from the past and can therefore be biased. Simply put: if you want to create a new future, can you do that with a system trained on data from the past? Of course there are also many fantastic, wonderful applications in which prejudices do not apply or are less relevant. For example skin cancer detection.

The above was quite interesting, don't you agree, but artificial neural networks get really interesting when we focus on the hidden layers. I always find it intriguing that these layers are actually called "hidden layers". Not descriptive layers or definition layers, no: hidden. An artificial neural network is not secretive about the fact that something mysterious is going on.

The best way to explain the importance of the hidden layer is by the example of the Gaydar. The idea of a Gaydar (Gay Radar) is that gay men are better able to recognize other gay men than heterosexual men are. But if you test this by showing photos, the Gaydar turns out to be a myth. But maybe you can train an AI to do it.

First watch this creepy video (6 minutes):

Michal Kosinski, a data scientist, was the first to build a Gaydar AI. He used "cheap" AI, got a long way and wrote a warning paper about it, which caused a lot of unrest. In recent years his conclusions have come under scrutiny, but the idea is clear. The most disturbing thing about the Gaydar is that you cannot explain HOW the system does it. You can validate the outcomes and conclude THAT the system can do it, but if you 'open' the black box, the only thing you see is a spaghetti of millions of weighted connections that cannot be unraveled.

Suppose there were a law (like the GDPR) that said an AI system should always be able to explain itself, to be transparent. Then this particular system would have to be programmed with explicit rules: if the person has a moustache, the gay factor increases by this much, et cetera. The result would be that the system would become unreliable again. So there is a trade-off between accuracy and transparency.
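
You can see that spaghetti for yourself. The sketch below trains a small neural network with scikit-learn and then 'opens' the black box; the dataset (a built-in medical one) and the network sizes are stand-ins we chose for illustration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)
net = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=2000, random_state=0)
net.fit(scaler.transform(X_train), y_train)

print("accuracy:", net.score(scaler.transform(X_test), y_test))  # quite accurate...
print("weighted connections:", sum(w.size for w in net.coefs_))  # ...thousands of them
print(net.coefs_[0][:2, :5])  # a tiny corner of the spaghetti: just numbers
```

The accuracy is high, but none of those thousands of numbers tells you WHY a particular case was classified the way it was. That is the trade-off in a dozen lines.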

Neural Networks like to move in mysterious ways.

Finally, watch this video on the most powerful computer ever, which did not explain itself (3 minutes):

Further suggestions:

  • Life 3.0, THE book on AI by Max Tegmark;
  • Nick Bostrom talking (15 minutes) about the end of civilization;

Key Takeaways:

  • The most interesting development in AI is neural networks;
  • These networks need to be opaque (a black box/not transparent) to be accurate;

TOWARDS TRANSPARENCY

Reading time: 3 minutes / viewing time: 13 minutes / assignment: 10 minutes

Okay, so in this crashcourse we have stated a few things. One, we want our technology and our technology companies to be transparent. Two, advanced AI functions best if it is not transparent. So what now? Why do we care about transparency if the AI can do fantastic things? Why is that a problem? Well, there are a few reasons:

  • First of all, we humans like an explanation. If you are rejected for a job because the computer said no, you would like to know why, also because it helps you adjust. Just 'no' feels unfair, and maybe is unfair;
  • Second, if a system is 99% right, it is still very unfair for the 1% (see the quick calculation below this list);
  • Third, AI often gets it wrong.
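
To feel how big that 1% can be, here is a quick back-of-the-envelope calculation; the number of applicants is made up for the example.

```python
# Even a 99%-accurate system leaves a lot of people with a wrong decision.
applicants = 200_000   # hypothetical number of yearly applicants
accuracy = 0.99

wrong_decisions = applicants * (1 - accuracy)
print(f"{wrong_decisions:,.0f} people get the wrong decision")  # 2,000 people
```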

An illustration of AI getting it wrong can be found in the fun video below (11 minutes):

Okay, so to TRUST decisions made by AI we need to be able to EXPLAIN these decisions. This is called Explainable AI.

Now watch this video (3 minutes) on what explainable AI is:

There are four important things to notice in this video:

  • 'Simple' AI models (things like decision trees) can easily be explained (see the sketch below this list);
  • But complex AI (neural networks, the things that can do the cool stuff) is hard to explain;
  • The solution being researched hints at building AI that can explain AI. Right, but who is going to explain the AI that explains the AI?
  • There should always be a human in the loop. This sounds good, but also scary. I prefer the statement: there should always be a computer in the loop.
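
As promised below the list: a minimal sketch of why a 'simple' model is easy to explain. With scikit-learn you can print a small decision tree's entire reasoning as plain if/then rules; the built-in iris dataset stands in for a real decision problem.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# The model's whole 'thought process' fits in a few readable lines:
print(export_text(tree, feature_names=list(iris.feature_names)))
```

Try doing that with the thousands of weights from the previous section; that contrast is the whole explainability problem.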

The need for explainable, transparent AI depends very much on the scope of the AI. If an app's AI can determine whether people in rural India with no easy access to a doctor have an eye disease, then 95% accuracy without explanation is a blessing. If you apply for a job and a black box AI determines whether you are accepted, then 95% is a blessing for the employer, but can be a disaster for the applicant who is part of the 5%. And if an opaque AI determines your sentence, then it becomes really sketchy.

But what do you think?

Mandatory assignment (10 minutes). Open the template (PowerPoint) and read the case for a black box AI with a lot of benefits. What do you think? Should the city of Eindhoven implement the AI as proposed?

Further suggestions:

  • A PDF on automated decision systems and how to mitigate the risks (8 pages);
  • The website of the AI Now research institute, which works hard to help realize AI for good;

Key Takeaways:

  • AI tends to make mistakes;
  • AI that is 99% right still can be very troublesome;
  • To trust AI, it should become transparent and be able to explain itself, but that is easier said than done.

IN GOD WE TRUST

Reading time: one full minute

In this whole crashcourse we talked about transparency, AI and how AI needs to be transparent and explainable so we can trust it. But is this really true? Do explainability and transparency really lead to more trust? What do you think? Do you trust the recommendations by Netflix? Do you think you would trust them more if Netflix explained them? Do you trust the newsfeed in Instagram? Or do you need an explanation? And, do you trust God?

God moves in mysterious ways. He does not explain his decisions. He does not tell us why he took our loved ones. He does not explain why famine or war or suffering or cruelty is necessary. He is opaque. God is a black box, but maybe that is the reason that so many people trust him.

In God they trust.

So, maybe technology should move in mysterious ways to be trusted!

Further suggestions:

  • All 66 books of the Bible in one sentence.
  • A video of ten minutes with the common misconceptions on AI.

A VERY SHORT SUMMARY OF THIS CRASHCOURSE

Congratulations. You have completed crashcourse number seven, so you have had a very small taste of thinking about technology and the importance of transparency, especially in relation to advanced AI. An appetizer, if you will. Maybe you did some further reading, so you started on the soup. Good for you. Remember: transparency is important. We like our companies and technology to be transparent. However, we also like our technologies to work properly. With advanced AI we are entering new territory: can technology work properly and be transparent? Can we trust opaque technology to make important decisions, or does technology perhaps need to be a black box for us to trust it?