Why ethical A.I. is nonsense

It is well known that systems built with A.I. technology often show unethical behavior. They can be racist, sexist, or otherwise biased. The problem is not the algorithms: they are just applied mathematics and therefore quite incorruptible. But the models built with these algorithms are trained on data created by humans, and this is where the bias comes from. Humans often behave unethically, and it is therefore not surprising that A.I. shows corresponding behavior. If you train a natural-language-processing A.I. on a text corpus that contains racist views, the resulting A.I. will obviously become racist too. Responding to this challenge, and trying to save A.I. technology, scientists and engineers have started to develop various methods to "debias" A.I. models. I believe the systems resulting from such efforts can be dangerous, and this article is about why.

A well-studied example of how bias gets into A.I. models is the so-called "word embedding". To be able to process language, words need to be translated into vectors of real numbers. The algorithms used to achieve this (such as "word2vec") need large amounts of text data to train on. It was shown years ago that these word embeddings often encode racial and gender stereotypes. It has also been shown that NLP systems built on such word embeddings inherit this problematic behavior (e.g., CV-analysis software used in recruiting that systematically favored men over women).
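
You can see such stereotypes directly by querying pretrained embeddings. The following is a minimal sketch using the gensim library and its downloadable GloVe vectors; the exact neighbors returned depend on the corpus the vectors were trained on.

    import gensim.downloader as api

    # Download and load pretrained GloVe vectors (Wikipedia + Gigaword).
    model = api.load("glove-wiki-gigaword-100")

    # Analogy query: "man is to doctor as woman is to ...?"
    # With many pretrained embeddings, the top answers reflect
    # occupational gender stereotypes (e.g. "nurse").
    print(model.most_similar(positive=["doctor", "woman"],
                             negative=["man"], topn=3))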

Scientists have had many ideas about how to fix such broken word embeddings. Some methods are purely mathematical (e.g., using linear algebra to remove a "bias direction" from the vector space). Other algorithms try to identify the parts of the training data that cause the bias and remove or modify them.
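
To make the linear-algebra approach concrete, here is a minimal sketch of the "neutralize" idea behind hard debiasing (Bolukbasi et al., 2016). It assumes `vecs` is a word-to-vector mapping, such as the gensim vectors loaded in the previous sketch:

    import numpy as np

    vecs = model  # e.g. the gensim vectors loaded above

    def gender_direction(vecs):
        # Estimate a gender direction from definitionally gendered pairs.
        diffs = [vecs["he"] - vecs["she"], vecs["man"] - vecs["woman"]]
        d = np.mean(diffs, axis=0)
        return d / np.linalg.norm(d)

    def neutralize(v, d):
        # Remove the component of v that lies along the bias direction d.
        return v - np.dot(v, d) * d

    d = gender_direction(vecs)
    engineer_debiased = neutralize(vecs["engineer"], d)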

But there are fundamental problems with these methods:

  1. They might work only superficially
    Some methods seemed to work in the sense that they reduced an abstract bias metric. But it was later shown that the bias was actually still present in the word embeddings and could still negatively influence A.I. systems built on them (a toy illustration follows this list). This is dangerous because it gives us a false sense of security, and there is no guarantee that the same thing will not happen again with more advanced future methods.
  2. They fix only what we are aware of
    Even if word embeddings could be debiased with respect to gender, they might still be evil in many other ways. Unfortunately, there are myriad ways in which humans can be evil. A large enough language corpus contains the full spectrum of human behavior and opinions; there is simply too much possible error for debiasing algorithms to cover.
  3. They leave us at the mercy of experts
    Such debiasing methods are mathematically challenging and fully understood by only a handful of experts. Ordinary people cannot assess their properties and quality, so no public discussion about their performance and problems is possible.
  4. They lead to unrealistic expectations
    If we expect A.I. to be ethically infallible, we start to believe in a false god. We start to delegate ethical decisions to engineers and machines, which can, I believe, only lead to disaster.
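
The first point can be illustrated with a toy experiment in the spirit of Gonen and Goldberg (2019). Continuing the sketches above, it checks whether stereotypically gendered words still cluster by gender after the gender direction has been projected out; the word list is just a hypothetical sample:

    from sklearn.cluster import KMeans

    # Words carrying occupational gender stereotypes (hypothetical sample).
    words = ["nurse", "receptionist", "dancer",
             "engineer", "carpenter", "boss"]

    d = gender_direction(vecs)
    debiased = [neutralize(vecs[w], d) for w in words]

    # If debiasing were complete, a 2-means clustering of the debiased
    # vectors should be unrelated to gender. In practice the clusters
    # often still split along stereotype lines, because the bias is
    # also encoded in the remaining dimensions.
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(debiased)
    print(dict(zip(words, labels)))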

I suggest a radically different solution to the problem of A.I. bias:

  1. Give each A.I. an individual "face"
    When we meet people, we can "calculate" some statistics about who they might be from their appearance and their behavior. We unconsciously know how people's history and background can influence the way they act and think. We must allow our intuition to work for A.I. too. This means we need laws that force companies to disclose what kind of data their A.I. was trained on. Then we can guess what kind of A.I. "personality" we are dealing with. If an A.I. was trained on classical literature, we can expect it to be racist ("Robinson Crusoe") and ignorant about the dangers of smoking.
  2. Teach everybody to never fully trust an A.I.
    We already teach children not to trust strangers. We should teach them the same about A.I. from an early age. A.I. cannot be better than we are, because it is trained on our past. Therefore A.I. will always lag behind us.
  3. Never let A.I. make certain decisions without human supervision
    This follows from point 2. We don't let the company accountant handle all the accounts alone, without auditing. We know that as a human being he might be tempted to siphon off some money, and because of this we implement control structures. We should do the same with A.I. that makes critical decisions. An A.I. should never decide alone which job applicant gets invited to an interview. Such applications of A.I. must be regulated by law, just as accounting is regulated today.
  4. Make use of A.I. to become more aware
    The word embeddings described above can also be used to study and demonstrate the inherent racism (for instance) of our society; this is a growing area of research. I believe this is a big opportunity A.I. offers us: A.I. could actually show us who we really are.
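
As a final sketch, here is the basic building block of such studies, in the spirit of the word-embedding association test of Caliskan et al. (2017): measuring whether a word sits closer to one attribute set than to another. The attribute and target words are again just hypothetical examples, and `vecs` is the word-to-vector mapping from the earlier sketches:

    import numpy as np

    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def association(word, attrs_a, attrs_b, vecs):
        # Mean-similarity difference: positive values mean `word` is
        # closer to attribute set A than to attribute set B.
        sim_a = np.mean([cos(vecs[word], vecs[a]) for a in attrs_a])
        sim_b = np.mean([cos(vecs[word], vecs[b]) for b in attrs_b])
        return sim_a - sim_b

    female, male = ["she", "woman", "her"], ["he", "man", "his"]
    for job in ["nurse", "engineer", "teacher", "programmer"]:
        print(job, association(job, female, male, vecs))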

A.I. will soon make us humans very powerful. We had better prepare for this soon, because it should be us who control the machines, not the other way round.

We should not try to create gods so that we can remain fools. We should strive instead to improve ourselves.

(You can play with word embeddings on my website, dissected.it.)

Image: Shutterstock / Nadja Snopek


Follow me on Twitter to get informed about new content on this blog.

I don't like paywalled content, so I have made the content of my blog freely available to everyone. But I would still love to invest much more time into this blog, which means I need some income from writing. So if you would like to read articles from me more often, and if you can afford $2 a month, please consider supporting me via Patreon. Every contribution motivates me!