In business, we know that unconscious bias makes teams less effective. This issue becomes amplified when you decide to bring AI into your ways of working. If your biases fundamentally affect the outcome of automated intelligence projects, then you translate a human problem into a technological flaw.
That’s why it’s so important to integrate accountability into AI, so that decisions are transparent and traceable. You can’t fix the problem if you can’t find it. Still, it’s worth asking: how do you prevent these problems in the first place?
Garbage in, garbage out
Any parent knows that when you’re raising a child, their behaviour is a reflection of their surroundings, influences and experiences, good or bad. AI developers are also acutely aware of this fact, given that AI can provide a mirror image of our worst tendencies. When these prejudices come to light, they make for sensational headlines:
‘Amazon fired its resume-reading AI for sexism’ – Popular Mechanics
Last year, Amazon scrapped its recruitment AI after discovering it was biased against women. The historical data used to train the AI consisted mostly of CVs from male applicants. Because Amazon had predominantly hired men in the past, the system learned to treat markers of those applicants, such as certain hobbies and attendance at male-predominant universities, as positive signals. It even penalised applicants who used the word ‘woman’ in their CV.
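To see how skewed history alone can produce a score that penalises a word, here is a toy sketch. The data and scoring rule are entirely hypothetical (this is not Amazon's system): each word is scored by the hire rate of the CVs it appears in.

```python
from collections import defaultdict

# Hypothetical historical CVs, labelled by whether the candidate was hired.
# Male-coded CVs dominate the "hired" class, mirroring a skewed history.
history = [
    ("captain chess club", True),
    ("football team captain", True),
    ("chess club member", True),
    ("women's chess club captain", False),
    ("women's football team", False),
]

# Naive scoring rule: a word's score is the hire rate of CVs containing it.
seen = defaultdict(lambda: [0, 0])  # word -> [times hired, times seen]
for text, hired in history:
    for word in set(text.split()):
        seen[word][1] += 1
        if hired:
            seen[word][0] += 1

score = {word: hired / total for word, (hired, total) in seen.items()}
```

With this data, `score["women's"]` comes out as 0.0 while `score["captain"]` is about 0.67: the word says nothing about ability, but because it only appears in CVs the biased history rejected, the model learns to punish it.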
The solution: tooling
Amazon is an example of when artificial intelligence algorithms are given biased data with which to work. To avoid this and empower teams to reduce bias, companies like IBM, Facebook and Microsoft are all developing tools to ‘weed out’ AI bias. Tools show engineers where there may be potential issues, empowering more informed decision-making.
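One common check such tools report is the disparate impact ratio: the rate of favourable outcomes for an unprivileged group divided by the rate for the privileged group, where values below roughly 0.8 are a conventional red flag (the ‘four-fifths rule’). Here is a minimal sketch using hypothetical screening results, not any vendor's actual API:

```python
def disparate_impact(predictions, groups, privileged):
    """Ratio of favourable-outcome rates: unprivileged / privileged.

    Values below ~0.8 are a common warning sign (the 'four-fifths rule').
    """
    def rate(in_privileged):
        outcomes = [p for p, g in zip(predictions, groups)
                    if (g == privileged) == in_privileged]
        return sum(outcomes) / len(outcomes)
    return rate(False) / rate(True)

# Hypothetical screening results: 1 = shortlisted, 0 = rejected.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]

ratio = disparate_impact(preds, groups, privileged="m")
```

Here `ratio` is 0.25, well under the 0.8 threshold, which is exactly the kind of signal that prompts an engineer to go back and inspect the data and model before shipping.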
Narrow fields of view
A 2018 McKinsey study found that companies with more diverse C-suites are 33 percent more likely to achieve industry-leading profitability. In order to make an informed decision as a group, it helps to have a variety of perspectives present to analyse the problem from different angles. This gives a more nuanced and sophisticated analysis and – usually – a better result.
By analysing data with a limited viewpoint, you risk skewing results or missing something entirely. Archaeologists have been known to mistake bucket handles for crowns and developers sometimes forget left-handed people exist. For AI, a lack of diversity increases the chance for unconscious bias to creep into automated intelligence.
The solution: diverse teams
Automation – at its best – empowers human abilities. Tools that help people detect bias in AI are useful because you can find flaws, yes, but by reducing bias you also create new opportunities. You need diverse human perspectives to find out what these are.
For example, using deep learning techniques, Google managed to detect in eye scans the signs of diabetic retinopathy, one of the leading causes of blindness in the world. Then an intern at Google thought to ask whether the AI could determine the gender of the patient from the same eye scans. It could, with 97 percent accuracy. Humans have a 50:50 shot.
That question inspired a whole new line of thinking. Google’s AI, it now turns out, can predict a person’s risk of heart attack in the next five years based on those same scans, and can gather far more information from a picture of your eye than our best doctors ever could.
It works both ways. Given the right start, AI can help us enhance diversity and inclusion, so we can build even better AI, and discover new and amazing things. We follow these principles at Ditto to keep our AI accountable. If bias is a virus, then diversity is the cure.