
Can AI explain itself?

Posted by Ditto on Jun 19, 2019 8:55:00 AM

The best decision we might ever make in business is to let artificial intelligence make our decisions. But, without the right regulation, transparency and accountability in place, it could also be the worst decision.

AI is already revolutionising every aspect of our lives. From finding train times to making cups of coffee, it’s changing how we think and feel and saving us both time and money.

But when it comes to big decisions – like prescribing drugs to patients at a hospital – can AI technologies make them for us? And even if they can, can we trust them?

Lifting the lid off the black box of AI

How artificial intelligence makes a decision is still shrouded in mystery. We know that it analyses data and that it operates through trial and error, but, because of its sheer complexity, we can’t always understand its workings.


When it comes to important decisions like mortgage approvals or legal rulings, this causes a problem. As Shefaly Yogendra, our Chief Operating Officer, explains:

‘Customers have a right to ask: “How did you arrive at this outcome? If you rejected my loan application, why did you do it?”’

Article 22 of the GDPR gives individuals the right not to be subject to a decision based solely on automated processing, which puts decision-making power back into the hands of humans. Without that safeguard, AI technologies could make consequential decisions about people with no human oversight.

But, while this is a step in the right direction towards accountable AI, it doesn’t make AI explainable.

Is the learning curve with AI technologies too steep?

Comparing the intelligence of AI with that of humans paints an interesting picture. In October 2015, Google DeepMind’s AlphaGo became the first program to beat a professional Go player, and a few months later it defeated world champion Lee Sedol. In 2017, Facebook shut down an experiment after its negotiation bots drifted into a shorthand language of their own.

To this day, we struggle to understand and explain how these AI systems did what they did. They were fed overwhelming amounts of data, and their analysis was too fast and too complex for humans to dissect.

Making AI explainable, then, requires less data and slower processing. We can only follow reasoning delivered at a pace we can absorb; any faster and we enter a realm outside our comprehension. For AI to show us its workings, it must operate at the speed of humans, and at a level we can understand.


Artificial intelligence is only as good as the data you feed it

When we input data into an AI technology, we run the risk of ‘garbage in, garbage out’. Yes, AI learns from experience to improve, but it can only learn from what it knows, and that knowledge is provided by a human.

For example, an MIT study found that facial-analysis software from IBM misclassified the faces of darker-skinned women almost 35 percent of the time. That outcome is discriminatory. But AI isn’t conscious, and therefore it can’t be racist, right?

Right. But a human feeding data to the AI can be. In short, the dataset given to the facial-recognition software was skewed, and consequently the results were biased.

To counter this, perhaps we need rigorous scrutiny and algorithmic auditing to ensure accountability, or perhaps we need to introduce symbolic AI principles.
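What might such an audit look like in practice? Here is a minimal, hypothetical sketch in Python: it simply compares a model’s error rate across demographic groups, so a skew like the one the MIT study uncovered shows up as a number instead of staying hidden. The data, group labels and alert threshold are all invented for illustration.

```python
# A hypothetical algorithmic audit: compare a model's error rate across
# demographic groups. All data below is invented for illustration.

def error_rate(predictions, labels):
    """Fraction of predictions that disagree with the true labels."""
    wrong = sum(1 for p, t in zip(predictions, labels) if p != t)
    return wrong / len(labels)

def audit_by_group(predictions, labels, groups):
    """Compute the error rate separately for each group."""
    rates = {}
    for group in sorted(set(groups)):
        idx = [i for i, g in enumerate(groups) if g == group]
        rates[group] = error_rate([predictions[i] for i in idx],
                                  [labels[i] for i in idx])
    return rates

# Toy predictions from a model trained on a skewed dataset:
# it is accurate for group A but not for group B.
preds  = [1, 1, 0, 1, 0, 0, 1, 1]
truth  = [1, 1, 0, 1, 1, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

for group, rate in audit_by_group(preds, truth, groups).items():
    flag = "  <-- investigate" if rate > 0.2 else ""
    print(f"group {group}: error rate {rate:.0%}{flag}")
```

The point isn’t the few lines of code; it’s that the check is explicit and repeatable, which is what accountability requires.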

Ditto and symbolic AI

Here at Ditto, we’re using symbolic AI to develop technologies that build large-scale knowledge bases from natural-language concepts, mapping how different terms relate to one another.

In finance, for example, symbolic AI would recognise that ‘principal’, ‘interest’, ‘income’ and ‘default’ are all factors in making a loan decision. Based on this, we can begin to explain the reasoning behind an AI‑based decision.

If, for instance, an AI is analysing a loan application, the system could decide to reject the applicant and also tell the bank why: one factor (income) couldn’t support another factor (interest payments). That is what makes the decision explainable.
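To make that concrete, here is a minimal sketch in Python of the general idea – not Ditto’s actual system. Each check is an explicit, named rule, so the verdict arrives with its own explanation. The rules, field names and the ‘income must be twice the interest’ threshold are hypothetical examples.

```python
# A hypothetical rule-based (symbolic) loan decision. The rules and the
# 2x affordability threshold are invented for illustration.

def affordability_rule(application):
    """Income must comfortably cover the annual interest payments."""
    if application["income"] < 2 * application["annual_interest"]:
        return "income cannot support the interest payments"
    return None

def default_history_rule(application):
    """A history of defaults blocks approval."""
    if application["past_defaults"] > 0:
        return "applicant has previously defaulted"
    return None

RULES = [affordability_rule, default_history_rule]

def decide(application):
    """Apply every rule and collect the reason for each failure."""
    reasons = [rule(application) for rule in RULES]
    reasons = [r for r in reasons if r is not None]
    if reasons:
        return "rejected", reasons
    return "approved", ["all rules satisfied"]

verdict, reasons = decide({"income": 20_000,
                           "annual_interest": 15_000,
                           "past_defaults": 0})
print(verdict, reasons)
# -> rejected ['income cannot support the interest payments']
```

Because each rule is a readable statement rather than a weight buried inside a neural network, the answer to ‘why was I rejected?’ falls straight out of the decision itself.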

To find out more about how Ditto is making AI explainable, read our blog or contact us here.


Topics: Explainable AI, Black box AI, Future of AI