The best decision we might ever make in business is to let artificial intelligence make our decisions. But, without the right regulation, transparency and accountability in place, it could also be the worst decision.
But when it comes to big decisions – like prescribing drugs to patients at a hospital – can AI technologies make them for us? And even if they can, can we trust them?
Lifting the lid off the black box of AI
How artificial intelligence makes a decision is still shrouded in mystery. We know that it analyses data and that it operates through trial and error, but, because of its sheer complexity, we can’t always understand its workings.
When it comes to important decisions like mortgage approvals or legal decisions, then, this causes a problem. As explained by Shefaly Yogendra, our Chief Operating Officer:
‘Customers have a right to ask, “How did you arrive at this outcome? If you rejected my loan application, why did you do it?”’
Article 22 of the GDPR gives individuals the right not to be subject to decisions based solely on automated processing, which puts decision-making power back into the hands of humans. Without that safeguard, AI technologies would hold decision-making power over humans.
But, while this is a step in the right direction towards accountable AI, it doesn’t make it explainable.
Is the learning curve with AI technologies too steep?
When we compare the intelligence of AI with our own, it paints an interesting picture. In 2016, Google DeepMind’s AlphaGo defeated world champion Lee Sedol at the strategy game Go. A year later, Facebook shut down an AI experiment after its bots developed their own shorthand language to communicate with one another.
To this day, we struggle to understand and explain how these AI technologies performed these activities. An overwhelming amount of data was fed to these machines, and their analysis was too quick and too complex for humans to dissect.
Making AI explainable, then, calls for smaller datasets and slower processing. Humanity operates at the speed at which humans can process information; any faster and we enter a realm beyond our comprehension. For AI to show us its workings, it must operate at a pace, and at a level, we can understand.
Artificial intelligence is only as good as the data you feed it
When we input data into an AI technology, we run the risk of ‘garbage in, garbage out’. Sure, AI learns and improves on its own, but it can only learn from what it knows, and that knowledge is provided by humans.
For example, an MIT study found that IBM’s facial-recognition software misclassified the faces of darker-skinned women almost 35 percent of the time. On the face of it, that outcome is racist. But AI isn’t conscious, and therefore it can’t be racist, right?
Right. However, the humans feeding data to the AI can be. In short, the dataset given to the facial-recognition software was skewed, and consequently the results were biased.
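To make the point concrete, here is a toy sketch of how a skewed dataset produces skewed results. The data, groups and classifier are entirely invented for illustration – nothing here reflects any real facial-recognition system – but the mechanism is the same: a model trained mostly on one group works well for that group and poorly for the under-represented one.

```python
# Toy illustration of 'garbage in, garbage out': a skewed training set
# yields a model that performs well for the over-represented group A
# and poorly for the under-represented group B. All data is synthetic.

def nearest_neighbour(train, x):
    """Predict by returning the label of the closest training sample."""
    return min(train, key=lambda sample: abs(sample[0] - x))[1]

# Training set: many examples from group A, a single one from group B.
# Group A's two classes sit near 0.0 and 1.0; group B's near 0.45 and 0.55.
train = [(0.0, "no"), (0.05, "no"), (0.1, "no"),
         (0.9, "yes"), (0.95, "yes"), (1.0, "yes"),
         (0.45, "no")]  # the only group-B example the model ever sees

test_a = [(0.02, "no"), (0.97, "yes")]
test_b = [(0.44, "no"), (0.56, "yes")]

def accuracy(test):
    return sum(nearest_neighbour(train, x) == y for x, y in test) / len(test)

print("group A accuracy:", accuracy(test_a))  # 1.0
print("group B accuracy:", accuracy(test_b))  # 0.5
```

The model itself is not ‘prejudiced’; it simply never saw enough of group B to tell its classes apart. The fault lies in the data it was given.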
Ditto and symbolic AI
Here at Ditto, we’re using symbolic AI to develop technologies that use natural‑language concepts to build large‑scale knowledge bases that map how different terms relate to each other.
In finance, for example, symbolic AI would recognise that ‘principal’, ‘interest’, ‘income’ and ‘default’ are all factors in making a loan decision. Based on this, we can begin to explain the reasoning behind an AI‑based decision.
If, for instance, an AI is analysing a loan application, the system could decide to reject the applicant and also tell the bank why: one factor (income) couldn’t support another factor (interest payments), thus making the decision explainable.
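The loan example above can be sketched as a simple rule-based decision that carries its own explanation. The factor names and thresholds below are illustrative assumptions, not Ditto’s actual knowledge base – the point is only that every rejection comes with the rule that triggered it.

```python
# A minimal sketch of an explainable, rule-based loan decision.
# Factor names and rules are illustrative assumptions, not a real system.

def assess_loan(application):
    """Return (decision, reasons); each reason names the factors that
    triggered a rejection, so the outcome is explainable."""
    reasons = []

    # Rule 1: income must be able to support the interest payments.
    if application["income"] < application["interest_payment"]:
        reasons.append(
            "income (%(income)s) cannot support interest payments "
            "(%(interest_payment)s)" % application)

    # Rule 2: a prior default counts against the applicant.
    if application["has_defaulted"]:
        reasons.append("applicant has a prior default on record")

    decision = "reject" if reasons else "approve"
    return decision, reasons

decision, reasons = assess_loan(
    {"income": 1200, "interest_payment": 1500, "has_defaulted": False})
print(decision)        # reject
for r in reasons:
    print(" -", r)     # income (1200) cannot support interest payments (1500)
```

Unlike a black-box model, the ‘why’ here is a first-class output: the bank can hand the reasons straight back to the customer who asks how the outcome was reached.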