Explainable AI is the bright future of artificial intelligence for business. Automated decision-making has had a lot of bad press in recent years, mostly because people do not trust the technology behind it. A big part of that mistrust is down to ‘black box’ AI, which produces outcomes from the murky depths of data and algorithms without ever showing how.
As any maths teacher will tell you, just writing the answers won’t get you the best grade. You have to show your working to get top marks.
What is ‘black box’ AI?
Deep learning networks are made up of a huge number of connections; in the middle of the process sit vast amounts of data and the algorithms that interpret them. ‘Black box’ AI is where we understand the inputs and we know the outcomes, but we don’t know how the AI has got from A to B.
For example, automated online diagnostics have been found to be frequently wrong or overly cautious. Ultimately, every headache could be stress or cancer, right? Part of the problem is the lack of information on why the system suggested one diagnosis and not another. There isn’t enough clarity.
From healthcare to the military, AI is being used in life and death situations. Without transparency, fear of the unknown can mean potentially revolutionary technology is subjected to heavy legal ramifications, hindering progress. There does need to be the right legislation around machine learning to protect everyone – and everything – but based on facts, not fear.
Good news. It doesn’t have to be this way.
What is ‘explainable AI’?
Explainable AI, as opposed to ‘black box’ AI, provides transparency for the part of the artificial intelligence process where algorithms interpret data.
This means two main business problems are solved:
- Accountability – we know how an automated decision is reached and can trace the path of reasoning if needed.
- Auditability – we can review processes, test and refine them more accurately, and predict and prevent future failures or gaps.
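As a minimal sketch of what accountability and auditability look like in practice, consider a decision function that returns its reasoning trace alongside its outcome. Everything here – the loan scenario, the rule names, the thresholds – is invented for illustration; it is not Ditto’s technology, just the general idea of a decision you can trace and review:

```python
# Hypothetical sketch: an 'explainable' decision returns the path of
# reasoning, not just the answer. All thresholds are made up.

def approve_loan(income, debt, years_employed):
    """Return (decision, trace): the outcome plus every step taken to reach it."""
    trace = []

    # Rule 1: minimum income
    if income < 20_000:
        trace.append(f"income {income} is below the 20,000 minimum -> reject")
        return "reject", trace
    trace.append(f"income {income} meets the 20,000 minimum")

    # Rule 2: debt-to-income ratio
    ratio = debt / income
    if ratio > 0.4:
        trace.append(f"debt-to-income ratio {ratio:.2f} exceeds 0.40 -> reject")
        return "reject", trace
    trace.append(f"debt-to-income ratio {ratio:.2f} is within the 0.40 limit")

    # Rule 3: employment history; borderline cases go to a human
    if years_employed < 2:
        trace.append(f"{years_employed} years employed, below 2 -> refer to a human")
        return "refer", trace
    trace.append(f"{years_employed} years employed meets the 2-year minimum")

    return "approve", trace

decision, why = approve_loan(income=50_000, debt=10_000, years_employed=5)
# 'why' now holds a human-readable chain: an auditor can check each step,
# and a regulator can ask exactly which rule produced the outcome.
```

Real explainable AI systems apply the same principle to far more complex models, but the business value is the same: the decision and the reasons arrive together.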
You know how to use a phone. You know something about its workings, its sources, what it can do and what it can’t. But if you were asked to build a phone, you probably wouldn’t know where to start.
Just because something is ‘explainable’, that doesn’t mean it has to be understood in the minutiae by everyone involved. It does mean that if someone needed further information, the chain of connections is available, transparent and presented in a digestible way. Think of it as a matter of translating automation-speak into human-speak.
We all use automated systems at some point. Many businesses already use AI for speeding up repetitive tasks and improving employee productivity.
However, explainable AI is needed as we expand how AI is used. As AI is applied to more significant tasks – ones that could affect a business’s future revenue streams, for instance – there is a greater call for ease of use and clear accountability.
As we move forward with this technology, the demand for transparency will grow, and it will come from all sorts of sources, from policy-makers to CEOs. People want to make good decisions, and to do that, they need the facts.
Explainable AI with Ditto
Our unique technology can deliver value that other AI systems cannot: information explained in a way that any human could readily comprehend. If we seem eager, it’s because we want you to know that AI has every chance of improving human capabilities and knowledge, when done right:
‘I'm just a soul whose intentions are good,
Oh Lord, please don't let me be misunderstood...’ – The Animals