Automated decision-making has received a lot of bad press in recent years, mostly because people don't trust the technology behind it. A big part of that mistrust is due to black box AI, which produces outcomes from data through algorithms too opaque for anyone to follow. Explainable AI, however, is the bright future of artificial intelligence for business.
As any maths teacher will tell you, writing the answers won’t get you the best grade. You have to show your working to get top marks.
What is ‘black box’ AI?
Deep learning networks are made up of a huge number of connections; in the middle of the process sit vast amounts of data and the algorithms that interpret them. Black box AI is where we understand the inputs and we know the outcomes, but we don't know how the AI gets from A to B.
For example, automated online diagnostics in the healthcare industry have been found to misdiagnose patients or operate too cautiously. In short, a headache could mean cancer, but it doesn't always mean cancer.
Some critical use cases of black box AI exist in fields from healthcare to the military, where AI technologies are used in life-and-death situations. Without transparency, fear of the unknown can mean potentially revolutionary technology is weighed down by heavy legal restrictions, hindering progress. There needs to be the right legislation around machine learning and AI to protect everyone (and everything), but it needs to be based on facts, not fear.
Good news. It doesn’t have to be this way.
What is explainable AI?
Explainable AI, as opposed to black box AI, provides transparency for the part of the artificial intelligence process where algorithms interpret data.
This means two main business problems are solved:
- Accountability – we know how an automated decision is reached and can trace the path of reasoning if needed.
- Auditability – we can review processes, test, and refine them more accurately, and predict and prevent future failures or gaps.
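To make the two points above concrete, here is a minimal sketch in plain Python of what an auditable decision path can look like. The scenario and rules are invented for illustration (a toy symptom triage, not medical advice): a black box model would return only the recommendation, while an explainable one also returns the chain of reasoning that produced it.

```python
# Toy illustration of explainability: every decision carries its own trace.
# The rules here are hypothetical and exist only to show the pattern.

def triage(symptoms):
    """Return a recommendation plus the trace of rules that fired."""
    trace = []
    if "headache" in symptoms:
        trace.append("headache reported -> consider common causes first")
        if "sudden onset" in symptoms:
            trace.append("sudden onset -> escalate: urgent review")
            return "urgent review", trace
        trace.append("no red-flag symptoms -> routine advice")
        return "routine advice", trace
    trace.append("no matching rule -> refer to clinician")
    return "refer to clinician", trace

decision, reasons = triage({"headache", "sudden onset"})
# 'decision' is what a black box would give you; 'reasons' is the part
# that makes the outcome accountable and auditable.
```

Because the trace is returned alongside the decision, it can be logged, reviewed, and tested: accountability (why did this patient get this outcome?) and auditability (which rules fire most often, and where do they fail?) fall out of the same structure.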
Ultimately, by opening up the black box of AI, we can build trust and, well, explainability.
You know how to use a phone. You know something about its workings, its sources, what it can and can't do. But if someone asked you to build a phone, you probably wouldn't know where to start.
Just because something is ‘explainable’, that doesn’t mean it has to be understood in the minutiae by everyone involved. It does mean that if someone needed further information, the chain of connections is available, transparent and presented in a digestible way. Think of it as a matter of translating automation-speak into human-speak.
We all use automated systems at some point. Many businesses already use AI for speeding up repetitive tasks and improving employee productivity.
However, explainable AI is needed as the use of AI expands. As AI is applied to more significant tasks, ones that could affect things like a business's future revenue streams, there is a greater need for ease of use and clear accountability.
As we move forward with explainable artificial intelligence programs and begin to build better technologies, the demand for transparency will grow, and it will come from all sorts of sources, from policy-makers to CEOs. People want to make well-informed decisions, and to do that, they need the facts.
Explainable AI with Ditto
Our unique technology can deliver value that other AI systems cannot: information explained in a way that any human can readily comprehend. If we sound passionate, it's because AI has every chance of improving human capabilities and knowledge, and by removing the black box of AI, we can begin to build a brighter, better-understood world.
‘I'm just a soul whose intentions are good,
Oh Lord, please don't let me be misunderstood...' – The Animals