Imprecision can lead to unexpected, incomplete or biased results, and these are exactly the issues that worry businesses and legislators. So why is this a hurdle, and does precision really matter?
Accurate vs precise vs ducks
With machine learning, performance is measured in two complementary ways: precision and recall. When looking for patterns in data, ‘recall’ measures how complete a result is (how much of what you were looking for was actually found), and ‘precision’ measures how correct it is (how much of what was found is actually what you were looking for). Put simply:
- If you have a picture of three white swans and an algorithm labels all three swans but also a passing duck, that result is complete, but not precise.
- If it labels only two of the swans and nothing else, that is precise, but not complete.
- If it labels two ducks as swans, that is neither precise nor complete.
You need both measurements, usually combined into a single score, to judge the performance of an AI model. But scale that up, throw in some complex categories and goals, and suddenly you need a lot of processing power to train and run AI models with precision. It’s easy, then, to get it wrong.
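The swan-and-duck scenarios above can be scored directly. A minimal sketch in Python, with made-up labels (`"swan1"`, `"duck1"` and so on are purely illustrative), including the F1 score, one common way of combining precision and recall into a single number:

```python
# Toy scorer for the swan-spotting examples. Ground truth is the set of
# swans actually in the picture; predictions are what the model labelled.

def precision_recall(predicted, actual):
    """Precision: what share of the labels were correct.
    Recall: what share of the real swans were found."""
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(actual) if actual else 0.0
    return precision, recall

def f1(precision, recall):
    """Harmonic mean: one common way to combine both measurements."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

swans = {"swan1", "swan2", "swan3"}  # the three swans in the picture

# Labels all three swans plus a duck: complete, but not precise.
print(precision_recall({"swan1", "swan2", "swan3", "duck1"}, swans))  # (0.75, 1.0)

# Labels only two of the swans, nothing else: precise, but not complete.
print(precision_recall({"swan1", "swan2"}, swans))  # (1.0, ≈0.67)

# Labels two ducks: neither.
print(precision_recall({"duck1", "duck2"}, swans))  # (0.0, 0.0)
```

Note that a model can trivially score 100 percent on either measure alone (label everything a swan for perfect recall; label only the single most obvious swan for perfect precision), which is why a combined score like F1 is needed.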
An issue with measuring precision
Famously, in 2017 it was claimed that algorithms could label images with a margin of error of just 2.5 percent, compared with the human average of five percent. But it has since proved very difficult to close that gap any further, and some researchers suspect that the way performance was measured may have skewed those results.
Studies at several universities have shown that image recognition is fraught with potential bias and imprecision. Researchers have collected a bank of images that even the world’s most advanced AI can’t make head or tail of; on that set, error rates run as high as 98 percent.
So why is AI image recognition still an industry projected to reach a market value of US$5.32 billion by 2024, up from US$1.81 billion in 2018? Because time equals processing power in the world of AI: spend more time training algorithms and you will still see better performance. The trade-off? Precision.
Google thinks precision doesn’t matter…
… Or, rather, Google thinks we can sacrifice some precision in order to spend more time (processing power) training AI.
Google found that when AI models need to ‘learn’ more, faster, from large, complex data sets, ordinary graphics processing units (GPUs) won’t cut it. Its TensorFlow team created the Tensor Processing Unit (TPU), which, among other things, uses reduced-precision arithmetic, rounding numbers more coarsely than standard floating point, and Google has claimed that a single TPU can train AI 27 times faster than eight GPUs.
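The effect of reduced precision can be seen without any TPU hardware. A minimal NumPy sketch, using `float16` purely as a stand-in for reduced-precision hardware formats (TPUs use their own format, and its exact rounding behaviour is not what is shown here):

```python
import numpy as np

# Reduced-precision arithmetic trades exactness for throughput and memory.
full = np.float32(3.14159265)
half = np.float16(full)          # half the bits: value is rounded

print(full)                      # 3.1415927
print(float(half))               # 3.140625 (float16 keeps ~3 decimal digits)

# Halving the bit-width halves memory traffic, which is part of why
# reduced-precision hardware trains faster, at the cost of rounding error.
a = np.random.rand(1000).astype(np.float32)
err = np.abs(a - a.astype(np.float16).astype(np.float32)).max()
print(err)                       # small but nonzero rounding error
```

The design bet behind the TPU is that this rounding error is small enough not to hurt a model’s final quality, while the speed-up compounds across billions of training calculations.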
When in doubt, check the maths
Maybe, with recent news about Google’s developments in quantum computing, the theoretical cross-discipline of quantum AI will become a reality, letting us do more with data than we’ve ever been able to do. Then, we can marry speed and precision and truly accelerate the capabilities of AI. Until that time, we may need to weigh the need for speed against the real-world implications of imprecise AI.
To ensure your business has the right checks and balances in place, look to explainable AI. At Ditto, we offer a way to build automation into your decision-making with explainable AI, which means you can audit results and see why recommendations have been given. Which, for better business decision-making, is precisely what the world needs.