
Doing business in 2030: what you need to know about the future of AI

Posted by Ditto on Nov 6, 2019 8:50:00 AM

In the summer of 1956, in the sleepy town of Hanover, New Hampshire, the founding fathers of artificial intelligence gathered at Dartmouth College for a conference to discuss the science and engineering of making intelligent machines.

John McCarthy, together with Marvin Minsky, Claude Shannon, Nathaniel Rochester, Allen Newell and Herbert A. Simon, knew from science fiction that intelligent machines could change the world as we know it.

They’d read Isaac Asimov’s book ‘I, Robot’ (first published in 1950) and they’d debated at length Asimov’s Three Laws of Robotics, which were designed to prevent intelligent machines from turning evil.

But while robotics and intelligent life lay at the heart of their conversation, they knew little about how their ‘discovery’ would impact and change businesses. After all, in 1956, artificial intelligence was only speculation.

Since that time at Dartmouth, artificial intelligence has remained at the forefront of the minds of businesses and governments around the world. Everyone involved believed that intelligent machines would provide real benefits, including:

  • Automating mundane and time-consuming tasks.
  • Predicting future outcomes as reliably as possible.
  • Protecting humankind from death and destruction.
  • Freeing humans from war and dangerous jobs.
  • Eliminating human error and catastrophic mistakes.
  • Making better-informed decisions.

With these benefits in mind, it was only a matter of time before safe and usable intelligent machines were developed and commercially adopted – all that needed working out was the ‘how’.

 

A timeline of artificial intelligence so far


To understand what the future of AI looks like, we must first revisit its history, for it is in these lessons that we can begin to make sense of the next decade of intelligent technology.

Throughout its 63-year lifespan, artificial intelligence has passed several key development milestones, each of which has forced us to adjust our thinking about how we build these machines.

 

1956: A top-down approach

The initial framework for developing AI came from Marvin Minsky, who believed that a top-down approach – pre-programming a computer with the rules that govern human behaviour – would deliver the best results.

 

1969: A disappointing reality check

By 1969, with researchers struggling to produce tangible intelligent machines, the high expectations set by the founding fathers came crashing down – even though the era did produce Shakey the Robot, the first mobile robot able to reason about its own actions.

Shakey had taken six years to develop, however, and it had to update its map of the world every time it moved. If an object entered its path, planning its next move could take hours.


 

1973: The Lighthill Report

By the 1970s, governments and private investors had spent millions trying to develop artificial intelligence, with little to show for it. In 1973, the leading mathematician Professor Sir James Lighthill delivered a damning assessment of the state of AI research in the UK. Consequently, funding was slashed and development stalled.


 

1981: Big business looks to AI for solutions

It wasn’t until the 1980s that people began to understand the benefits artificial intelligence could bring to big business.

Instead of trying to replicate a human with an all-encompassing machine, businesses realised that smart technology could be used to perform narrow, mundane and repetitive tasks.


By 1986, a system called R1 (also known as XCON) was saving Digital Equipment Corporation an estimated US$40 million a year. It wasn’t long before other businesses followed suit.

 

1990: A bottom-up approach to AI

Rodney Brooks, a successor to Marvin Minsky at MIT, concluded that, rather than pre-programming an intelligent computer with rules, we should build intelligence from the bottom up: decentralised modules working together and recognising patterns, much as the brain’s networks of neurons do.
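To make the contrast between the two philosophies concrete, here is a minimal, illustrative sketch in Python. The rules and modules are invented for this example – they don’t come from Minsky’s or Brooks’s actual systems – but they show the difference between consulting a central rule book and letting simple, independent modules react to the world:

```python
# Illustrative sketch only: the rules and modules below are hypothetical,
# not taken from any real system.

# Top-down (1956): one central program consults a fixed rule book.
RULES = {"obstacle ahead": "turn left", "clear path": "move forward"}

def top_down_step(observation: str) -> str:
    # Behaviour is only ever as good as the rules someone wrote down.
    return RULES.get(observation, "stop and wait for a programmer")

# Bottom-up (1990): independent modules each react to what they sense;
# overall behaviour emerges from their interaction, not from a master plan.
def avoid(observation: str):
    return "turn left" if "obstacle" in observation else None

def wander(observation: str):
    return "move forward"  # a default urge, overridden by higher-priority modules

def bottom_up_step(observation: str, modules=(avoid, wander)) -> str:
    # The first module with an opinion wins - a crude, subsumption-style priority.
    for module in modules:
        action = module(observation)
        if action:
            return action
    return "do nothing"

print(top_down_step("obstacle ahead"))   # -> turn left
print(bottom_up_step("obstacle ahead"))  # -> turn left, with no central rule book
```

The top-down robot fails the moment it meets a situation nobody wrote a rule for; the bottom-up robot simply lets whichever module has something to say take over.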

 

2002: Robots enter the home

In 2002, iRobot – the company Rodney Brooks co-founded – released the Roomba, one of the first self-steering vacuum cleaners to need no human at the controls. It has since sold more than 20 million units worldwide.

 

2014 until present: Driverless cars, real-time voice translations and more

Commercial adoption of artificial intelligence technologies began in earnest around the time Google invested US$1bn in driverless cars and Skype launched its real-time voice translation feature.

That was in 2014.

Since then, we’ve seen AI personal assistants like Alexa and Google Home flood the household, and we’ve seen Elon Musk’s Tesla bring semi-autonomous driving to commercially available cars.

 

Doing business in 2030: the future of AI


So, now that we understand its history, it’s time to tackle the predictions for the future of artificial intelligence.

Today, in 2019, we’re beginning to witness the ethical dilemmas of letting intelligent machines make decisions and recommendations on behalf of humans.

In the healthcare industry, for example, a plethora of factors goes into diagnosing a patient, and artificial intelligence technologies can easily misinterpret them and produce false results.

Last year, the AI doctor app Babylon received a complaint because its automatic symptom checker failed to properly diagnose a heart attack. Worse still, because of black box AI, humans couldn’t decipher how the technology reached its decision.

But healthcare isn’t the only area where smart machines raise concerns. Law enforcement agencies are using AI-powered facial recognition to try to solve crimes, but many would argue that these technologies are being used immorally: invading privacy, breaching human rights and treating people as guilty until proven innocent.

In the next decade, then, we expect to see many developments in AI. Here are our predictions for the future of AI and doing business in 2030.

 

1. Human ethics and morals will prevail


In the early 1970s, almost two decades after the term ‘artificial intelligence’ was coined, we experienced the first AI winter. At the time, there were serious questions about whether AI could in fact improve the workplace and deliver a brighter, better future for humans, and the AI hype was beginning to enter a slump.

Right now, we’re facing similar issues with AI. We’re questioning the ethics and morals of using these technologies in everyday life, and unless things change, we could see another slump in AI adoption in the next decade.

Amazon’s facial recognition software, Rekognition – which is sold to law enforcement agencies in the United States – is at the centre of such controversy. Recently, the company was forced to hold two shareholder votes: the first sought to stop Amazon selling the technology to government agencies, and the second called for an independent study into the risks the technology poses to the general public’s privacy rights.

Both votes failed, and many Amazon employees are now pressuring Amazon’s CEO, Jeff Bezos, to stop selling the facial recognition software to law enforcement agencies. And they’re not the only ones: the American Civil Liberties Union (ACLU) organised a petition of more than 150,000 signatures and delivered it to Amazon HQ, demanding that Amazon ‘end its practice of selling its dragnet surveillance system, Rekognition, to local enforcement.’

If we continue to ignore the issues surrounding privacy, security and anonymity – if we fail to make AI accountable for its decisions – AI development could simply come to a halt.

Fortunately, however, some companies are doing the right thing. The Partnership on AI brings together many of the major technology players (including Amazon) with a single purpose: to address the ethical issues of AI. As Carol Rose, Executive Director of the ACLU of Massachusetts, explains on the Partnership on AI website:

‘We need human rights and civil liberties at the heart of science and data to develop new technologies in a way that’s beneficial to everyone.’

 

2. Autonomous vehicles will be commercially adopted


If you look hard enough today, you might spot a driverless car amongst the human-driven vehicles on the streets. By 2030, that sight won’t be uncommon. What’s more, autonomous vehicles won’t be limited to cars.

Boats, planes and trains will likely follow suit, and we’ll see AI-first vehicles transporting their human creators from A to B.

For example, Cruise, a subsidiary of General Motors, has promised to deploy a large-scale driverless taxi service in San Francisco by the end of 2019, although safety concerns have stalled the project.

In the skies, the United States Air Force intends to deploy AI in many fighter jets, drones and cargo planes, which will allow these machines to better interpret, organise, analyse and communicate information on their own, or with minimal human assistance.

At present, it takes many people to pilot a single autonomous drone. If AI can speed up the intake and analysis of data, however, we could see one person piloting many autonomous drones at once – a ‘command and control’ function spanning many vehicles simultaneously.

 

3. Home service robots are on their way


Yes, you (probably) didn’t hear it here first: AI home assistant robots will enter the household commercially in the next decade, likely replacing the ‘on the shelf’ home assistants of today (like Alexa).

As safety concerns are addressed and regulations around civil liberties take shape, the lingering resistance to robots in the home will diminish, and we expect a company to produce an affordable home assistant robot that can clean the dishes, wash the car and scrub the toilet.

Further, thanks to modern medicine, people are living longer, and while that is positive, an ageing population puts pressure on a country’s caregiving resources. Home assistant robots will work to alleviate this pressure by automating some of the maintenance responsibilities that are part of a caregiver’s role, freeing up human caregivers to focus on spending time and building relationships with the elderly.

 

4. The quality of healthcare will improve


The healthcare sector is a promising area for AI, and in the next decade – as AI technologies become more trusted and accountable for their decisions, and as error rates fall – we’ll see mass adoption of AI in healthcare systems around the world.

In countries where healthcare is underfunded and under-resourced, companies are already deploying AI apps, and we’re beginning to reap the benefits of greater efficiency. In 2018, for example, Google launched an AI service in Thailand to screen for diabetic eye disease, a condition that causes blindness. The technology has a 95 percent success rate, compared with 74 percent for human doctors, according to a joint study by Google and the Thai state-run Rajavithi Hospital.

By 2030, then, AI technologies will routinely sift through patient medical histories and health data to uncover potential risks and make recommendations to doctors; we’ll see robotic arms installed in hospitals to assist in surgeries; we’ll see more AI-first mobile apps working to proactively spot potential risks in patients; and we may well visit the GP without interacting with a human.

 

The biggest barrier to the future of AI


While improved healthcare, driverless cars and home service robots are all likely to arrive in the next decade, the ethical and moral dilemmas we face with AI technologies today could impede that progress.

Let’s consider the space race as a comparison. In the 50s, 60s and 70s, the world worked around the clock to put a man on the moon, and in July 1969, the United States did just that.

For a while afterwards, we sent a handful of other astronauts to the moon to conduct a variety of experiments, but by 1972 the hype of visiting the moon had died and the cost no longer justified the returns. No person has set foot on the moon since, and crewed lunar programmes across the globe came to a halt.

When we consider artificial intelligence, we must ask ourselves the same question: Is the development of efficient and intelligent technologies worth the risk to human privacy and security? Elon Musk, a sceptic of AI, stated last year:

‘Mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane.’

And he’s not the only one to share such concerns. Stephen Hawking famously stated in an interview with Wired magazine that:

‘If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.’

But, when all is said and done, we shouldn’t fear the development of AI and where it is going. These technologies should be embraced because, time and time again, we see the benefits: cost savings for businesses, improved standards of service for customers and patients, and greater efficiencies that make everyone’s lives that much easier. It’s simply a matter of ironing out the creases as we go.

The biggest hurdle, then, is not ‘how do we develop AI technologies?’ Rather, we must ask: ‘how do we safely regulate the technologies we’re building?’

The answer, in part, is in making AI trustworthy. Only then can we build a responsible future.

Download our guide to responsible AI for your business

Topics: Explainable AI, Future of AI