A.I. WILL EITHER BE THE BEST, OR THE WORST THING, EVER TO HAPPEN TO HUMANITY

– Stephen Hawking (1)

A SHORT SUMMARY

Artificial Intelligence and its cohorts, machine learning and cognitive computing (A.I., for simplicity), are going to make the digital disruption of the last 15 years look like child’s play. A.I. will be the operating system for our brave new world. Its enablers will be processing power, mobile computing, billions of sensors, the cloud, data and analytics, and a host of other new technologies.

If you’re a business leader, owner or Director, you have an urgent obligation to work out what this means for your business, for your customers, your employees, your suppliers and your shareholders. A.I. will sneak up on you much faster than you can imagine.

Ubiquitous software algorithms are already having an enormous impact on our lives. While most algorithms aren’t yet true A.I., software can already write code to improve itself. When machines start to practice recursive self-improvement, they will out-think humans.

Futurist Ray Kurzweil predicts that within 30 years “The Singularity” (2) will occur, fusing together both technology and humanity. The optimists believe in a limitless future where disease is cured, prosperity abounds, the environment is restored, and the quality of life is high.

The pessimists believe that A.I. will take over the world by itself or fall into the wrong hands. In this construct, the pop culture imagery of the movies becomes reality, with devastating consequences for humanity. Elon Musk, Bill Gates and Stephen Hawking believe that A.I. may represent an existential threat. However, no one really knows what will happen.

A.I. will play a meaningful role in the predicted demise of 80% of the world’s top 100 companies over the next 30 years. Certain industries are already being reshaped by algorithms, and will be profoundly affected as true A.I. develops.

There are no rules governing the development and propagation of A.I., and there are unlikely to be any time soon. By the time Governments realise they need to legislate, it will already be too late.

Record levels of capital and research are now being devoted to A.I. An inflection point is getting closer all the time. Whether you believe in the optimistic or the pessimistic scenario, there’s no doubt that A.I. is coming, and it’s coming fast. Is your business even vaguely prepared?

SOME BASICS

Let’s start with some basic definitions. In a voice search I asked Google, “what is artificial intelligence?” The answer, spoken back to me, was something like this:

“the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages”.

I did the same for machine learning and Google provided the following response:

“Machine learning is a type of A.I. that provides computers with the ability to learn without being explicitly programmed. Machine learning focuses on the development of computer programs that can change when exposed to new data. The process of machine learning is similar to that of data mining.”

And finally, on cognitive computing:

“Cognitive computing is the simulation of human thought processes in a computerized model. Cognitive computing involves self-learning systems that use data mining, pattern recognition and natural language processing to mimic the way the human brain works.”
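
For readers who like to see the idea in practice, here is a minimal sketch of the “learning without being explicitly programmed” concept from the machine learning definition above. It assumes the freely available scikit-learn library, and the tiny dataset is invented purely for illustration:

```python
# A toy model that learns a rule from examples, rather than having the rule
# written out by a programmer. The data and features are invented.
from sklearn.linear_model import LogisticRegression

# Past customers: [visits_per_week, average_basket_value] -> repeat buyer (1) or not (0)
X = [[1, 20], [2, 35], [5, 80], [6, 90], [1, 15], [7, 120]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)                  # the rule is inferred from the data, not hand-written

# Shown a new, unseen customer, the model makes its own call
print(model.predict([[4, 70]]))  # e.g. [1] -> likely a repeat buyer
```

The point is not the particular library; it is that the behaviour of the program comes from the data it has seen, and changes as new data arrives, which is exactly the property Google’s definition describes.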

Tim Urban, in his “Wait But Why” blog, has written a piece worth reading on A.I.: “The A.I. Revolution: The Road to Superintelligence” (3). Urban tells us that there are three phases of machine intelligence: (1) Artificial Narrow Intelligence (narrow tasks currently performed by assistants such as Siri and Alexa); (2) Artificial General Intelligence (when the machine becomes as smart as humans); and (3) Artificial Super Intelligence (when machines work together and become significantly smarter than humans).

The more that we humans allocate resources to artificial intelligence, the sooner we will create a state of recursive self-improvement. Already, instead of humans writing code, some software can write its own code in repeated cycles of improvement. Many futurists believe that we will reach Artificial Super Intelligence within 30 years, and when that happens either (a) man’s era of dominance will be over, or (b) man and technology will fuse to create a new paradigm.

Let’s examine the environmental factors and the hype to try to work out what comes next. I’ll start by revisiting my Digital Tsunami blog and keynote series on digital disruption (4):

THE DIGITAL TSUNAMI

In 1980 Alvin Toffler told us in his book “The Third Wave” that the agricultural revolution took thousands of years, the industrial revolution took hundreds of years and the technology revolution would take place in but a few decades. Prescient.

At the epicenter of the third wave are connectivity and processing power. In this century alone, we’ve seen the rise of mobile computing and the cloud, the transformation of software into a service, the fragmentation of marketing, the rise of sensors and the Internet of Everything, the Sharing Economy, and now drones, 3D printers, autonomous vehicles, private sector rockets – a long list. Change is happening around us with blinding speed, and no one can keep up.

We have seen change manifest itself in many ways. In the substitution of technology for services that previously required human effort. Like retail, travel, printed media, music and books. In the demise of great companies that lost their market power and competitive advantage. In jobs lost to new technologies. In new jobs created. In greater convenience. And that’s just for starters.

A.I. is going to be the glue that brings together new technologies – many of which currently look like discrete advances – into a controllable, living ecosystem in which the pace of change accelerates faster than we could possibly imagine.

The components of this ecosystem are being built all around us. The experts predict that by 2020 there will be 50 billion connected devices, 4 billion smartphones, a billion homes with Wi-Fi and 100 million connected cars. In 10 years’ time, there will apparently be 20 networked measuring sensors for each person on the planet. Nearly everything will be measurable.

With the enablers in place, algorithms are silently enhancing our lives in diverse ways: by scanning X-rays better than doctors can, by touching 7 out of 10 transactions conducted by financial institutions, by generating the news we read, by telling us what to buy at Amazon, what to watch on Netflix and what to listen to on Pandora. We can now do a voice search on Alexa, Amazon’s operating system for the home, to buy products – bypassing Google AdWords and traditional retailers. Customer service functions are being rapidly infiltrated and will be substantially taken over by bots. And so on.

My view is that A.I. will be massively disruptive, and that not enough effort is being put in by companies in the Asia Pacific region to assess how it could benefit or threaten them.

BUT IS A.I. ALL JUST HYPE?

There’s no doubt that a multitude of environmental factors are coming together to provide the stage for A.I. to have a dramatic impact. Equally, we aren’t yet in the age of true A.I., so there is a degree of hype.

Ian Bogost writes in The Atlantic (5) that A.I. has become a fashion for corporate strategy, noting that Bloomberg economist Michael McDonough has tracked a large increase in mentions of A.I. in earnings call transcripts over the past two years.

Despite all the publicity, R. L. Adams writes in Forbes (6) that most of the so-called A.I. in use today is simply software algorithms with very narrow uses, whereas true A.I. can learn on its own – e.g. Google’s DeepMind. “True” A.I. can improve on past iterations, getting smarter and more aware, allowing it to enhance its capabilities and its knowledge.

When asked what A.I. should mean, researcher Charles Isbell (5) said “making computers act like they do in the movies.” He suggests two features necessary before a system deserves the name A.I. Firstly, it must learn over time in response to changes in its environment. Secondly, what it learns to do should take humans some genuine effort to learn. In other words, a robot that replaces human workers on a factory line probably isn’t A.I., but a machine programmed to automate repetitive work.

FOLLOW THE MONEY

The stakes are very, very high. Billions of dollars are being thrown at this opportunity. Every major technology corporation has an A.I. programme, and there are more well-funded A.I. start-ups than at any time in history. It’s an arms race.

Logic says the money is flowing in because commercial outcomes are in sight, and investments in A.I. will at some point soon begin to pay off.

So, hyped A.I. may be, but the weight of money behind it cannot be ignored.

THE OPTIMISTS

R. L. Adams also theorises in Forbes (6) that quantum computing will render A.I. smarter, faster, more fluid and human-like. Life’s most complex problems will be solved: environmental degradation, ageing, disease, war, poverty, famine, and so on.

When do we think this will occur? The experts believe that A.I. will develop greater capability than humans in almost every field somewhere between 2020 and 2060. There are volumes of academic literature on this subject. Ray Kurzweil, a Director of Engineering at Google, renowned inventor and futurist, and author of “The Singularity Is Near”, predicts that the law of accelerating returns will drive an exponential increase in technologies spanning computing, genetics, nanotechnology, robotics and A.I. When the Singularity is reached – within 30 years – A.I. will become infinitely more powerful than the sum of all human intelligence.

It makes computers winning at Jeopardy, at Go and at poker look rudimentary, doesn’t it?

THE PESSIMISTS

Not everyone thinks that A.I. heralds a utopian era. Some of our greatest minds, including Hawking, Gates and Musk, believe that A.I. could deliver the opposite result. In which case, the dark pop culture depictions of HAL 9000 or Skynet would not necessarily be pure fantasy.

If unbridled, a centralized artificial intelligence would control what we know, what we think and how we act. All of this leads to the inevitable conclusion that the bad guys – whoever they may be – could one day influence us through algorithms and cause us to take actions for their own evil ends.

So concerned are some of the thought leaders on A.I. that they founded OpenAI, an institute devoted to studying A.I. and encouraging its use for the good of mankind.

THERE ARE NO RULES

There are no rules governing the development, propagation and control of A.I., nor are there likely to be rules any time soon. The world’s various legal systems and legislatures have little or no basic understanding of the problems and are not equipped to anticipate and regulate.

(If major nations cannot even agree whether climate change exists, how to deal with humanitarian crises, or even deal in a uniform way with the Sharing Economy, how are they going to put A.I. safeguards in place?)

I am convinced that by the time that Governments realise they need to legislate (assuming that legislation can actually do anything!), it will already be too late.

THE BIG NUDGE

Scientific American (7) postulates that our personal freedom of thought is at risk of being hacked by algorithms: instead of humans programming the machines, the machines are beginning to programme us through “persuasive computing”. The theory is that computers are beginning to use sophisticated manipulation technologies to silently influence and guide us through life, whether in the formation of our political views, in how we spend our money on goods and services, or in how we complete intricate processes at work. In most countries, this can occur because a dominant search engine or social media platform can be used to nudge us towards political, social and commercial outcomes.

If algorithms can predict our likes, our dislikes and behaviours, they can surely serve up news and products that match our profiles. And because we are being served up reinforcements of how we think and what we like, we start to think it’s all our own idea. In effect, we can now be manipulated by algorithms into doing things that we might not ordinarily do if not for the online prodding we receive every day.
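
As an illustration only, here is a minimal sketch of how a simple profile-matching recommender produces exactly this reinforcement loop. The articles, topic weights and user profile are hypothetical, and real platforms use far richer signals, but the feedback mechanism is the same:

```python
# A toy content recommender: rank items by similarity to what the user has
# already clicked on. The profile decides what is shown, and each click then
# nudges the profile further the same way. All data is invented for illustration.
import numpy as np

# Each article described by topic weights: [politics, sport, technology]
articles = {
    "election_oped":  np.array([0.9, 0.0, 0.1]),
    "football_recap": np.array([0.0, 1.0, 0.0]),
    "ai_feature":     np.array([0.1, 0.0, 0.9]),
}

# A user profile built from past clicks, heavily tilted towards politics
profile = np.array([0.8, 0.1, 0.1])

def similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# The feed: most similar first, so the user's existing leaning is reinforced
feed = sorted(articles, key=lambda name: similarity(profile, articles[name]), reverse=True)
print(feed)  # ['election_oped', 'ai_feature', 'football_recap']

# Simulate a click on the top item: the profile drifts further towards politics
profile = 0.9 * profile + 0.1 * articles[feed[0]]
```

Run over months rather than a few lines of toy code, that drift is the online prodding described above.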

Politics and the business of Government are an obvious and topical use case for persuasive computing. Political candidates and governments are engaging in social programming, influencing our behaviour via algorithms that make us think a certain way, vote a certain way, eat a certain way, exercise a certain way, and so on.

In Southeast Asia, a Government programme originally launched to protect citizens from terrorism is now being used to drive economic and immigration policy, the property market and school curricula. In China, the search engine Baidu, with the support of the military, runs the China Brain Project, applying learning algorithms to the data collected on Baidu’s users. Commentators speak of “Citizen Scores” determining ordinary people’s eligibility for the things that we take for granted, e.g. loans, jobs, or travel visas (7).

Business is now taking consumers down a similar, although subtler, path, subjecting us to a range of phenomena such as personalized pricing and credit checks. Today, credit scoring algorithms may infer our creditworthiness from our social media profiles rather than our financial records.

So, we are now under the surveillance not only of Governments, but also of the major tech companies. There is a dark side to consider. Personalized data feeds can be so influential and pervasive that they have the potential to destabilize relatively cohesive societies through a process of thought balkanization. There are obvious examples. This topic deserves a separate paper.

THE NEAR FUTURE

It’s clear that there are many unanswered, big questions at play here. I can’t tell you if the Singularity is near, or whether the optimists or pessimists are going to prevail. However, we can predict with relative certainty what is going to happen in business over the next 5-10 years.

This is big, it’s threatening, it’s coming fast, and your business is almost certainly going to be affected – for better, or for worse.

If you play a governance or management role, you’d better be studying the impact of A.I. on your business and its stakeholders. Surely, not to do so is to risk one day wondering “what just happened?” No doubt that’s what the former Directors of Kodak are still wondering.

In our new series of Digital Tsunami (4) blogs and keynotes, APD will look into the likely effects of A.I. on a range of industries. Stay tuned for the impact on marketing, on retail and financial services for starters.

APD delivers digital growth for its clients through integrating digital strategy, technology, performance marketing, customer retention and analytics across the Asia Pacific region. We believe that A.I. will be central to our own, and to our clients’, journeys.

Some sources:

1. https://www.theguardian.com/science/2016/oct/19/stephen-hawking-ai-best-or-worst-thing-for-humanity-cambridge
2. https://en.wikipedia.org/wiki/The_Singularity_Is_Near
3. http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
4. https://www.linkedin.com/pulse/directors-your-kodak-moment-roger-sharp
5. https://www.theatlantic.com/technology/archive/2017/03/what-is-artificial-intelligence/518547/
6. https://www.forbes.com/sites/robertadams/2017/01/10/10-powerful-examples-of-artificial-intelligence-in-use-today/#420de57b420d
7. https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/
