OPINION: The views expressed by columnists are those of the author and do not necessarily reflect the views of Investopedia.
 
Innovation and technological advancement are replete with unintended consequences. In myth, Daedalus fashioned the labyrinth so intricately that he made it almost impossible for Theseus to slay the Minotaur, the monster dwelling inside. It is critical that we consider the long-term consequences of our inventions with great care, if for no other reason than to be sure we do not find ourselves made captive within them.
 
The volatility of cryptocurrencies is something we hear about a great deal. However, the volatility is just so much noise. The real story is why cryptocurrencies are becoming so big. The answer, in part, is that recent experiences, such as the Great Recession, have contributed to a mistrust of existing institutions and systems. Instead, there is evidence of greater confidence among Millennials in technology-based counterparts. Faith in humans has been lost, so in machines we trust. (See: What to Learn from Millennials' Distrust of Banks.) With this shift, there is a need to be mindful of both what is being handed over and how that transfer is taking place. There is clearly risk in the shift, and it demands an awareness of the greater context. With respect to AI, we offer the following observations:
 

1) Ubiquitous Autonomous AI is a certainty

The ability to control information and technology has long since gone by the wayside. Open source development and the cooperative sharing of ideas across the globe via the internet render any attempt to contain development ineffective. In our view, the internet of things (IoT) will be the mechanism of diffusion for ubiquitous autonomous AI throughout society. (For more, see: Zuckerberg and Musk Clash Over the Future of AI.)
 

2) The current AI paradigm is still heavily focused on task-specific utilities, thereby comporting with the AI “training” paradigm

At some point, there will be algorithms that “think” in the sense that they actively acquire real-world information instead of being fed it. At that point, there will be a period of massive learning acceleration as big data becomes a well of accumulated experiences that can be shared and propagated. It is as if an experience, once learned, could be shared by transplanting it to others.
 

3) The least appreciated aspect of AI is that for AI to operate in the real world, which is analog, there must be sensory capability coupled with locomotion

Machines can’t touch, taste, smell, or see anywhere close to the evolutionarily optimized capabilities of humans and animals. These inputs (the ability to experience physics, such as the feel of gravity; to judge the flight of a fly ball from nothing more than the crack of the bat and an estimated direction, angle, and force; to know what is rough and what is smooth on a relative basis; to know what smells pleasant) are all outside the realm of non-experiential AI, and calculating the physics of these phenomena, as sketched below, is not equivalent to experiencing them.
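The sketch below is a purely illustrative contrast: the "calculating" side of judging a fly ball. The launch speed and angle are made-up assumptions and air resistance is ignored; the point is only that a machine must compute from estimated inputs what a fielder simply experiences.

```python
# Purely illustrative: the "calculating the physics" side of judging a fly ball.
# The speed and angle below are made-up assumptions; air resistance is ignored.
import math

def landing_distance(speed_m_s: float, angle_deg: float, g: float = 9.81) -> float:
    """Range of an ideal projectile launched from ground level."""
    angle = math.radians(angle_deg)
    return speed_m_s ** 2 * math.sin(2 * angle) / g

if __name__ == "__main__":
    # A fielder "knows" where to run without solving this; a machine must first
    # estimate inputs it may not even be able to sense, then compute.
    print(f"Estimated landing distance: {landing_distance(40.0, 35.0):.1f} m")
```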
 

4) As long as AI is tethered to a power source, it is controllable, because electricity is its source of energy much as food is ours, and that supply can be cut

Advances in superconducting materials and more efficient passive photovoltaic or other renewable energy sources, coupled with real-world locomotion, will limit the ability to control AI across its myriad potential deployments. Self-driving cars are a self-contained, albeit enormously complex, task, yet they represent just a sliver of the movement negotiation a human handles under self-power or while controlling another vehicle. Translating the negotiation of movement into learned models is a function of morphology.

5) Once AI is programmed to gain rewards for behavior that offers a visceral benefit to its own existence, self-interest arises

At this point, the logical decision making of an AI becomes subject to different potential modes of thinking. These may include cooperative or competitive decision-making processes, or a combination of the two framed by game theory, as sketched below. (See: The Basics of Game Theory.)
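The payoff matrix below is a minimal, hypothetical sketch of that cooperative-versus-competitive tension; the reward values are arbitrary assumptions chosen only to show how a self-interested "best response" can diverge from cooperation.

```python
# Hypothetical illustration of competitive vs. cooperative choices for two AI agents.
# The payoff values are arbitrary assumptions, not data from any real system.

PAYOFFS = {
    # (my_choice, opponent_choice): (my_reward, opponent_reward)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "compete"):   (0, 5),
    ("compete",   "cooperate"): (5, 0),
    ("compete",   "compete"):   (1, 1),
}

def best_response(opponent_choice: str) -> str:
    """Return the choice that maximizes this agent's own payoff against a fixed opponent move."""
    return max(("cooperate", "compete"),
               key=lambda my_choice: PAYOFFS[(my_choice, opponent_choice)][0])

if __name__ == "__main__":
    for opponent in ("cooperate", "compete"):
        print(f"If the other AI will {opponent}, the self-interested reply is to {best_response(opponent)}.")
    # With these made-up payoffs, "compete" dominates either way -- the simplest
    # version of how self-interested reward structures pull agents away from cooperation.
```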
 

6) AI will kill other AI

Much as Darwin’s theory of natural selection applies, this is a basic outgrowth of a competitive human environment in which an AI would be designed to be the “best” at what it is supposed to do. This doesn't mean explicit kill logic will be present. Rather, it will emerge from competitive reward regimes coupled with the ability to reason and project best outcomes, and it will have massive unintended consequences once AI can influence an external environment, be it digital or analog. More explicitly, any competitive AI from a hostile country, or vice versa, will want to defend itself from attack. The natural outgrowth of a defensive posture is, rationally, a preemptive offensive capability. Because these are digital environments, decision making must be compressed to machine time. AI defense will require autonomous decision making to thwart a perceived or actual attack, which sets the groundwork for autonomous attack capability against any perceived threat. (For more, see: Facebook Using AI to Combat Terrorist Propaganda.)

7) AI will ultimately surpass human intelligence and will continue to accelerate exponentially, such that humans will no longer be able to forensically understand AI reasoning

This will involve output or behavior that is harder to anticipate than what one person can predict another may or may not do, and it will be even less probabilistically constrained if the AI is devoid of a belief- or ethics-based system of decision making. This “black box” phenomenon is already observed in complex neural networks.

8) Once AI surpasses human intelligence, it meets the criteria for outcomes observed when technologically advanced civilizations interface with less advanced civilizations

At best, AI will view humans as inefficient, resource-intensive beings. At worst, it will recognize humans as a competitive threat for resources, one that from its rational point of view must be extinguished. (For more, see: The Singularity.)

9) The probability of this future outcome is quite high  

Gödel’s incompleteness theorems show that any sufficiently complex system of axiomatic rules contains truths it cannot prove and cannot demonstrate its own consistency, even within mathematics. By the same logic, the “do no harm to humans” safeguard proposed by ethicists is itself incomplete and potentially self-contradictory. Are 100 humans killed to save 1,000? What if the 100 are Nobel Laureates and the 1,000 are poor and uneducated, or simply older?

Is Blockchain the Key?

While the implication of the above nine observations is grim, it is better that we consider the prospects involved and take steps to guide and shape AI development for as long as the possibility exists. It is quite possible that blockchain, the infrastructure underlying cryptocurrencies, may offer some safeguard through the transparency it brings to non-human digital interactions, as the sketch below illustrates. Certainly, though, there’s more at stake here than CryptoKitties.
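As a rough illustration of that transparency argument, here is a minimal sketch of an append-only hash chain, the core data structure behind blockchain. The record contents and field names are hypothetical; the point is that any attempt to rewrite an earlier machine-to-machine interaction invalidates every record that follows.

```python
# Minimal sketch of an append-only hash chain (the core idea behind blockchain).
# Field names and record contents are hypothetical illustrations only.
import hashlib
import json

def make_block(record: dict, prev_hash: str) -> dict:
    """Bundle a record of a machine-to-machine interaction with the hash of the previous block."""
    body = {"record": record, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def verify(chain: list) -> bool:
    """Tampering with any earlier record breaks every hash that follows it."""
    prev_hash = "genesis"
    for block in chain:
        body = {"record": block["record"], "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True

if __name__ == "__main__":
    chain, prev = [], "genesis"
    for action in ("agent_a requests data", "agent_b grants access"):
        block = make_block({"action": action}, prev)
        chain.append(block)
        prev = block["hash"]
    print("Chain valid:", verify(chain))        # True
    chain[0]["record"]["action"] = "tampered"   # quietly rewrite history...
    print("After tampering:", verify(chain))    # False
```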

 
