1) Ubiquitous Autonomous AI is a certainty
2) The current AI paradigm is still heavily focused on task specific utilities, thereby comporting with the AI “training” paradigm
3) The least appreciated aspect of AI is that for AI to operate in the real world, which is analog, there must be sensory capability with locomotion
4) As long as they are tethered by power, AI is controllable because cutting power is their source of energy much like food is to us
Advances in superconducting materials and more efficient passive photovoltaic or other renewable energy sources coupled with real world locomotion will limit the ability to control AI in its myriad potential deployments. Self-driving cars are a self-contained, albeit enormously complex task, but represent just a sliver of the overall movement negotiation handled by a human under self-power or controlling another vehicle. The negotiation of movement into learned models is a function of morphology.
5) Once AI is programed to gain rewards for behavior that offer a visceral benefit to the AI’s own existence, self-interest arises
6) AI will kill other AI
Much as Darwin’s theory of natural selection applies, this is a basic outgrowth of a competitive human environment where an AI would be designed to be the “best” at what is supposed to do. This doesn't mean explicit kill logic will be present. Rather, it will emerge from competitive reward regimes coupled with the ability to reason and project best outcomes. It will have massive unintended consequences once AI can influence an external environment,be it digital or analog. More explicitly, any competitive AI from a hostile country, or vice versa, will want to defend itself from attack. The natural outgrowth of a defensive posture is rationally a preemptive offensive capability. As these are digital environments, decision making must be reduced to machine decision time. AI defense will required autonomous decision making to thwart a perceived or actual attack. This sets the groundwork for autonomous attack capability against any perceived threat. (For more see: Facebook Using AI to Combat Terrorist Propaganda)
7) AI will ultimately surpass human intelligence and will continue to exponentially accelerate such that humans will no longer be able to forensically understand AI reasoning
This will involved anticipating output or behavior beyond what a person can predict what another may or may not do, and it will be less probabilistically constrained if the AI is devoid of a belief or ethics-based system of decision making. This “black box” phenomena is already observed in complex neural networks.
8) Once AI surpasses human intelligence, it meets the criteria for outcomes observed when technologically advanced civilizations interface with less advanced civilizations
At best, AI will view humans as inefficient, resource-intensive beings. At worst, they will recognize humans as a competitive threat for resources that from their rational point of view must be extinguished. (For more, see: The Singularity)
9) The probability of this future outcome is quite high
Godel’s Incompleteness Theorem proves that for any sufficiently complex system of axiomatic rules, the system cannot be self-consistent, even within mathematics. The ability to program a “do no harm to humans safeguard” proposed by ethicists is itself incomplete and self-contradictory. Are 100 humans killed to save 1,000? What if the 100 are Nobel Laureates and the 1,000 are poor uneducated people, or simply older people?