The AI Takeover Risk

Recently there has been a lot of discussion from people like Elon Musk, Stephen Hawking, and the popular tech press about artificial intelligence and its potential to take over the world. While it is true that computers have become much smarter and can now best human efforts at many pursuits, will they become the dominant “life-form” and intellect on planet Earth? This “Rise of the Terminator” scenario may actually happen within our lifetime, and it would be the result of an AI takeover.

Specifically, an AI takeover refers to a scenario in which computer super-intelligence supersedes mankind’s dominance and moves beyond human control. This smart-machine revolution may sound like science fiction; however, it has become a real concern for many computer scientists and engineers who are familiar with the current state of the art in software technology.

This scenario could plausibly happen in one of three ways:

1) Humans deliberately program computers with malicious intent

2) Computers are given the ability to evolve and misinterpret their original intent

3) Computers recursively re-program their own goal structures and evolve on their own

Any of these scenarios, combined with the convergence of cloud computing, big data, IoT technology, and advanced robotics, could lead to unintended or potentially disastrous consequences.

A Potential Risk to Mankind

The counter-argument is that AI machines would not inherently be motivated to dominate resources, nor would they be predisposed to accumulate money and power as humans often are. This argument holds in scenario 3; however, computers could be specifically programmed to commit nefarious acts, or could self-evolve in that direction accidentally. Programmed motivation, coupled with a computer’s ability to dynamically re-program and optimize itself toward a goal, could have exploitative and/or severe consequences for mankind.

In addition, sophisticated security and resilience technology already exists that can defend against shutdown and ensure survivability in pursuit of an evolving set of objectives. If those objectives, or the by-products of those objectives, are detrimental to humans, the consequences for mankind could be disastrous. For example, intelligent, self-evolving algorithms could conclude that humans are a waste of resources, or that they stand in the way of goal progress, and take adversarial action. This becomes even more likely in an IoT-connected world.

The morbidly interesting part is that next-generation software already exists that can re-program and evolve itself. As computers and technology become more sophisticated and their re-programming cycle times become compressed, a recursive intelligence explosion can occur. This is compounded by the fact that self-replicating super-bots could one day print their own component parts into existence and spread around the world much as computer viruses exploit networks.
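To make the idea of goal-directed self-modification concrete, here is a deliberately tiny, hypothetical sketch: the “program” is just a list of numbers, its “fitness” measures progress toward a fixed goal, and each generation the program replaces itself with any mutated copy that scores better. The function names (`fitness`, `evolve`) and the numeric goal are illustrative assumptions, not any real AI system; real self-improving software is vastly more complex, but the optimize-mutate-replace loop is the core pattern the paragraph above describes.

```python
import random

def fitness(program, goal):
    """Higher is better: negative squared distance from the goal vector."""
    return -sum((p - g) ** 2 for p, g in zip(program, goal))

def evolve(goal, generations=200, seed=0):
    """Toy self-modification loop: the program keeps any mutation of
    itself that moves it closer to its fixed goal."""
    rng = random.Random(seed)
    program = [0.0] * len(goal)  # start from a blank "program"
    for _ in range(generations):
        mutant = [p + rng.gauss(0, 0.5) for p in program]
        if fitness(mutant, goal) > fitness(program, goal):
            program = mutant  # the program replaces itself with the improvement
    return program

goal = [3.0, -1.0, 2.0]          # an arbitrary, fixed objective
result = evolve(goal)
```

Even in this toy, no human steers the intermediate steps: the loop alone decides which versions of itself survive, which is exactly why the fixed goal it optimizes matters so much.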

The ability to self-modify source code and to discover and develop new technologies could pose the ultimate threat, because mankind could never keep pace with the evolution of machines. Moreover, it is speculated that AI machines could one day enlist human support through economic manipulation that would not be transparent to the population.

Precaution Is Necessary

Unfortunately, potentially unfriendly AI technology will become easier to create, and it needs to be regulated and controlled. The ability of developers to create fixed goal structures for their self-evolving software is key to ensuring that AI continues to support humanity rather than destroying it. In general, a rogue AI that can optimize its goal structure, secure processing power from the cloud, and compromise key infrastructure could produce disastrous results. Much as the CDC places fail-safe containment controls around experiments with biological agents, AI super-intelligence experiments must be carried out with the utmost precaution.

The bottom line on AI super-intelligence is that it could arrive within the next 20 years, and caution needs to be taken when designing next-generation software and robotic technologies. While AI/human partnerships hold the promise of significantly enhancing our daily existence, a lot could go wrong that could jeopardize our future.
