Is AI more dangerous than nukes?

In her speech at the Inspire & Dine evening of the Centre for Digital Technology and Management (CDTM) on May 29th, 2018, Dr. Bozesan addressed an exciting question: Is AI more dangerous than nukes?

In her view, the question is not whether “AI is more dangerous than nukes.” Nor is it whether an autonomous car is more dangerous than a horse-drawn carriage, or a metal knife more dangerous than fire. We must go well beyond these superficial discussions, because technological development is part of being human. We must go much deeper and truly understand the bigger picture, namely EVOLUTION, and not only evolution but its underlying feature: EXPONENTIAL GROWTH.

Dr. Bozesan used the Law of Accelerated Returns[1] to explain the exponential growth of evolution.

She feels that, no matter how educated we are, the human mind cannot truly comprehend, let alone address, exponential growth, because we are linear thinkers by nature. Yet in the light of AI progress, we must take control and imbue that AI with wisdom and higher levels of consciousness, not just intelligence, or we will be eliminated as a species.

The Law of Accelerated Returns refers to the returns of the evolutionary process, such as its speed, cost-effectiveness, or power, which increase exponentially over time. Referring to Integral Theory by Ken Wilber, she insisted that evolution applies positive feedback: the more capable results of a previous stage of development are used to create the next stage, and the next stage transcends and includes the previous one. This process yields exponential growth over time, such that the rate of exponential growth itself grows exponentially, which is even harder for humans to grasp. Both biological and technological evolution are such processes, and technological evolution has evolved to support our biological evolution. And yes, the next paradigm shift is already occurring: from purely biological thinking to a hybrid construct that combines biological and non-biological thinking through smartphones, computers, nanobots, and so on. This is all here already, and it is inevitable. We are indeed moving from bio-humanism to neuro-humanism to post-humanism.

Within the context of the question at hand, Dr. Bozesan feels that we must take control of what a future AI is and is not allowed to do. Because there are different levels of consciousness, we MUST take charge to ensure that not a Hitler but a Gandhi makes the decisions about the fate of humanity. Given the different stages of human evolution, we have a choice: either the future AI is developed and controlled by an ego-centric or ethno-centric mindset that creates separation and nationalistic tendencies, as we have seen in the last USA election, or we make sure that it implements a world-centric or even kosmos-centric level of consciousness for collective benefit. It is up to us.

In summary, it is her conviction that we risk being eliminated not by nukes, but by an intelligence without wisdom, which we may collectively create if we do not take our leadership responsibility seriously. Herein lies our chance to create a future based either on prosperity and abundance for all or on peril and extinction. Dr. Bozesan is convinced that AI is no more dangerous than nukes, but only if its development is guided by later stages of evolution and a wise human consciousness. We still hold the reins of our future in our hands, but not for much longer.