AI Endgame

This is partly a warning and partly my simple, developing view on Artificial Intelligence (AI). It is one of those transformative technologies that has now progressed to the point where I don’t think it’s possible to put the genie back in the bottle. Perhaps it’s timely that a movie such as Oppenheimer, detailing the Manhattan Project and the creation of the first nuclear weapons, has recently been released. It should serve as a cautionary tale for all those blindly espousing the virtues of an AI world, because AI has the potential to cause great harm and negatively change society as we know it.

I have previously written about the exciting investment opportunities that AI brings to the world. However, I am also convinced that, from an existential perspective, AI is the single biggest threat the world faces today. More serious than climate change, more serious than nuclear war. Those matters are within our control as humans; AI and its progress beyond a certain point in the near future are not.

The world is at a point similar to where it was in WWII when the Manhattan Project commenced. Back then, it was the possibility that Germany might already have access to nuclear weapons, or might soon be able to develop them, that led the USA to actually create them.

Ironically, the best strategy for defending the world against those potential weapons led to the creation of the very weapons of mass destruction, and that cannot be reversed. But what if the USA hadn’t created them? Quite possibly, further down the track, Germany would have. What happens then? That is the alternate argument for what happens if we don’t press ahead hard with AI.

If I sit back, forget the obvious economic and investment opportunities, and consider the future of the world (probably something worth thinking about), I am certain that the advancement of AI will lead to catastrophic consequences for humans. The prospect of a dystopian future might sound like science fiction until it happens; then it’s simply called science. It doesn’t keep me up at night, because it’s not something any of us can control. The cat is out of the bag.

But unlike many who see the potential for catastrophe, I do not necessarily believe we should overregulate or slow the development of AI. The reason is that other countries will not slow down in their development of AI technology, similar to the rationale for the US racing to create nuclear weapons before the Germans could. Whether it’s Russia or China, just as with nuclear weapons, the bigger threat is the risk that enemies advance more quickly than we do.

AI is more dangerous in an existential sense for the human race simply because we risk not being able to control it once it becomes more advanced. This is self-learning technology, so it will advance exponentially to the human level, and then beyond it, to the point where humans become redundant. That day will arrive at some point in the future. Whether it’s 10, 50 or 100 years’ time, I don’t know, but it will certainly arrive. When it does, do we want that technology to be a Western-influenced AI or one that’s been designed and developed in Russia or China?

If AI, once freed from human control, can cause bad consequences for us, then it stands to reason that it can also create good. So, if we apply the same game theory to this as we did to the development of nuclear weapons, the answer is to move as quickly as possible so that, whatever the result, it is a consequence we control as much as possible before others do. It’s the best available decision, given that enemy nations may otherwise create the AI that controls the world.
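To make that game theory concrete, here is a minimal sketch of the arms-race logic as a two-player payoff matrix. The payoff numbers are entirely hypothetical, chosen only to illustrate the structure of the argument: whichever move a rival makes, developing is the better response, even though mutual restraint would produce a better joint outcome.

```python
# A stylised sketch of the arms-race logic, with hypothetical payoffs.
# Two players ("us", "rival") each choose to develop AI or restrain.
# Payoffs are (us, rival); higher is better.
payoffs = {
    ("develop", "develop"):   (1, 1),  # both race: risky, but parity
    ("develop", "restrain"):  (3, 0),  # we lead: we shape the outcome
    ("restrain", "develop"):  (0, 3),  # rival leads: worst case for us
    ("restrain", "restrain"): (2, 2),  # mutual restraint: better jointly, but unstable
}

def best_response(rival_choice: str) -> str:
    """Our best move, holding the rival's choice fixed."""
    return max(["develop", "restrain"],
               key=lambda ours: payoffs[(ours, rival_choice)][0])

for rival in ("develop", "restrain"):
    print(f"If the rival chooses {rival!r}, our best response is {best_response(rival)!r}")
# Prints 'develop' in both cases: developing is the dominant strategy.
```

This is the classic prisoner’s dilemma structure, and it is why calls for unilateral restraint are so hard to sustain.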

It’s important to note that while mutually assured destruction is the theory that has kept the world’s nuclear powers from starting a nuclear war for the past 70-plus years, it will likely not apply to the world of AI. The strategy must be to move quickly, with government investment in AI technology that can counter the rising danger of AI from enemy nations while at the same time developing AI for the benefit of the world. It is not the best solution; the best solution would be to stop it altogether, but that is unrealistic. It may be counter-intuitive, but I believe the best practical solution is to move as quickly as possible and build the AI resources that can combat those who would do us harm.

General Advice Disclaimer: This information is of a general nature only and may not be relevant to your particular circumstances. The circumstances of each investor are different, and you should seek advice from an investment adviser who can consider if the strategies and products are right for you. Historical performance is often not a reliable indicator of future performance. You should not rely solely on historical performance to make investment decisions.