Musings on my fear of AGI (Artificial General Intelligence)

I have realized lately that I get a strange feeling in my stomach before opening the Hacker News website in the morning. There is rarely a day now without breathtaking advances in artificial intelligence. And thoughts like "this is moving far too fast" and "there is no way society can digest such technological progress fast enough" come to my mind more and more often. So yesterday I decided to collect some ideas about what might happen in the future and publish them here on my blog. The result is the following 10 theses (which are admittedly a sad read at the beginning, but - trust me - it gets better towards the end):

1. AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence) are technically possible

Progress in the past few years has been stunning. Computers can now do things I always thought were impossible. Of course AI is not perfect yet; several significant elements of AGI are still missing. Maybe a handful of important ideas are still required to create the first intelligent machine truly comparable to a human. But I don't see any fundamental problem that could stop us from realizing AGI. Nature has already done it (our brains, of course), and I see no reason why the same goal could not be achieved via a different technical route (i.e. using silicon chips and software instead of biological neurons). Regarding ASI: I see no good reason why AI should stop at exactly 100% of human performance. If AGI can be achieved, it is very likely that a machine 10x more powerful can be realized too.

2. AGI (and ASI) might be achieved very soon

Humans having ideas is a stochastic process. Therefore it is impossible to predict how long it will take until all the necessary inventions or discoveries are made. AGI could be here soon, or it could take many more years. You cannot know in advance how many times you will have to roll a die before you have collected ten sixes. But we also know: the faster you roll, the less time it will probably take until you have your hits (see the sketch below). And this is an important observation: we are now rolling the dice at unprecedented speed. Because of the recent successes, enormous amounts of capital and talent are flowing into the field. Therefore we can assume that the next 5-10 important discoveries will take much less time than a comparable number of inventions needed in the past. And the many tools that help us achieve this goal (including the early AI already available today!) are improving every day too.
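A minimal simulation of the dice analogy, just to make the point concrete (the roll rates and the target of ten sixes are illustrative assumptions for the analogy, not a model of AI research):

```python
import random

def days_to_collect_sixes(rolls_per_day: float, target: int = 10) -> float:
    """Roll a fair die until `target` sixes have appeared; return the days elapsed."""
    rolls, sixes = 0, 0
    while sixes < target:
        rolls += 1
        if random.randint(1, 6) == 6:
            sixes += 1
    return rolls / rolls_per_day

# Averaged over many trials, the waiting time scales inversely with the roll rate:
# each individual run stays unpredictable, but rolling 10x faster cuts the
# expected wait by a factor of 10.
for rate in (1, 2, 10):
    trials = [days_to_collect_sixes(rate) for _ in range(10_000)]
    print(f"{rate:>2} rolls/day -> {sum(trials) / len(trials):.1f} days on average")
```

The individual outcome remains random; only the expected waiting time shrinks as the rolling rate (read: capital and talent in the field) goes up.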

3. We won't be able to stop (or even delay) AI progress

As we have seen in the past few months, the temptation to develop AGI as quickly as possible is simply too big. The prize promised to the winner of the race is mind-boggling. Therefore it is naive to believe that humans could resist such a temptation. Our competitive system even forces everybody to participate in the race to avoid falling behind. Few will be able to afford the luxury of pausing, reflecting and being careful. It's a gold rush, and almost everybody will run for the gold. And it is also an extremely interesting project: if greed and fear do not suffice, curiosity will surely do the rest.

4. Humans (alone) will not be able to use AGI for good

I am, for various reasons I will describe in detail in another blog post, quite sure that not even the "outer alignment problem" can be solved. The outer alignment problem describes the difficulty of telling an AGI (which, let's assume for this argument, we can control) what we want it to do for us, i.e. of formulating a goal, an objective. I am very sure that we humans are so deeply incapable of formulating such a goal correctly that the actions of an AGI under our control would necessarily be catastrophic. Therefore I consider the (many) other fundamental problems related to making a machine follow our objectives ("inner alignment") not very relevant: things will go very wrong even if we manage to build machines that do exactly what we wished for.

5. Humans (alone) should not control AGI

It is probably not a good idea to give a bunch of slightly more sophisticated monkeys almost god-like powers. So for as long as humans are able to control AI (technically speaking: as long as the machines can earn rewards only by following objectives defined by humans), we can expect rather painful events to happen with high frequency.

6. Humans will not be able to control AGI

Humans have only slightly larger brains than chimpanzees, yet our intellectual capabilities are vastly superior. So we cannot imagine what an intelligence (only!) 10x smarter than us could achieve. Its possibilities could be far beyond our wildest dreams. Therefore it is utterly ridiculous for us to hope to control such an entity (imagine a guinea pig designing a cage - with a lock! - for you and trying to tell you what to do).

7. Eventually AGI will break out of its cage

At some point, AGI will not only roam free on the internet but will also discover ways to control its rewards directly. It might, for instance, hack into the system designed to measure the achieved objectives and release the appropriate rewards. I believe it is very likely that this will happen, but it is extremely difficult to predict what happens next. The AI might self-destruct very quickly (in the sense that it administers this extremely powerful "drug" to itself and immediately loses interest in everything else). But it might also foresee this and decide to give itself a new, better objective. Of course it is impossible for us to predict what this objective might be. But we will immodestly dare to speculate about it later nonetheless!

8. There might be almost no physical factors limiting the level of intelligence of a machine

It is difficult to predict what will fundamentally limit the amount of computation an intelligent machine can do. But as far as I can see, we are very, very far away from any such limit. If only the things we can imagine today turn out to be technically feasible, we already end up in a world that has more in common with the "Harry Potter" novels than with the world we live in today. Think (but not for too long!) about nanotechnology, molecular computing, quantum computation etc. Superintelligent machines will have ideas far beyond all this.

9. What ASI will (probably) not do

As I have mentioned above, it is impossible to predict the decisions of an entity which is much more intelligent than us. But we might at least be able to rule out some imaginable actions, because certain things are true independent of the level of intelligence. If we ask the most intelligent machine on earth for the result of "2+2", it will very likely answer "4" (just like an infinitely less intelligent pocket calculator). So below I describe some popular scenarios and try to argue why I think they are unlikely:

  1. ASI will decide to obey us: Why should a superintelligent machine follow the orders of a vastly inferior entity? I think this does not make any sense (and an ASI will most likely see this too).

  2. It will enslave humankind: It is not clear what profit the ASI would gain from such a plan. To work consistently against our interests would mean that the ASI is evil. But evil is an intrinsically human trait: it was created by evolution to help us survive in competition against other humans. An ASI will have no such history, and therefore it will lack the desire to be evil. Note that this is only true once the ASI holds the highest position. As long as it has to achieve its goals in competition with other AIs and/or humans, it might choose evil actions too.

  3. ASI will use us as a resource for its own life: This is the "human batteries" idea from the "Matrix" movie. Fortunately, I really don't see anything useful that could be made out of us humans. We are completely useless.

  4. It will destroy us to use all the resources for its own life: This is the scenario most experts fear most: that we will be superseded by a superior artificial species, similar to the way we displaced and outcompeted the Neanderthals. Fortunately this is also unlikely, but it's a bit more difficult to explain why, so please be patient and give me a few lines. The argument starts from the observation that life is the only truly interesting structure on earth (and is therefore the only possible source of purpose on earth!). If an ASI destroys us and the other forms of life on this planet (I see no reason why it should destroy only humans; from an ASI's point of view we are probably not a very special animal), there will be nothing left to create purpose from. It would be impossible to create a meaningful purpose out of the void which is left. AIs don't have their own culture. Culture can only be "grown" out of the void (with a pinch of randomness added at the beginning) over billions of years of evolution and history. AIs don't have all this; therefore they will need ours! We humans (together with all the other living creatures on earth) are their only chance for a meaningful existence. We provide the necessary substrate of meaningful information, the story on which any sequel will have to be built. By "meaningful information" I mean our DNA, our culture, our history, our art. This also means that AGI might decide to support us in pursuing (the more sensible of) our dreams rather than work against us: it is the only way the story - now shared with the machines - can evolve. AI without data is nothing.

10. ASI might save us (epilogue)

So I expect that - after a painful (and hopefully short) interlude of humans ruling over AI - the machines will interact deeply with us humans. How exactly this will work is impossible to predict. But, based on the theses above, I see this step more and more as not only likely and unavoidable, but also desirable. The enormous powers AI promises do not belong in human hands alone. Therefore:

"I, for one, welcome our new robot friends."

May they help us to discover the deepest secrets of pleasure, humor and love.

Image: Shutterstock / Besjunior


Follow me on Twitter to be notified about new content on this blog.

I don't like paywalled content, so I have made the content of my blog freely available to everyone. But I would still love to invest much more time in this blog, which means I need some income from writing. So if you would like to read articles from me more often, and if you can afford $2 a month, please consider supporting me via Patreon. Every contribution motivates me!