Even many AI experts claim that fears of superintelligent machines are unfounded. Why should an AI want to harm or even kill humans? Isn't that extremely unlikely unless a machine is explicitly instructed to do so by humans? There are many arguments against this point of view, but most of them are abstract and some are not easy to understand. There is, however, a simple metaphor borrowed from nature which, I believe, helps to understand the problem:
Large baleen whales (like the blue whale, up to 31 m long) are, at least from a human perspective, very peaceful creatures. They are not interested in humans and feed only on tiny creatures like shrimp or small fish. They have absolutely no intention of harming us. Nonetheless, swimming close to large whales can be very dangerous. They may lack awareness of humans near them (or simply not care much about the tiny human nuisance), and getting hit by the tail or a flipper can cause serious injury or even death. Blue whales can also produce sounds at a level of 180 dB, which can quickly lead to hearing loss in humans.
We should therefore take warnings about the dangers of artificial superintelligence seriously. Such a system could cause enormous damage even if it was never given an obviously dangerous goal.
Image: made by DALL-E
Follow me on X to be notified about new content on this blog.
