Is AI like any other human-created tool, or is it something that could go beyond our control? Many of our tools have both good and bad impacts across a range of possible outcomes. The big difference is control. Take nuclear weapons. They have the potential for incredible harm. But they have remained under human control and, with two exceptions, we have chosen not to use them in war.
Will we be able to retain control of AI? That is the fundamental question here. If AI advances to the point where it can improve itself without our input, and it expands its ability to take actions in the "real" world, the answer is probably no.
This is an interesting examination of how this could happen, and how quickly.