Ah, so many good topics in this thread already. I'll add some of my thoughts, but I have to preface this by saying the post isn't structured as well as I would have liked.
I agree with the sentiment that technological progress seems to be generally exponential rather than linear, and I see no obvious reason why that shouldn't continue. Humans might well be near the peak of general intelligence on Earth right now, and I assume that "lesser" creatures, like earthworms, cannot even conceive of our level of thought. Similarly, we may just be another point on a continuous scale of intelligence, and there are likely many levels of "superintelligence" above ours that we can't conceive of either. But we could plausibly create a machine that creates a machine, and so on, until those levels are reached, and it could take less time than we think.
Electronic circuits and the software that runs on them aren't yet as complex as the human brain, but electronic circuits can operate thousands or even millions of times faster than neurons, which suggests that if they do reach that level of complexity, they will outpace our ability to think by many orders of magnitude. This is already readily apparent when you consider how fast a simple laptop (or even a mobile phone now) can perform the billions of calculations per second required to render a 3D game or other complex program, nearly all without a single error. That is an example of specific intelligence, which is largely all that computers are capable of today. But in the specific intelligences computers already possess, such as driving cars autonomously, they can outperform any individual human, or even large groups of humans. Turning a powerful AI loose on a problem for a week might be the equivalent of 10,000 years of human research. It may well be some sort of evolution, from a machine possessing multiple specific intelligences, that makes the leap to general intelligence, much as brains likely evolved from specific intelligences in the past. There is a phrase I've heard somewhere: "When does a general AI become dangerous? The moment you switch it on."
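To sanity-check that "a week equals millennia" intuition with some rough arithmetic (these are my own illustrative figures, not numbers from any study): neurons signal on roughly millisecond timescales, while transistors switch at gigahertz rates, so a raw speed ratio of around a million to one is plausible, and that ratio alone puts a week of machine-speed thought in the same ballpark as the 10,000-year figure:

```python
# Back-of-the-envelope arithmetic for the speed gap; illustrative figures only.
neuron_rate_hz = 1e3        # neurons signal on roughly millisecond timescales (~kHz)
transistor_rate_hz = 1e9    # transistor switching / clock rates are in the GHz range

speedup = transistor_rate_hz / neuron_rate_hz   # ~1,000,000x raw speed advantage

# At that raw speedup, one week of machine "thought" corresponds to roughly
# this many years of human-speed thinking:
seconds_per_week = 7 * 24 * 3600
seconds_per_year = 365.25 * 24 * 3600
equivalent_years = speedup * seconds_per_week / seconds_per_year

print(f"raw speedup: {speedup:,.0f}x")
print(f"one week of machine thought is about {equivalent_years:,.0f} human-years")
# Prints roughly 19,000 human-years, so the figure above is the right order of magnitude.
```

Of course a raw clock-rate ratio is a crude proxy (brains are massively parallel, and today's circuits aren't "thinking" at all in that sense), but it gives a feel for the scale involved.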
Will a hyperintelligence be "friendly" to its creators? Maybe. It could take on a fond, "preservationist" relationship with us, sort of like what we attempt with National Parks, but there's really no telling. A disquieting recent experiment by Google's DeepMind team seems to indicate that when competition (rather than cooperation) with other individuals yields the best results in a game, more complex computer models develop more aggressive strategies (http://www.wired.co.uk/article/artifici ... t-deepmind). Perhaps the best tactic is to make sure you are useful to, and not in competition with, future robot overlords, but what qualities that will require of us is also hard to say. Being useful to humans has proven a fairly "successful" genetic strategy in the past: we will make sure your kind stays populous (cattle), because we like to eat beef. But being a nuisance to humans has been equally disastrous.
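To make that competition-versus-cooperation point a bit more concrete, here is a toy calculation of my own (a deliberate oversimplification, not the actual DeepMind "Gathering" environment or its learning setup): two agents gather a shared resource, and each can spend time "zapping" the other to knock it out of the game. When the resource is plentiful, fighting only wastes collecting time; when it is scarce, fighting becomes the better strategy.

```python
# Toy model of the competition/cooperation point above. Purely illustrative,
# not the DeepMind experiment itself or any learned policy.

def expected_apples(apples_per_step, steps=100, aggressive=False, zap_cost=10):
    """Apples collected by one of two agents; each agent can pick at most
    one apple per step.

    Peaceful:   both agents collect side by side and split what appears.
    Aggressive: spend `zap_cost` steps knocking the rival out of the game,
                then collect alone for the remaining steps.
    """
    if not aggressive:
        return min(1.0, apples_per_step / 2) * steps
    return min(1.0, apples_per_step) * (steps - zap_cost)

for scenario, rate in [("abundant", 2.0), ("scarce", 0.2)]:
    peaceful = expected_apples(rate, aggressive=False)
    aggro = expected_apples(rate, aggressive=True)
    winner = "aggressive" if aggro > peaceful else "peaceful"
    print(f"{scenario:8s}: peaceful={peaceful:5.1f}  aggressive={aggro:5.1f}  ({winner} pays better)")
```

In this toy version the aggressive policy only wins when apples are scarce, which is qualitatively the same pattern the article describes: the incentive structure, not any malice, is what pushes the behaviour.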
EDIT:
Another topic already touched on in this thread was what to do with the people who are no longer strictly needed for jobs. We see this already in some limited geographies and sectors, but it isn't necessarily just low-skill repetitive jobs that are at risk. Given the right kinds and amount of training data, computers are probably already better than humans at diagnosing skin cancer on sight (http://www.cnn.com/2017/01/26/health/ai ... cer-study/ - and that's just visible-spectrum images; computers could make use of hyperspectral imaging as well). A rough sketch of how that kind of system is put together follows this paragraph. And other skills that take humans years of training to acquire can still be captured by a purpose-built specific intelligence (https://www.theguardian.com/technology/ ... ollar-jobs). It is quite easy to see a point at which machines are capable of growing all the food to support everyone, keeping the streets and sewers clean, and providing everything else that humans strictly "need" to survive. Perhaps artists will remain? Actually, computers have already shown a remarkable ability to create music, novels, and pictures that, while somewhat formulaic, are still generally quite pleasing. So what do we do when we reach a post-job state, and humanity starts to look like a burdensome thing to support? I don't know the answers to that, but I do worry that we are having to deal with aspects of it already, and the problem is only likely to grow over time.
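On the diagnosis example: the basic recipe behind that kind of system is surprisingly accessible now. Below is a minimal sketch of my own (not the study's actual code or model) of fine-tuning a pretrained image network, assuming a hypothetical folder of lesion photos sorted into "benign" and "malignant" subdirectories.

```python
# Minimal sketch of training a two-class image-diagnosis classifier.
# Illustrative only; not the code or model from the study linked above.
# Assumes a hypothetical folder "lesions/" with subfolders "benign/" and "malignant/".
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("lesions/", transform=transform)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Start from a network pretrained on everyday photos and retrain only its
# final layer to output the two diagnosis classes (transfer learning).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The pretrained network does most of the heavy lifting and is freely downloadable; the scarce ingredient is a large, well-labelled medical image set, which is the hard part the researchers had to assemble.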
I kind of think that one solution (we might call it the "luddite" solution) would be to impose extreme limitations on the kinds of jobs and intelligence that computers are allowed to have, and to restrict the development of higher levels of AI. It would require a realization by mankind that developing a powerful general AI would be "a grave and dangerous mistake", akin to how using nuclear weapons is viewed under MAD today. I don't see that solution having much lasting power though, and the danger is that you may only ever get to test one "AI bomb". Another obvious problem with that approach is that, while the expertise needed to develop a general AI doesn't exist today and will probably still be rare in the future, the computing power necessary to make one a reality probably isn't that expensive, and self-centered humans will always want the upper hand somewhere.