Strictly speaking, according to Bostrom, the kind of machine-based intelligence that is heading humanity’s way wouldn’t wish its makers harm. But it would be so intent on its own goals that it could end up crushing mankind without a thought, like a human stepping carelessly on an ant. This is where the nightmare scenarios come into play. Once they soar past the intellects of their creators, machines are likely to reach their own conclusions about how best to achieve the goals programmed into them.
…Even the pessimists, however, say they are prepared to consider a happier outcome. Bostrom, for his part, says that there’s a chance things could turn out very well indeed. Aided by their brilliant machines, humans could quickly colonize space, cure ageing and upload their minds into computers – it’s just a case of getting past the dangerous moment of the intelligence explosion.
“If we can make it to the next century and achieve technological maturity, we could have another billion years,” he says.
Richard Waters, article on Artificial Intelligence, FT Magazine, November 1st, 2014
This one goes under “ideas that don’t add up”. It’s here as a kind of photographic negative, a reminder of how stupid, misguided and out-of-touch I am for failing to get with the programme.
Bostrom’s book Superintelligence, according to the article, warns of the dangers of AI. There’s a singularity coming. Digital “intelligence” is racing so far ahead of our capacity to keep pace that the machines will soon enough attain a state that looks a lot like consciousness. At which point, the theory goes, they can muck about with their biological boot loaders (aka us) at will. That singularity is due in the next two or three decades.
OK. Technically plausible. Assuming the electricity stays on. Assuming the supply lines of semiconductor raw materials and rare earth metals from distant corners of the globe remain open. Assuming industrial civilization continues on its merry way. Better safe than sorry so, yes, remind those AI boys and girls to kindly code in the necessary precautions while there’s time.
But where I just can’t get with the programme, where Bostrom and his ilk lose me — frankly, where I lose it altogether! — is this blithe stuff about colonizing space that we keep bumping into. I mean, children, please! We do actually have a planet to live on and we can be 99.99999% sure it is the best available one for us in the entire reachable universe. It could well be the perfect home, even in the wreckage after the war against the biosphere ends…if we can just manage our own biological programming a bit better.
As for “curing ageing”. Haven’t proponents of that idea noticed: getting old and dying is, like, a really, really important part of the natural cycle? There’ll be no new life if us older aggregations of organic matter don’t decay, fall prey to predatory beasts and microbes, stop competing for resources with the young and generally just get the hell out of Dodge!
Which, when you think about it, prompts the perfect problem for digital superminds to apply themselves to once they pop out on the other side of that singularity: retirement communities on Pluto. Free trips there for die-hard crumblies. One-way tickets, naturally…