Friday, December 21, 2012

Apocalypse now or later?

If you're reading this, the world hasn't ended.

But that doesn't mean we shouldn't consider the possibility.

Recently, a philosopher, a scientist and a software engineer have come together to propose a new centre at Cambridge, the Centre for the Study of Existential Risk (CSER), to address cases – from developments in bio- and nanotechnology to extreme climate change and even artificial intelligence – in which technology might pose “extinction-level” risks to our species.

Here is more, taken from the University of Cambridge website:

“At some point, this century or next, we may well be facing one of the major shifts in human history – perhaps even cosmic history – when intelligence escapes the constraints of biology,” says Huw Price, the Bertrand Russell Professor of Philosophy and one of CSER’s three founders, speaking about the possible impact of mathematician I. J. Good’s ultra-intelligent machine, or artificial general intelligence (AGI) as we call it today.
“Nature didn’t anticipate us, and we in our turn shouldn’t take AGI for granted. We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous. I don’t mean that we can predict this with certainty, no one is presently in a position to do that, but that’s the point! With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies.”
Price’s interest in AGI risk stems from a chance meeting with Jaan Tallinn, a former software engineer who was one of the founders of Skype, which – like Google and Facebook – has become a digital cornerstone. In recent years Tallinn has become an evangelist for the serious discussion of ethical and safety aspects of AI and AGI, and Price was intrigued by his view:
“He (Tallinn) said that in his pessimistic moments he felt he was more likely to die from an AI accident than from cancer or heart disease. I was intrigued that someone with his feet so firmly on the ground in the industry should see it as such a serious issue, and impressed by his commitment to do something about it.”
We Homo sapiens have, for Tallinn, become optimised – in the sense that we now control the future, having grabbed the reins from 4 billion years of natural evolution. Our technological progress has by and large replaced evolution as the dominant, future-shaping force.
We move faster, live longer, and can destroy at a ferocious rate. And we use our technology to do it. AI geared to specific tasks continues its rapid development – from financial trading to face recognition – and the power of computing chips doubles every two years in accordance with Moore’s law, as set out by Intel co-founder Gordon Moore in the same year that Good predicted the ultra-intelligent machine.
