The theoretical physicist identified artificial intelligence (AI), nuclear war and genetically engineered viruses as just some of the man-made problems that pose an imminent threat to humanity. And the 74-year-old said that as we rapidly advance in these fields, there will be “new ways things can go wrong”.
We are at a point in history where we are “trapped” by our own advances, with humanity increasingly at risk from man-made threats but without technology sophisticated enough to escape from Earth in the event of a cataclysm.
He warned: “Although the chance of a disaster to planet Earth in a given year may be quite low, it adds up over time, and becomes a near certainty in the next thousand or ten thousand years.

“By that time we should have spread out into space, and to other stars, so a disaster on Earth would not mean the end of the human race.
“However, we will not establish self-sustaining colonies in space for at least the next hundred years, so we have to be very careful in this period.”

He added that humans do have a knack of “saving the day” just in time, and urged fellow scientists to continue trying to make advances in their respective fields.

Prof Hawking said: “We are not going to stop making progress, or reverse it, so we have to recognise the dangers and control them. I’m an optimist, and I believe we can.
“It’s important to ensure that these changes are heading in the right directions.

“In a democratic society, this means that everyone needs to have a basic understanding of science to make informed decisions about the future.

“So communicate plainly what you are trying to do in science, and who knows, you might even end up understanding it yourself.”
Hawking’s comments come just months after he warned the human race could be wiped out by Terminator-style killer robots after research into “autonomous weapons”.

He was among more than 1,000 leading scientists and businessmen to sign an open letter from the Future of Life Institute. The letter, presented at a conference in Argentina, suggested the ability to create autonomous weapons that think for themselves is “feasible within years”.

It said: “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable.

“Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group. A military AI arms race would not be beneficial for humanity.”