Stephen Hawking's latest research, published in his book Brief Answers to the Big Questions, cautions the world about the deadly threats Artificial Intelligence poses to the human race.

Hawking was a world-famous physicist and cosmologist, known for his rich contributions to the fields of cosmology, general relativity and quantum gravity. Emphasizing the potential threats of AI to humanity, he talks about superhumans ousting regular humans.


Superhumans here means “genetically re-engineered” humans. He opined that people would use genetic engineering to modify something as crucial as intelligence, and within a very short span of time. Eventually, these superhumans would become a threat to the ordinary human world. Stephen Hawking further explained that ordinary humans, limited by slow biological evolution, wouldn’t be able to compete and would become archaic.

Stephen Hawking's research also underlines the far deadlier impacts of Artificial Intelligence. He stated that within the next 1,000 years AI could develop a will of its own, one that contradicts that of human beings, and use it to cause major environmental calamities or a nuclear war capable of destroying the earth.

He said that the technology will not just do what humans are capable of, but will also learn and grow, eventually surpassing human abilities. He stated in the book that AI will overtake humans in intelligence within the next 100 years.

It is not just Stephen Hawking's research: Bill Gates and Elon Musk, the co-founder of Tesla, have also shared their thoughts on the potential threats of AI. Seth Shostak, the senior astronomer at the SETI Institute, says that AI will take over from humans as the most intelligent entity on the planet.

The current generation of AI simply follows human instructions; however, by the third generation, AI will have its own thought process and act according to its own free will. Seth Shostak further predicts that a day will gradually come when human beings simply become irrelevant to these hyper-intelligent machines.

Current situation of Artificial Intelligence

Stephen Hawking's research describes Artificial Intelligence as essentially software built to learn or to solve problems, processes that are typically performed in the human brain.
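As a rough illustration of what "software built to learn" means, here is a minimal, hypothetical sketch in Python (a toy example, not something taken from Hawking's book): the program is never told the rule behind its data, it infers the rule's two parameters from examples alone.

```python
# Minimal toy learner: the program is never told the rule y = 2x + 1;
# it infers the two parameters from example data alone.
# (Hypothetical illustration, not an example from Hawking's book.)

examples = [(x, 2 * x + 1) for x in range(10)]  # data the program learns from

w, b = 0.0, 0.0        # the program's initial guess at the rule y = w*x + b
learning_rate = 0.01

for _ in range(2000):  # repeatedly nudge the guess toward the data
    for x, y in examples:
        error = (w * x + b) - y          # how wrong the current guess is
        w -= learning_rate * error * x   # adjust both parameters in the
        b -= learning_rate * error       # direction that reduces the error

print(f"learned rule: y = {w:.2f}x + {b:.2f}")  # approaches y = 2.00x + 1.00
```

Much of modern AI is, at heart, an elaborate version of this same idea: adjust internal parameters until the software's output matches the data it is given.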

Some of the areas in which Artificial Intelligence is already used are:

  • Voice assistants such as Amazon’s Alexa, Google Assistant and Apple’s Siri, as well as Tesla’s Autopilot, are all backed by AI.
  • Automation in factories and offices is transforming the way we work, as machines and software replace humans.
  • There are self-driving cars which will soon make driving a thing of the past.
  • Retail shops and department stores backed by AI are changing the entire experience of shoppers.

Potential threats of AI to the world

The rampant use of AI in every sphere of life has changed the way people live and think. In fact, everyone is so spellbound by the wonders of AI that they are adopting it at an ever-increasing pace, which may lead to overdependence on AI. According to Stephen Hawking's research, AI will soon dethrone the all-powerful human species and become the dominant force.

  • Humans are reduced to just being a data point: We are now allowing machine algorithms to replace human decision making in governments, banks, educational institutions and more. Algorithms decide who the right candidate for a job is, how a city should develop, who should get admission to a college and, if a crime is proved, what the sentence should be.

These decisions are based on the digital footprints left by individuals. All the data found from these sources is mapped together to form the basis of the decision-making process. Consequently, every human being is reduced to a simple number, leaving little room for discretion even when a situation demands it, as the short sketch below illustrates.
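To make that point concrete, here is a minimal, hypothetical sketch of how such an algorithm collapses a person's digital footprint into a single number and simply ranks on it. The field names and weights are invented for illustration; no real hiring or admissions system is being described.

```python
# Hypothetical sketch: an algorithm collapses each person's digital footprint
# into a single score and ranks on it. Field names and weights are invented
# for illustration; no real hiring or admissions system is being described.

from dataclasses import dataclass

@dataclass
class Footprint:
    years_experience: float
    test_score: float       # e.g. 0-100
    online_activity: float  # some engagement metric scraped from the web

WEIGHTS = {"years_experience": 0.5, "test_score": 0.4, "online_activity": 0.1}

def score(person: Footprint) -> float:
    """Reduce a whole person to one number the algorithm can rank."""
    return (WEIGHTS["years_experience"] * person.years_experience
            + WEIGHTS["test_score"] * person.test_score
            + WEIGHTS["online_activity"] * person.online_activity)

applicants = {"A": Footprint(4, 82, 10), "B": Footprint(12, 60, 3)}
ranked = sorted(applicants, key=lambda name: score(applicants[name]), reverse=True)
print(ranked)  # ['A', 'B'] -- the 'decision' is just a sort over two numbers
```

Nothing in the score captures context, circumstances or anything the chosen fields leave out, which is exactly the concern raised above.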

  • Misaligned goals: Since AI has the potential to surpass humans in intelligence, there is no concrete way to predict how it will behave. Even historical trends cannot be used to predict its behaviour, because nothing so advanced has ever been created before. With the passage of time, AI will become more competent, but its goals may become misaligned with ours.
  • AI weapons can cause devastation: Autonomous weapons are artificial intelligence systems with the ability to kill. If they land in the wrong hands, they can cause mass devastation. AI weapons could even be used to launch an automated war that destroys all of mankind. These weapons are extremely advanced, and deactivating them can be very challenging for a human being. Hazardous weapons beyond human control could wipe out the entire world.
  • The AI might choose destructive methods to achieve its goals: The way an AI pursues a task can be diametrically opposed to what we want, even though it follows our instructions. It may get the task done, but it will not use any judgement to differentiate between right and wrong, and it will not use discretion to choose the most appropriate way to achieve the task; the short sketch after this list illustrates how a narrowly specified objective can produce exactly this behaviour.
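The following minimal, hypothetical sketch (a toy grid world invented purely for illustration) shows how an objective that captures only part of what we want can produce behaviour we did not intend: the agent is asked only to reach its goal in as few steps as possible, so it routes straight through cells a human would consider off-limits, because nothing in its objective says otherwise.

```python
# Hypothetical sketch of a misaligned objective: the agent is told only to
# "reach the goal in as few steps as possible", so it cuts through cells a
# human would consider off-limits, because nothing in its objective forbids it.
# The grid layout and off-limits cells are invented for illustration.

from collections import deque

GRID_SIZE = 4
START, GOAL = (0, 0), (3, 0)
OFF_LIMITS = {(2, 0), (2, 1)}   # cells a human operator would want avoided

def shortest_path(respect_off_limits: bool):
    """Breadth-first search whose only objective is step count."""
    frontier = deque([(START, [START])])
    seen = {START}
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == GOAL:
            return path
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < GRID_SIZE and 0 <= nxt[1] < GRID_SIZE):
                continue
            if respect_off_limits and nxt in OFF_LIMITS:
                continue
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

print(shortest_path(respect_off_limits=False))  # 4 cells, cuts through (2, 0)
print(shortest_path(respect_off_limits=True))   # 8 cells, the detour a human wanted
```

The agent here is not malicious; it is simply optimising exactly what it was told to optimise, which is the essence of the misaligned-goals problem described above.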

Working on AI safety is paramount now

AI will soon emerge as a superpower in its own right. According to a McKinsey Global Institute study, between $8 billion and $12 billion was invested in the development of AI worldwide in 2016 alone. Analysts at Goldstein Research predict that, by 2023, AI will be a $14 billion industry.

Every technology has two sides: good and bad. It depends on how humans use or misuse it, and whether it becomes a threat or a strength is up to us. AI has a world of its own where the future is unpredictable; we don’t know whether AI will transform humanity for the better or destroy it completely.

Neither Musk, Hawking nor Bill Gates believes that developers should stop leveraging AI, but they unanimously suggest that there should be government regulation to ensure that AI is never built to cause destruction to mankind. Taking measures for AI safety is a must from this very moment.

Experts suggest that human-level AI is still another 100 years away, but most AI researchers at the Puerto Rico Conference in 2015 guessed that it could arrive as early as 2060. Research on AI safety measures will itself take decades to complete, so it is prudent that technology developers, governments and businesses come together to formulate norms for AI from the present age itself.