As we inch steadily closer to a world where robots and artificial intelligence (AI) are a common sight, one concern keeps cropping up: Could machines someday rise against humans and take over the world?
The question might appear fantastical at first, but some of the smartest people of our time have voiced serious concerns about it. From the late physicist Stephen Hawking to tech billionaires Bill Gates and Elon Musk, these visionaries have repeatedly addressed the ethics of AI and the implications it could have for our future.
The idea of machines going rogue is not a new one. For many years now, films and books have imagined a future in which machines have taken over the world. In Arthur C. Clarke's iconic sci-fi novel 2001: A Space Odyssey, the AI-powered computer HAL 9000 takes the fate of the humans aboard the spaceship into its own hands. In the famous Terminator series of films, Arnold Schwarzenegger's character is sent back in time to prevent the rise of the machines and save humanity. The Matrix trilogy explores similar themes, with machines harvesting humans for energy while everyone is forced to live inside a simulation. So, are these depictions of devious robots just the imagination running wild, or are they a possible reality that we may have to deal with in the future?
To find the answer, we may have to step away from the field of science and enter the realm of science fiction, or 'sci-fi'. The American author Isaac Asimov, one of the most famous writers of science fiction, first penned the Three Laws of Robotics. In doing so, he brought the ethics surrounding artificial intelligence into popular consciousness: if artificial intelligence surpasses human intelligence someday, what stops it from overpowering humanity and acting out of self-interest?
Isaac Asimov's three laws of robotics are as follows:
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The three laws first appeared in Asimov's short story Runaround, written in 1942, long before the era of automation and robotics we are used to now. Asimov was a visionary who conjured up fantastic futuristic worlds where robots and humans co-existed and worked together. He created the three laws to ensure smooth cooperation between robots and humans in his sci-fi stories, and throughout his short stories and novels he made small modifications to them to define the relationship between humans and AI in each story. In his later works, once he began exploring complex worlds where robots had taken over the governments of entire planets and civilizations, Asimov created a 'zeroth' law to precede the other three. It said:
“A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
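Just for fun, the strict priority ordering of the laws (Zeroth before First, First before Second, and so on) can be sketched as a tiny program. This is only a toy illustration, not real robotics software; the action "flags" such as harms_humanity and destroys_self are made-up labels invented for this example.

```python
# A playful sketch: Asimov's laws as a strict priority check.
# The "robot" refuses any action that breaks a law, checking the laws
# in priority order: Zeroth, then First, then Second, then Third.

def allowed(action):
    """Return (True, reason) if the action passes every law in priority order."""
    laws = [
        ("Zeroth", lambda a: not a.get("harms_humanity", False)),
        ("First",  lambda a: not a.get("harms_human", False)),
        ("Second", lambda a: a.get("obeys_order", True)),
        ("Third",  lambda a: not a.get("destroys_self", False)),
    ]
    for name, check in laws:
        if not check(action):
            return False, f"violates the {name} Law"
    return True, "permitted"

print(allowed({"obeys_order": True}))
# -> (True, 'permitted')
print(allowed({"harms_human": True, "obeys_order": True}))
# -> (False, 'violates the First Law')
```

Because the list is checked top to bottom, a robot in this sketch would disobey an order (Second Law) rather than harm a human (First Law), which is exactly the ordering Asimov intended.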
Artificial Intelligence and the Three Laws Today
In its current form, artificial intelligence is far from attaining the complex levels of processing power that Asimov describes. Today, the tangible forms of AI we see in everyday life are limited to automated vacuum cleaners like the Roomba and digital assistants like Siri and Alexa. All engineers can do to make sure a Roomba doesn't harm humans is attach bumpers and sensors; if a Roomba accidentally snags someone's foot, it has no mechanism for understanding whether a human is in pain or discomfort.
But this doesn't mean it will always remain so. Technology is advancing rapidly in every field. Look at cars, for instance: in just the last 100 years, we have gone from the crudeness of the first mass-produced car to the highly sophisticated computer systems of electric self-driving cars. At this pace, the day might not be far off when robots need Asimov's three laws pre-programmed into them.
Asimov's three laws have successfully made their way from science fiction into actual science. Engineers, scientists and philosophers today are constantly grappling with the all-important question of ethics in artificial intelligence. New fields of interest like robot rights, liability for self-driving cars and the weaponization of artificial intelligence are taking shape across the world. For now, the application of Asimov's three laws remains a field of speculation, where we can only intelligently predict what challenges we might face and how we could deal with them. What is really exciting is that you could see it become a reality in your lifetime!
Enjoyed reading this article? For more on robotics and the technology of the future, check out these interesting reads on the Learning Tree Blog:
When Sci-fi becomes science – from fiction to reality
Xenobots – the world’s first living robots
A museum by robots, of robots, for people!
1. How many laws of robotics do we have according to Isaac Asimov?
Isaac Asimov came up with three laws of robotics. They were first introduced in the short story Runaround, published in 1942. He later introduced a fourth law, called the Zeroth Law of Robotics.
2. Which law of Asimov deals with the well-being of robots?
The third law deals with the well-being of robots, as long as it doesn't conflict with the well-being of humans. It states that "A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law."
3. What degree did Isaac Asimov have?
Isaac Asimov completed his Master of Arts degree in Chemistry in 1941. He then secured a PhD in Chemistry in 1948. He went on to teach biochemistry at the Boston University School of Medicine.
4. What are Asimov's three laws of robotics?
Asimov's three laws of robotics are a set of rules for robots to follow so that they may co-exist peacefully with humans. They are as follows: First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
5. What is Isaac Asimov's Zeroth Law of Robotics?
Asimov later added a fourth law called the Zeroth Law. It states that "a robot may not harm humanity, or, by inaction, allow humanity to come to harm."
Comments
Abhinav Kumar
September 16, 2020
Very informative! Thank you.
Parva Shah
September 16, 2020
If robots follow the three rules, they would be very helpful to us in many kinds of work, like being a chef, helping in factories, or helping with household chores.
Amogh Kottada
September 14, 2020
Never imagined.
ABHISAR SINGH ⚡
September 13, 2020
In some years there will be self-driving cars and I am very excited to see them!
Dhairya Chandak
September 12, 2020
Great information. But what if someone hacked a robot's system, removed the three laws, and instead installed a program commanding the robot to destroy humans?
Tiana chopra
September 11, 2020
That is informative!