A Few Words from Sara Shepherd, Screaming Circuits Contributing Author…
“Robot” was first publicly used by Karel Čapek in the January 25, 1921 debut of his play “R.U.R.”, or “Rossum’s Universal Robots.” January is not only the month the play premiered; Čapek himself was born on January 9, 1890. In the Screaming Circuits world, January is Rossum’s Universal Robots month!
With this January marking the 100th anniversary of the word robot, let’s take some time to examine the origins of the word, as well as the significance it still carries today. When is a robot a robot, instead of just another machine? Are robots a force for good, or are we creating our own doom, as so many sci-fi movies would have us believe? A century after the word’s creation, the hopes and fears generated by robots remain as important as ever.
The concept of robotics has always been closely linked to the hope and paranoia induced by the labor revolutions brought about by industrialization in the 20th century. 100 years ago, the term robot was first introduced to the English language by way of a Czech stage play. On January 25, 1921, the play R.U.R., or Rossum’s Universal Robots, premiered at Prague’s National Theatre. Written by the brilliant Czech novelist, journalist, and playwright Karel Čapek (1890-1938), the play follows the invention and mass production of laborers called roboti, translated to the novel term “robots” in the English version.
Čapek created the word roboti with help from his brother. The pair were inspired by the old Slavic word robota, meaning “servitude,” “forced labor” or “drudgery.” That word had its roots in serfdom, the old European system in which a tenant’s rent was paid in forced labor.
The play’s robots are not made of metal, but are fleshy and “soulless” humanoids created to perform tasks humans no longer wished to do. Rossum’s robots originally lack emotion and are ready to take on all drudgery without complaint. However, occasionally a robot goes haywire and has to be destroyed by factory employees. Eventually, the world becomes dominated by robots, who unionize and issue a robot manifesto.
In a now familiar trope, the roboti grow tired of their forced servitude and rebel, wiping out humanity, including anyone who knows how to make more robots. Faced with their own ironic extinction, the robots are saved in the last act via deus ex machina (a critic’s term for when unforeseen circumstances conveniently solve a story’s conflict) when a male and female robot inexplicably develop emotions and, apparently, the ability to reproduce.
Despite the abrupt ending, the play was a huge success. It premiered on Broadway the next spring and was translated into thirty languages by 1923.
Čapek’s story established the allegory of the not-to-be-trusted machine, following along with many of the salient worries and topics of its day. With the bloodshed of the Russian Revolution in the east and the technological advances of the Roaring Twenties in the west, humanity was caught between the now familiar mix of desire and fear: wanting technology and utopic labor philosophies to make our lives easier, but being unable to bring ourselves to completely trust them, having seen the bloodshed that occurs when the labor class rebels against their masters.
This worry was certainly not new to R.U.R. Handloom weavers burned mills and protested the invention of the sewing machine. The Luddites destroyed industrial equipment during the early 19th century for fear the machines would replace them. Two years after R.U.R. premiered, Elmer Rice explored this theme in his play The Adding Machine, in which an office clerk murders his employer after being replaced by the titular machine.
Many authors, philosophers, and engineers have endeavored in the decades since to address this fear. Most famous, perhaps, are the “Three Laws of Robotics” created by science fiction author Isaac Asimov:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The early story of the word robot is just as relevant today. We continue to see similar efforts to establish behavioral rules in discussions around the ethics of robotic automation and artificial intelligence (AI). AI researcher and writer Eliezer Yudkowsky illustrated the need to establish “human-friendly” rules for AI succinctly: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”
Fear of violent uprisings aside, robotics presents a very real worry to today’s worker: the fear of being replaced. Today we use “Luddite” as a term to deride those who refuse to use technology, but the fear clearly persists outside of fringe circles. We need only look to the popularity of tools such as “When Will a Robot Take Your Job.”
It’s worth pointing out that the reality may not be so bleak. A study by McKinsey estimates that about 400,000 jobs were lost to automation in U.S. factories from 1990 to 2007. However, the authors note that most of the workers displaced by this automation found new work. Few people would claim to enjoy the monotonous, repetitive tasks of assembly line work or argue against machines replacing sweatshops. Few people likely mourn the loss of elevator operators, though many may feel a twinge of guilt at the recent layoff of thousands of toll collectors, replaced with electronic tolling systems in the wake of the coronavirus pandemic.
Malcolm Frank, author of What To Do When Machines Do Everything, speaks to this point: “If you look at the specific jobs affected, you can get depressed, but if you look in broader context, there’s room for optimism.” For example, in his book Frank points to the 1800s, when nearly 80% of U.S. labor was focused on agriculture. Today that number is about 2%, but we do not see a huge increase in unemployment as a result. “What the machine takes away, it also gives back with entirely new industries, entirely new types of jobs,” he says.
It’s worth asking at this point: what is, and is not, a robot?
At this year’s CES, Pollen Robotics debuted the latest iteration of “Reachy,” a humanoid robot compatible with a Virtual Reality (VR) teleoperation app. By wearing a VR headset and using hand controllers, anyone can operate Reachy remotely, allowing the robot to complete tasks from anywhere in the world. Reachy is open source and just one example of how robots can go where humans can’t.
However fantastic Reachy is, when teleoperated it isn’t actually acting as a robot. Though the meaning is sometimes slippery, robotic systems are generally defined as interconnected, interactive, cognitive and physical tools: they perceive the environment using sensors, reason about events and make plans using algorithms implemented in computer programs, and then perform physical actions. So a drone you fly with a remote is not a robot, but a drone that autonomously navigates a maze is.
This feedback loop of autonomous sensing, perception, cognition, and physical action is what distinguishes the robot from other machines. Even the humble Roomba fits the bill, using its system of sensors to diligently vacuum up cat hair.
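That sense-reason-act loop can be sketched in a few lines of code. The example below is a hypothetical illustration (every name in it is invented for this sketch, not taken from any real robot’s API): a toy vacuum “robot” on a one-dimensional strip of floor tiles that senses the tile under it, plans an action, and acts, repeating until the strip is clean.

```python
# A minimal, hypothetical sense-plan-act loop: a toy vacuum "robot"
# sweeping a one-dimensional strip of floor tiles left to right.

def sense(world, position):
    """Perceive: read the 'dirt sensor' at the robot's current tile."""
    return world[position]  # True means the tile is dirty

def plan(dirty, position, width):
    """Reason: choose the next action based on the current percept."""
    if dirty:
        return "clean"
    # Otherwise keep sweeping right, and stop at the end of the strip.
    return "move" if position < width - 1 else "stop"

def act(world, position, action):
    """Act: change the world or the robot's own state."""
    if action == "clean":
        world[position] = False  # vacuum up the dirt
    elif action == "move":
        position += 1
    return position

def run(world):
    """Close the loop: sense -> plan -> act until the plan says stop."""
    position = 0
    while True:
        action = plan(sense(world, position), position, len(world))
        if action == "stop":
            return world
        position = act(world, position, action)

floor = [True, False, True, True]  # True = dirty tile
print(run(floor))                  # every tile ends up clean
```

A remote-controlled machine replaces the plan step with a human operator; only when all three steps run on board, in a closed loop, does the definition above call the machine a robot.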
So, from a 1921 play on the dangers of capitalism to a robotic Labrador puppy created for Alzheimer’s patients, the robot has continued to play a significant role in a global conversation about labor. Though many aspects of the play are dated by today’s standards (we won’t even touch on the forced engagement of the female lead), the characters’ disregard for the consequences of new technology is arguably the most poignant warning in R.U.R. This point is made explicitly when one of the play’s idealistic engineers states: “Dividends will be the ruin of humanity.”
What happens when technological advances are used solely for capital, consequences be damned? Nearly all stories about golems, automatons, and robots share the same warning as R.U.R.: creation without careful consideration, or creation rooted in greed, will ultimately lead to destruction.
Today, we seem to be at a tipping point between free-for-all technological advances and oversight. More and more countries are pressuring tech giants to be held accountable for the results of their creations, from Facebook, to Tesla, to Amazon. While Google may have removed the “don’t be evil” clause from its code of conduct, we take comfort in the growing movement of robot ethics and efforts like The Thoughtful Technology Project.
As we look to the next 100 years of robots, it is our hope that AI creators remember these warnings and do not forget their own humanity in the search for artificial life.