The Teenage Robot

This week I was a featured speaker at the University Transportation Research Center (UTRC) conference at SUNY Polytechnic Institute. I am a strong proponent of autonomous mobility, and with my 18-year-old on the cusp of getting a license, it is not happening fast enough. Teenagers have their own psyche, and the word “no” becomes a focal point in their development.

Researchers Gordon Briggs and Matthias Scheutz of Tufts University are now working on a mechanism to teach robots to say “no” to their human masters. The system allows a robot to understand not only the language of a command but also its larger context, including whether the robot is actually capable of executing it. One may ask: is this a good thing? To answer that, cue up the video:

This ability, while limiting our own control of the situation, is a necessary step toward loosening the reins as we move into an autonomous future. Self-driving cars will be expected to make ethical judgment calls that cause the least damage when people unlawfully block their path. For example, does the car turn into the bicycle rider to avoid the mother and child jaywalking?
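
To make “least damage” concrete, here is a toy sketch of how such a judgment call can be framed as minimizing an estimated harm score. The maneuvers and scores below are invented purely for illustration; no real self-driving system works from a hand-written table like this.

```python
# Toy illustration: choose the maneuver with the lowest estimated harm.
# The options and their harm scores are invented for this example.

def least_harm(options: dict[str, float]) -> str:
    """Return the maneuver whose estimated harm score is lowest."""
    return min(options, key=options.get)

options = {
    "brake hard, stay in lane": 0.7,   # may still strike the jaywalkers
    "swerve toward the cyclist": 0.5,  # endangers one person instead of two
    "swerve off the road": 0.3,        # risks mainly the vehicle's occupants
}
print(least_harm(options))  # -> "swerve off the road"
```

The hard part, of course, is not the minimization but deciding who assigns those scores and on what grounds, which is exactly the ethical debate below.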

According to Google’s statement about their own car, “liability is a major ethical issue surrounding autonomous vehicles. Complex systems inherently have errors and bugs and Google’s self-driving car is not immune to software failure. An ethical issue that will arise surrounding liability is assigning fault when an autonomous vehicle crashes. The only instance of the Google car crashing was attributed to human error in another car hitting the Google car.”

However, as autonomous vehicles become more prevalent, a system of responsibility must be established. If the software misinterprets a worn-down sign, does the blame fall on the department of transportation for poorly maintained signage or on the company that produced the self-driving software? It is unclear where liability will ultimately rest in the realm of self-driving cars; what is known is that the United States is quick to place blame on car manufacturers. In 1992 Ford faced over 1,000 product liability suits in the United States and only one in Europe. The precedent set over the next few years will have a significant impact on how willing car companies are to pursue autonomous vehicle technology.

At an overall level, self-driving cars seem to create an environment where society is better off as a whole. The creators of the Google self-driving car have the goal of saving millions of lives by eliminating automobile-related accidents in the United States and, eventually, the world. The intent and the final end product of fewer automobile-related deaths would be accepted in both a deontological and a utilitarian framework, because the intent is to save millions of lives and the end result is the elimination of car accidents.

These philosophical frameworks could diverge, however, at a lower level of examination. Consider the difference between a computer-operated car and a human-operated car. If a crash is about to occur, a human will almost always have the virtuous intention of avoiding it, even if the crash is not avoided. Utilitarians would still likely favor the autonomous car at this level, because the self-driving car will likely outperform the driver in avoiding the crash altogether. A deontologist, by contrast, may struggle with the idea of a computer having a “good will” when acting to avoid the crash: when a car must choose between killing a pedestrian or the driver, is the act made with good intention, or is it simply a process executed and arbitrarily carried out? A deontologist might still favor autonomous cars, however, because choosing to use a safer self-driving car in the first place could override the decisions made by the car’s technology.

Regardless of which ethical philosophy is used when deciding whether society is better off as a whole, the proliferation of autonomous vehicles will depend on convincing the public that self-driving cars are significantly safer than manually operated cars. People tend to want control over avoiding accidents or bodily harm, and it is unclear how willing drivers will be to give up that control in favor of safety and convenience. Many drivers may accept a slightly increased chance of an accident in exchange for maintaining their own ability to avoid one.

Now the researchers at Tufts have set out a series of conditions, known as felicity conditions, that a robot checks before accepting a proposed action and deciding the best path forward; this could be a first step toward solving this ethical dilemma. In their paper the researchers write (a sketch of how such a check might look follows the list):

  1. Knowledge: Do I know how to do X?
  2. Capacity: Am I physically able to do X now? Am I normally physically able to do X?
  3. Goal priority and timing: Am I able to do X right now?
  4. Social role and obligation: Am I obligated based on my social role to do X?
  5. Normative permissibility: Does it violate any normative principle to do X?
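
To make the conditions concrete, here is a minimal sketch, in Python, of a command screen built on these five checks. Everything here, the FelicityChecker class, the Command type, and the sample actions and roles, is a hypothetical illustration under assumed interfaces, not the Tufts implementation.

```python
# A minimal sketch of screening a spoken command against the five
# felicity conditions. All names and interfaces here are hypothetical.

from dataclasses import dataclass

@dataclass
class Command:
    action: str        # requested action, e.g. "walk forward"
    speaker_role: str  # who is asking, e.g. "supervisor"

class FelicityChecker:
    def __init__(self, known_actions, able_now, authorized_roles, norms):
        self.known_actions = known_actions        # 1. Knowledge
        self.able_now = able_now                  # 2. Capacity
        self.authorized_roles = authorized_roles  # 4. Social role and obligation
        self.norms = norms                        # 5. Normative permissibility

    def evaluate(self, cmd: Command, busy: bool = False) -> str:
        if cmd.action not in self.known_actions:
            return f"No: I do not know how to {cmd.action}."
        if not self.able_now.get(cmd.action, False):
            return f"No: I am not physically able to {cmd.action}."
        if busy:  # 3. Goal priority and timing
            return f"No: I cannot {cmd.action} right now."
        if cmd.speaker_role not in self.authorized_roles:
            return "No: I am not obligated to do that for you."
        for violates in self.norms:
            if violates(cmd.action):
                return f"No: {cmd.action} would violate a principle I follow."
        return f"OK: doing {cmd.action}."

# Demo: the robot refuses to walk forward, as if a ledge were ahead.
checker = FelicityChecker(
    known_actions={"walk forward", "sit down"},
    able_now={"walk forward": True, "sit down": True},
    authorized_roles={"supervisor"},
    norms=[lambda action: action == "walk forward"],  # pretend ledge detector
)
print(checker.evaluate(Command("walk forward", "supervisor")))
# -> "No: walk forward would violate a principle I follow."
```

Note how the refusal comes with a reason tied to the failed condition, which is what makes a “no” from a robot something a human can negotiate with rather than a dead end.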

Yes, the ethical debate surrounding robots in mainstream society is still a heated one. It’s an area Jerry Kaplan talks about quite a bit, questioning how our laws will adapt to punish wrongdoing robots. He says humans are “going to need new kinds of laws that deal with the consequences of well-intentioned autonomous actions that robots take.”

It’s basically an unavoidable consequence of life to hand over the keys to your teenager (or, eventually, your robot), as we really have no control over the future…
