Cylons Shouldn’t Sweat


In 1950, Alan Turing sought to answer the question “can machines think?” He designed a test that involved a computer having a typed conversation with a human. If a third party couldn’t consistently tell the computer from the human, the computer “won,” and passed the thinking test.

Now, Turing’s test is obsolete. Robots command vast, practically immeasurable knowledge or “intelligence”: Watson, for example, mopped the floor with human Jeopardy champs while conducting itself appropriately and consistently with its human competitors.

The more relevant question now is, can machines feel?

In Battlestar Galactica, Cylons, particularly Number Six and the Sharon models (Boomer and Athena), fall in love with humans (and humans fall in love with them). Cylons feel fear, jealousy, anxiety, grief, and devotion, as well as love. Some humans, such as Starbuck, have trouble wrapping their brains around the fact that the Cylons are no different from them.

The concept of robots being indistinguishable from humans pops up in Bradbury, Asimov, Philip K. Dick, and many others, and it’ll keep recurring because robots will become increasingly integrated into our lives.

A host of questions surround robots’ ability to feel emotions: are robots actually sentient? Does an appropriate expression of an emotion equate to having the feeling itself? And, the big philosophical can of worms, is it possible for a robot to have a soul?

In order for artificial intelligence to truly feel anything, it first has to be self-aware, a process robots have already begun. Robots whose “brains” contain clusters of artificial neurons that support image recognition and cognition have successfully identified themselves in a mirror. They’ve also been able to distinguish between themselves and other, similar robots, and to recognize that they are the ones moving when they perform certain actions in front of a mirror.

As with human babies, such self-awareness is a crucial developmental step toward intelligence, behavior, and feelings. Practically, self-recognition paves the way for the cognitive and interactive functioning necessary for robots to become teachers, and body recognition allows robots to adjust to and interact more effectively with their physical surroundings.

At Cornell University’s Computational Synthesis Laboratory, scientists built a robot that looks like a starfish. They programmed it with an inventory of its own body parts, but not with an understanding of how its body worked. When directed to move, the robot used trial and error to figure out how its parts worked together and to discover the most effective (though strange-looking) way to move. The researchers then removed one of its legs, and through the same trial-and-error process the robot eventually re-learned how to move.
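
For the curious, the trial-and-error idea can be sketched in a few lines of Python. This is a toy illustration, not the Cornell team’s actual algorithm (which builds and refines an internal self-model); the four-legged body and the simulate_gait stand-in physics are my own inventions.

```python
import random

# Toy stand-in for the real physics: distance traveled depends on how well
# the per-leg timings happen to coordinate. Not the Cornell simulator.
def simulate_gait(phases, working_legs):
    spread = sum(
        abs(phases[i] - phases[j])
        for i in working_legs
        for j in working_legs
        if i < j
    )
    return spread  # pretend a wider phase spread means the body travels farther

def learn_gait(working_legs, trials=2000):
    best_phases, best_dist = None, float("-inf")
    for _ in range(trials):
        # Trial: guess a random set of leg timings...
        candidate = [random.uniform(0.0, 1.0) for _ in range(4)]
        dist = simulate_gait(candidate, working_legs)
        # ...and error: keep the guess only if it moved the body farther.
        if dist > best_dist:
            best_phases, best_dist = candidate, dist
    return best_phases, best_dist

# Learn to move with all four legs, then "remove" leg 3 and re-learn.
gait, dist = learn_gait(working_legs=[0, 1, 2, 3])
print("4-legged gait:", [round(p, 2) for p in gait], "distance:", round(dist, 2))

gait, dist = learn_gait(working_legs=[0, 1, 2])
print("3-legged gait:", [round(p, 2) for p in gait], "distance:", round(dist, 2))
```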

Numerous positive and negative reinforcement experiments are currently being conducted to test robots’ ability to learn and adapt their behavior. In some of these experiments, robots have proven able to learn and adapt faster than animals or humans do.
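
These experiments are usually framed as reinforcement learning: the robot tries an action, receives a reward or a punishment, and nudges its preferences accordingly. Here’s a deliberately tiny Python sketch of that loop; the two actions, the reward function, and the learning rate are made up for illustration, not drawn from any particular lab’s setup.

```python
import random

# The "robot" has two possible actions and learns from reward (+1) and
# punishment (-1) which one to prefer.
actions = ["wave", "bump_into_wall"]
value = {a: 0.0 for a in actions}   # the robot's learned estimate of each action
alpha = 0.1                          # learning rate
epsilon = 0.2                        # how often it still explores at random

def reward(action):
    # Stand-in for the experimenter's feedback: praise waving, punish bumping.
    return 1.0 if action == "wave" else -1.0

for step in range(200):
    if random.random() < epsilon:
        action = random.choice(actions)        # explore
    else:
        action = max(actions, key=value.get)   # exploit what it has learned
    value[action] += alpha * (reward(action) - value[action])

print(value)  # after training, "wave" scores near +1, "bump_into_wall" near -1
```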

Eventually, these tests will attempt to link robots to other beings, human or robot. Once a robot has learned to adapt its own behavior, it can theoretically begin to identify the origins of someone else’s behavior, make predictions about it, and possibly influence how that someone, or something, adapts in turn. Essentially, a robot could learn how to manipulate.

Could a robot put on a show, as a human does? Could we tell the difference between a robot actually feeling and it simply acting as if it’s feeling?

Hanson Robotics, whose mission includes “awaken[ing] intelligent robotic beings” and “grant[ing] them sparks of true consciousness and creativity,” has teamed up with UC San Diego’s California Institute for Telecommunications and Information Technology (Calit2) to build a robot designed to interact naturally with humans.

The Einstein robot, which does indeed look like Albert Einstein, wild white hair and wide smile included, perceives the emotions of whoever it interacts with. With 31 motors working together, Einstein can smile or display other emotions, such as worry, confusion, and interest, by furrowing its brows, widening or squinting its eyes, and changing the shape of its mouth. Mostly, Einstein mimics what it perceives in the humans it interacts with: armed with facial recognition software (the same kind currently being used to help children with autism recognize and return appropriate emotional and social expressions), it picks up on cues that suggest age, gender, and emotional state, and then mirrors back nods, smiles, and other expressions of emotion.
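
Neither Hanson’s nor UCSD’s code is published in this form, but the perceive-then-mirror loop might look something like this toy Python sketch. The PerceivedFace structure, the EXPRESSION_MAP, and the three named motor groups are hypothetical stand-ins for whatever the real 31-motor head actually uses.

```python
from dataclasses import dataclass

# Hypothetical summary of what the face-tracking software reports per frame.
@dataclass
class PerceivedFace:
    age: int
    gender: str
    emotion: str   # e.g. "happy", "worried", "confused"

# Hypothetical mapping from a perceived emotion to positions for a few of
# the robot's facial motor groups (values in a -1.0 .. +1.0 range).
EXPRESSION_MAP = {
    "happy":    {"mouth_corners": 0.8,  "brow_inner": 0.0, "eyelids": 0.2},
    "worried":  {"mouth_corners": -0.3, "brow_inner": 0.7, "eyelids": -0.1},
    "confused": {"mouth_corners": -0.1, "brow_inner": 0.4, "eyelids": 0.5},
}

def mimic(face: PerceivedFace) -> dict:
    """Return motor positions that mirror the expression the robot just saw."""
    neutral = {"mouth_corners": 0.0, "brow_inner": 0.0, "eyelids": 0.0}
    return EXPRESSION_MAP.get(face.emotion, neutral)

# One cycle of the perceive-then-mirror loop.
print(mimic(PerceivedFace(age=30, gender="female", emotion="worried")))
```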

The Einstein robot mimics feeling rather than generating it, but it will become more and more difficult to distinguish between the two. Hanson Robotics, along with countless other scientists, is attempting to build a complete brain for a robot, one that can not only display emotions but actually feel them. “It’s very important that we develop empathic machines, machines that have compassion, machines that understand what you’re feeling,” argues David Hanson. It seems unavoidable that robots will become as intelligent as humans, and imbuing them with empathy and emotion would increase the chances that they use their capabilities for good. Robots that can express emotions are, in a sense, practicing feelings they may soon experience.

“In a way, we’re planting the seeds for the survival of humanity,” Hanson says. Who knows, maybe he’s right. But I suspect that Hanson has watched and read more science fiction than I have, in which case he knows how often robots’ self-awareness leads to hatred of humans and fuels impressive emotional capabilities indeed, usually in the form of anger and a desire for revenge. It seems unlikely that we’ll avoid that fate by refraining from building robots to serve us, so perhaps we humans could use a little extra compassion programming of our own while we’re at it.
