The question of morality has always been an interesting one as it pertains to technology. In fact, it is a common theme of science fiction, as writers consider what will happen as robots and computers develop more autonomy. Way back in 1942, Isaac Asimov famously formulated the Three Laws of Robotics to govern the behavior of autonomous robots. They are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
It seems that science fiction has entered the realm of reality. Notice the following quote from an article by Patrick Tucker, written for the defenseone.com website:
The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.
What is interesting about the story is the argument raging about the appropriateness of the goal. Some Artificial Intelligence researchers are applauding the grant. They note that since the military has increased its use of technology (drones, missile defense systems, autonomous vehicles), there is a real need to develop systems with the ability to make moral judgments. Others feel that such an effort is simply not feasible.
AI expert Noel Sharkey is among the detractors. Notice this quote from the same article:
“I do not think that they will end up with a moral or ethical robot,” Sharkey told Defense One. “For that we need to have moral agency. For that we need to understand others and know what it means to suffer. The robot may be installed with some rules of ethics but it won’t really care. It will follow a human designer’s idea of ethics.”
Consider the irony. Many of these researchers accept the doctrine of evolution as the explanation of both man’s origin and his capabilities. Even those who believe they can create “moral machines” recognize the magnitude and difficulty of the task. While man has “moral agency”, the best that a robot or computer can attain is a simulation of the trait.
Despite this, the evolutionist maintains that man’s “moral agency” developed without any intelligent direction, that it is simply a byproduct of evolutionary processes. This is one example among many that highlights the problems accompanying evolutionary theory. So many natural things are claimed to have “just happened”, and yet the directed knowledge and talent of men are capable only of simplistic simulations of those natural realities.
It is a simple but devastatingly strong argument: design necessitates a designer. As the Psalmist wrote, “I will praise You, for I am fearfully and wonderfully made; Marvelous are Your works, And that my soul knows very well” (Psalm 139:14).