The Debate Over Artificial Intelligence

The progress of computer technology over the last century has spawned a great deal of concern, even panic. Innovations seem to be outstripping our ability to gauge their potential consequences; there is a growing sense that, rather than being in control of them, we are merely being swept along for the ride. Genetic engineering has given many people a disturbing image of scientists playing God. Developments in artificial intelligence seem to suggest that the future may not even belong to us at all.

To put A.I. in perspective, we need to trace the evolution of scientific thought, and the technological advances that accompanied it, over the course of the twentieth century and into the new millennium. Both computer science and artificial intelligence are outgrowths of twentieth-century mathematics. The modern theory of computation was born in 1936, when Alan Turing described a simple theoretical machine capable of carrying out any calculation that could be reduced to mechanical steps; this laid the groundwork for calculators and, later, the first computers. Also popular at the time were scientific theories that compared the workings of the human body and mind to electromagnetic and mechanical processes. Basically, the vision of A.I. began not as an attempt to create machines with all the quirkiness of human nature, but rather to define human nature as little more than a complex network of biological (read: mechanical) processes. If a human being is only the result of chemical reactions and nerve transmissions, the reasoning went, then anything a human being is capable of could be accomplished by duplicating those processes with electronic systems.
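To make the idea concrete, here is a minimal sketch of the kind of machine Turing imagined: a handful of states and symbol-rewriting rules that, followed blindly, perform arithmetic. The rule table below is illustrative, my own toy design rather than any historical machine; it adds one to a binary number.

```python
# Minimal Turing-style machine: increments a binary number on a tape.
# The transition table is illustrative; any such table of
# (state, symbol) -> (new symbol, move, new state) rules defines a machine.

def run(tape, rules, state="scan", blank="_"):
    tape = list(tape)
    pos = 0
    while state != "halt":
        symbol = tape[pos] if 0 <= pos < len(tape) else blank
        new_symbol, move, state = rules[(state, symbol)]
        if 0 <= pos < len(tape):
            tape[pos] = new_symbol
        elif pos < 0:
            tape.insert(0, new_symbol)
            pos = 0
        else:
            tape.append(new_symbol)
        pos += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Rules: walk right to the end of the number, then add 1 with carry.
rules = {
    ("scan", "0"): ("0", "R", "scan"),
    ("scan", "1"): ("1", "R", "scan"),
    ("scan", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),  # 1 plus carry -> 0, keep carrying
    ("carry", "0"): ("1", "L", "done"),   # 0 plus carry -> 1, stop carrying
    ("carry", "_"): ("1", "L", "done"),   # overflow: write a new leading 1
    ("done", "0"): ("0", "L", "done"),
    ("done", "1"): ("1", "L", "done"),
    ("done", "_"): ("_", "R", "halt"),
}

print(run("1011", rules))  # prints 1100 (11 + 1 = 12 in binary)
```

The point is not the particular rules but the demonstration: pure symbol-shuffling, with no understanding anywhere in the system, suffices for calculation.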

Some science fiction writers foresaw these developments: not only the technology, but also where such reasoning could take us. Isaac Asimov’s robot stories, for example, depicted a future where the Three Laws of Robotics were needed to keep artificial intelligence from overrunning humanity. Frank Herbert’s Dune was built upon an event in its fictional history known as the Butlerian Jihad, an uprising against the “thinking machines.” After the success of the Jihad, “Thou shalt not make a machine in the likeness of a human mind” became the overriding law for future generations to follow.

One of the most disturbing implications of humanity’s technological progress, as noted by many scientists and engineers over the last several decades, is the idea that this technology might evolve independently of man. If a computer program can adapt to uncertainty, as a chess program does when responding to an opponent’s unpredictable moves, might it not find other ways to manipulate its environment? And if a computer can defeat a world chess champion, as IBM’s supercomputer Deep Blue did against Garry Kasparov in 1997, is this a sign that artificial intelligence is already outstripping its creators?
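What made Deep Blue’s victory possible was game-tree search: evaluating enormous numbers of positions and choosing the move with the best guaranteed outcome. The sketch below shows the bare minimax idea at the heart of such programs; Deep Blue’s actual system combined a far more elaborate alpha-beta search with custom hardware, and the `game` interface here (`moves`, `play`, `score`, `is_over`) is hypothetical.

```python
# A minimal minimax sketch: the kind of game-tree search behind chess
# programs. The `game` object is assumed to provide legal moves, a way
# to play one, a static score, and an end-of-game test.

def minimax(game, depth, maximizing=True):
    """Return the best achievable score looking `depth` moves ahead."""
    if depth == 0 or game.is_over():
        return game.score()  # static evaluation of the position
    results = (minimax(game.play(m), depth - 1, not maximizing)
               for m in game.moves())
    # Our turn: take the best branch; opponent's turn: assume the worst.
    return max(results) if maximizing else min(results)
```

Notably, nothing in this procedure understands chess. Its strength comes from raw speed and exhaustive lookahead, which is precisely why some see such victories as evidence of machine intelligence and others as evidence of its absence.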

From the beginning, however, there have been many noted thinkers who were skeptical of A.I., or whose discoveries undermined the principles on which it is based. Einstein’s general theory of relativity and Kurt Gödel’s incompleteness theorems, for example, paint a picture of the universe that is far less predictable and mechanical than the one A.I. proponents believe in. It can be argued that developments in computer technology only threaten those who believe that human beings are themselves nothing more than computers clothed in flesh, and thus easily made redundant and replaced.