25 April 2024

Computers ‘to match human brains’

By Staff Writer


Computer power will match the intelligence of human beings within the next 20 years because of the accelerating speed at which technology is advancing, according to a leading scientific “futurologist”.


There will be 32 times more technical progress during the next 50 years than there was in the entire 20th century, and one of the outcomes is that artificial intelligence could be on a par with human intellect by the 2020s, said the American computer guru Ray Kurzweil.


Machines will rapidly overtake humans in their intellectual abilities and will soon be able to solve some of the most intractable problems of the 21st century, said Dr Kurzweil – one of 18 maverick thinkers chosen to identify the greatest technological challenges facing humanity.


Dr Kurzweil is considered one of the most radical figures in the field of technological prediction. His credentials stem from being a pioneer in various fields of computing, such as optical character recognition and automatic speech recognition by machine.


His address yesterday to the American Association for the Advancement of Science (AAAS) portrayed a future where machine intelligence will far surpass that of the human brain as machines learn how to communicate, teach and replicate among themselves.


Central to his thesis is the idea that silicon-based technology follows the “law of accelerating returns”. The computer chip, for example, has doubled in power every two years for the past 50 years, which has led to an ever-accelerating progression in all chip-based technologies.
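A quick calculation (not from the article, just an illustration of the compounding involved) shows what 50 years of doubling every two years amounts to:

```python
# If chip power doubles every 2 years, 50 years gives 25 doublings.
doublings = 50 // 2
growth = 2 ** doublings
print(growth)  # 33554432 - roughly a 33-million-fold improvement
```

This is the sense in which steady doubling, unremarkable over any one product cycle, compounds into an enormous change over a human lifetime.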



Technical progress manifold


Dr Kurzweil told the annual meeting of the AAAS in Boston: “The paradigm shift rate is now doubling every decade, so the next half century will see 32 times more technical progress than the previous half century. Computation, communication, biological technologies – for example, DNA sequencing – brain scanning, knowledge of the human brain, and human knowledge in general are all accelerating at an ever-faster pace, generally doubling price-performance, capacity and bandwidth every year.”
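The factor of 32 follows directly from the doubling claim. A minimal sketch (an illustration, not taken from Dr Kurzweil's talk): if the rate of progress doubles every decade, then progress in decade n is proportional to 2 to the power n, and summing five future decades against five past ones gives exactly 32:

```python
# Progress in decade n is proportional to 2**n (rate doubles each decade).
past = sum(2 ** n for n in range(-4, 1))   # the previous five decades
future = sum(2 ** n for n in range(1, 6))  # the next five decades
print(future / past)  # 32.0
```

In other words, under constant doubling the next half century always contains 2^5 = 32 times the progress of the last, whichever point you measure from.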


Computers have so far been based on two-dimensional chips made from silicon, but there are developments to make three-dimensional chips with improved performance, and even to construct them out of biological molecules that can be miniaturised even more than metal-based computer chips.


“Three-dimensional, molecular computing will provide the hardware for human-level ‘strong artificial intelligence’ by the 2020s. The more important software insights will be gained in part from the reverse engineering of the human brain, a process well under way. Already, two dozen regions of the human brain have been modelled and simulated,” he said.


Although the brain cannot match computers in terms of the straight storage and retrieval of information, it has an unrivalled capacity for associating different strands of information, looking ahead and planning, and exercising the imaginative creativity that is at the heart of human existence. But Dr Kurzweil is one of several computer scientists who believe that computers are well on the way to creating a “post-human” world where a second, intelligent entity exists alongside people.


“Once non-biological intelligence matches the range and subtlety of human intelligence, it will necessarily soar past it because of the continuing acceleration of information-based technologies,” he said.


“We are understanding disease and ageing processes as information processes, and are gaining the tools to re-programme them. RNA interference, for example, allows us to turn selected genes off, and new forms of gene therapy are enabling us to effectively add new genes. Within two decades, we will be in a position to stop and reverse the progression of disease and ageing, resulting in dramatic gains in health and longevity,” said Dr Kurzweil.



Rise of the machines


The history of artificial intelligence goes back to classical times, although of course it was never called by that name. The Greek myths of Hephaestus and Pygmalion incorporate the idea of intelligent machines that take on human form. We would call them robots.


Mary Shelley took up the theme of man trying to create a living image of himself in her story of Frankenstein’s monster, but the word “robot” did not enter the English language until Karel Capek’s play R.U.R. – which stood for Rossum’s Universal Robots – was translated into English in 1923.

The idea of a machine being able to match the intelligence of humans was explored in the 1940s by the English mathematician Alan Turing, who devised his test of artificial intelligence.

In a scientific paper published in 1950, Turing came up with a practical solution to the problem – the Turing test. Turing said that a machine would be deemed to have passed the test if a person conversing with it could not reliably tell its responses from those of another human being.


The term “artificial intelligence” (AI) was first coined by the computer scientist John McCarthy in 1956, and the concept was explored in the 1950s and 1960s by the likes of Marvin Minsky, of the Massachusetts Institute of Technology.

The science fiction writer Arthur C Clarke drew on the concept of AI in his book 2001: A Space Odyssey, which featured an intelligent computer called HAL that was an intellectual match for man.


On May 11, 1997, the IBM computer Deep Blue became the first machine to beat a reigning world chess champion, Garry Kasparov, in a match. This was soon followed by other “intelligent” feats such as the robot car driver, which drove 210 km along an unrehearsed desert trail. AI, portrayed in films such as Blade Runner and The Terminator, was on a roll again.