The singularity that will never happen


Machine learning, deep learning and artificial intelligence are all terms that have gained considerable popularity in recent times, all the more so after many scientists and researchers, including, among many others, Prof. Stephen Hawking, arguably the most famous contemporary theoretical physicist, warned us about the future dangers posed by progress in artificial intelligence research. Hawking, in a BBC interview, went as far as saying that “The development of full artificial intelligence could spell the end of the human race”. Indeed, during a UN meeting held on 14 October 2015, experts warned about the dangers posed by artificial intelligence, and an open letter drafted research priorities, including the need for machine ethics and for lethal autonomous weapons to comply with humanitarian law.

The term “singularity” gained popularity after the Polish mathematician Stanislaw Ulam used it in his tribute to John von Neumann, describing “the ever accelerating progress of technology … which gives the appearance of approaching some essential singularity … beyond which human affairs, as we know them, could not continue”. Another mathematician and computer scientist, Vernor Vinge, argued in a 1993 paper that there will come a point when intelligent machines are able to create yet more intelligent ones in “an exponential runaway beyond any hope of control”; it is this point that is called the singularity.

First developed decades ago, neural networks have found a resurgence in popularity and applications thanks to much progress made in recent years in their theoretical framework, to advances in hardware speed, and to the use of graphical processing units (GPUs). Neural nets try to mimic the functioning of the brain’s biological neural networks (hence their name) using functions that assign values (often just probabilities of a neuron firing) to the artificial neurons comprising the network. These neurons are combined either in a feed-forward network (where information only moves forward) or a recurrent one (where information can also flow back, the neurons being connected by two-way links) and, regardless of the structure, the connections carry numerical values that define their strength. Neurons with strong positive connections tend to fire together, while neurons with negative connections do not. Small subsets of these neurons define features of the data that can be combined to find rules and make predictions about its nature. Artificial neural networks have proved surprisingly effective in many activities that, until just a few years ago, computers found quite elusive. In particular, neural nets have found many applications in settings where there is little need for a human to hand-craft features or, in the case of unsupervised learning, even to label the data beforehand.
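The structure described above can be illustrated with a minimal sketch. The code below is only a toy forward pass, not any particular production system: it assumes a tiny feed-forward network with one hidden layer, random connection strengths (weights), and a sigmoid function standing in for the neuron’s “firing” value.

```python
import numpy as np

def sigmoid(x):
    # Squash a neuron's summed input into a firing value between 0 and 1
    return 1.0 / (1.0 + np.exp(-x))

def feed_forward(inputs, weights_hidden, weights_output):
    # Information only moves forward: inputs -> hidden layer -> output.
    # Each connection is a numerical weight defining its strength.
    hidden = sigmoid(inputs @ weights_hidden)
    output = sigmoid(hidden @ weights_output)
    return output

# Toy example: 3 input neurons, 4 hidden neurons, 1 output neuron.
rng = np.random.default_rng(0)
w_hidden = rng.normal(size=(3, 4))   # strengths of input->hidden links
w_output = rng.normal(size=(4, 1))   # strengths of hidden->output links

x = np.array([0.5, -1.0, 2.0])
y = feed_forward(x, w_hidden, w_output)
print(y)  # a single value between 0 and 1
```

Training such a network amounts to adjusting those weights from data; in a recurrent network, the hidden layer would additionally feed back into itself at the next step.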

For example, a Facebook application named DeepFace, which aims at recognising faces and determining whether two photos, irrespective of angle or lighting, contain an image of the same person, has reached an accuracy that matches or surpasses that of a human. This is, put quite simply, a real software feat that was unimaginable just a few years ago. More importantly, the software needs no hand-engineered features from human technicians: given a vast amount of data, it learns the relevant representations itself. Since, in the era of big data, large amounts of data are cheap and human labour can be costly, such applications are the object of much research by some of the largest and most lucrative companies in the world, such as Google, Facebook, Microsoft, Baidu, Netflix, and so on.

Not everyone, however, seems to be so alarmed. Tim Dettmers, a deep learning student and Kaggle competitor, in an interesting post compares the complexity of a biological human brain to that of an artificial neural net, concluding that, despite all the impressive recent advancements, the best current neural net has much less than 1% of a human brain’s capability. This shouldn’t come as a surprise, given that much of the neural nets’ performance comes in still quite narrowly focussed tasks. Gary Marcus, professor of psychology at New York University and co-founder of the machine learning company Geometric Intelligence, argues that current designs can only provide A.I. systems with limited flexibility and that much of the progress of the last few years, while remarkable, is probably less important than we would like to think. Machines are already miles ahead of us in some respects (such as fast computation and arithmetic) but far behind us in many others. Intelligence is a multi-dimensional variable, and as such we cannot define a precise moment when machines will become more intelligent than we are. The idea of a singularity as a precise date in time, he argues, does not necessarily make much sense.

Certainly, despite all the improvements in performance and flexibility, there is still much work to be done on artificial neural networks. Most researchers would agree that the time when machines might be able to work on a variety of complex problems as humans do is still many years ahead, with “many” ranging anywhere from 10 to 70 or more years. Indeed, it can be argued that our model of a neural net is still too primitive to compete with a biological brain, regardless of speed or hardware limitations.

If we think about the development of linguistics, we may perceive similarities with what goes on in modern machine learning. Before the ’70s, language learning and comprehension were mostly seen as an imitative process of reinforcement through trial and error. Chomsky argued that language is naturally wired into our brain and that children are, in fact, hard-wired for the grammatical structure of the languages we use. Similarly, there may be a “code” in our brain that makes us learn the way we do, not unlike our very own genetic code that directs every single cell in our organism to do what it does without fail. Without cracking this code, or imparting general “behavioural” rules to our neural nets, it may be impossible to narrow the divide in performance between artificial and biological neural networks.

Biology has had millions of years to develop a model that works in our Earth environment which, even through several glaciations and catastrophic events, has remained almost unchanged. Yet, when we compare a human brain to that of an ape, separated by a relatively short span in evolutionary terms, there may not be much difference, given how slowly evolution works. Still, one may be puzzled by the difference in cognitive ability that seems to distinguish us from dolphins or chimpanzees, until we realise that the really unique factor distinguishing us from other animals is simply a much more evolved language. Without language our species, no matter how intelligent, would arguably still live in caves, with no electricity, heating, or carry-out food. Oral language and, most importantly, written language, which is truly unique to our species, allow us to build our progress on the contributions of our most intelligent members, not our average ones. A simple distribution curve may allow for only very few “super-smart” people, but that is all we need, as they can provide, through their research and written communication, the results of their insights to each and every one of us. While I may never have come up with the theory of relativity, I can read and comprehend Einstein’s work, not least because before him I had the chance to learn and assimilate the work of Pythagoras, Galileo, Newton, Minkowski, etc. Without language, one “super-smart” person’s insights would be lost forever, never to be assimilated by the rest of us; anything that cannot be explained through simple gestures or a primitive language could not be shared. Communication through a mature and complete language was necessary for our species’ evolution.

If we accept that the real discriminant between us and the other species of the animal kingdom is language alone, and we furthermore accept that language is hard-wired in our brain, we begin to realise how a flexible but structured brain can be more intelligent than loose gray matter that needs molding.

There are differences between the brains of different people so large that they cannot be explained away by differences in structure, weight, or number of neurons and connections alone. Language, which may be the strongest element differentiating our species from the rest of the animals, may help explain not only our scientific and technological progress, but also our species’ greater cognitive abilities. As Chomsky theorised, if language is the result of a code hard-wired in our brain, it may also point to other hard-wired rules that help us manipulate our information and knowledge through symbolic manipulation. Until we understand this underlying structure, machines may never be able to reach our level of understanding and intelligence, which may lie farther in the future than we think. It is this subtle but real underlying structure that may explain the differences in cognitive abilities; an underlying structure that may not, in fact, allow for infinite refinement, or for the realisation of the so-called “singularity”.

Moreover, another important aspect that we often neglect to consider is the progress of neuroscience. While cochlear implants have been used for years, we are now also able to create artificial neurons that mimic human cells. Artificial chip implants that interact with a biological brain are no longer far-fetched fantasies, and rather than machines supplanting humans, we can expect a closer interaction between humans and artificial intelligence.

The “5 to 7 year shift” is a term used for the period during which children stop thinking uni-dimensionally; before that age, children’s cognition is unidimensional. If children take that long to move beyond a single dimension, dimensionality must be a genuinely complex notion. Intelligence, as Dr. Marcus notes, is a multi-dimensional variable. We should therefore stop thinking of humans and machines as walking along a single one-dimensional path, and instead picture them moving, crossing, interacting and, through neuroscience, even merging on a multi-dimensional hyper-surface on which there won’t be any singularity.

 
