At the dawn of human civilisation the brain was not recognised as the centre of our intelligence. It was Galen, one of the leading physicians of the Roman Empire during the second century, who was among the first to recognise its importance, writing an essay in which he speculated that the brain was the centre of cognition and willed action.
In its quest to develop intelligent machines, humanity built many machines that could perform basic numerical calculations, such as the Pascaline, a machine created by Blaise Pascal that, using a system of gears and wheels, could add and subtract numbers.
Pascal was a brilliant mathematician and philosopher, whose conception of human existence was that of an unstable reality in which we live amid continuous contradictions, moral, human and even physical, poised between the infinitely small and the infinitely large.
Born at the beginning of the seventeenth century, he lived in a period that had not yet known the Enlightenment. Pascal, therefore, was still mainly concerned, in his philosophical research, with a profoundly religious vision of our existence. It is a sad irony that, when he died, Pascal was found to have damage to his brain.
Half a century later, another pillar of modern mathematics, Gottfried Wilhelm von Leibniz, who together with Newton helped develop differential calculus (later put on rigorous foundations by Weierstrass, Cantor, Hilbert, Vitali and others), improved on the Pascaline and in 1694 created a machine that could also multiply and divide. Leibniz contributed to metaphysics by introducing the idea of the monad, a form of being like a spiritual atom that follows its own internal laws and reflects the whole universe. Leibniz is also important because he invented the binary system, on which modern computers are based.
Almost two centuries later, in 1890, Herman Hollerith invented the tabulating machine, which used punched cards to store data that could then be recalled to quickly perform statistical analysis. Each card represented one example, the input that could be passed to the machine to derive statistical results. Punched cards were also used with the first computers, as a way to input code and information into the machine. For Hollerith’s tabulating machine, a punched card represented a “memory” of a sample to be analysed.
In 1943 McCulloch and Pitts presented a model of how a brain’s neuron works, and in 1949 Donald Hebb proposed that the connection between two neurons strengthens if the two neurons activate simultaneously, and weakens otherwise. Hebb was a psychologist, and his simple idea is one of the foundations of modern artificial neural nets.
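Hebb's idea can be expressed as a simple weight update, delta_w = lr * x * y: a connection is strengthened only when pre- and post-synaptic activity coincide. A minimal sketch (variable names and values are illustrative, not from any historical formulation):

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    """Hebb's rule: strengthen a weight when pre-synaptic (x) and
    post-synaptic (y) activity coincide; delta_w = lr * y * x."""
    return w + lr * np.outer(y, x)

# two input neurons connected to one output neuron
w = np.zeros((1, 2))
x = np.array([1.0, 0.0])   # pre-synaptic activity: only the first input fires
y = np.array([1.0])        # post-synaptic activity: the output fires
w = hebbian_update(w, x, y)
# only the connection from the co-active input neuron is strengthened
```

Note that in this sketch a connection with zero pre-synaptic activity is simply left unchanged; biological formulations also include decay, which is where the "weakens otherwise" part of Hebb's proposal comes in.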
A few years later, in 1957, Frank Rosenblatt invented the perceptron, the first attempt to create an artificial neural network, though still quite primitive and incomplete. It is loosely based on McCulloch and Pitts’s earlier description of how a neuron works.
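The perceptron learning rule nudges the weights toward each misclassified example until every training example is classified correctly, which is guaranteed to happen when the data are linearly separable. A sketch (the code and hyperparameters are illustrative, not Rosenblatt's original formulation) learning the logical AND function:

```python
import numpy as np

def perceptron_train(X, y, epochs=10, lr=1.0):
    """Perceptron learning rule: for each misclassified example,
    move the weights toward (or away from) that example."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (target - pred) * xi   # no change if prediction is correct
            b += lr * (target - pred)
    return w, b

# logical AND: linearly separable, so the perceptron can learn it
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
```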
More complex neural nets were then defined, and in 1974 Paul Werbos described back-propagation as a way to make a neural net learn. This opened the way to neural networks with hidden layers, which could learn more complex structures. However, feed-forward nets were still lacking an important element of intelligence: memory.
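The idea can be sketched on a toy problem. The example below (the network size, learning rate and iteration count are all illustrative choices) trains a single-hidden-layer net on XOR, a function a single perceptron cannot represent, by propagating the output error backwards through the layers:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: not linearly separable, so it needs a hidden layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output
lr = 0.5

losses = []
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # backward pass: chain rule, output layer first
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient descent step
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)
```

The two `d_*` lines are the whole of back-propagation here: each layer's error is the next layer's error multiplied back through that layer's weights and activation derivative.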
Simple feed-forward nets are unable to retain information: everything learned in one feed-forward pass is lost when the process restarts. In 1982, however, John Hopfield proposed the Hopfield net, a recurrent network with a well-defined energy function whose stable states can be stored and later retrieved, similarly to how memories are recalled.
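A Hopfield net stores patterns in its weight matrix and retrieves them by letting the state descend the energy E = -1/2 s^T W s until it settles into a stored pattern. A minimal sketch (the pattern and sizes are illustrative) recovering a stored pattern from a corrupted copy:

```python
import numpy as np

def hopfield_store(patterns):
    """Hebbian storage: W is the sum of outer products of the
    stored patterns (entries in {-1, +1}), with a zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W

def hopfield_recall(W, state, steps=10):
    """Update the state until it settles; each update lowers the
    energy E = -0.5 * s @ W @ s. (Synchronous updates for brevity;
    the classical formulation updates one neuron at a time.)"""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1     # break ties deterministically
    return state

# store one pattern, then recall it from a copy with one flipped bit
pattern = np.array([1, -1, 1, -1, 1, -1])
W = hopfield_store(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -1                     # corrupt the "memory"
recovered = hopfield_recall(W, noisy)
```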
Soon after, in 1986, Paul Smolensky invented the Harmonium (later renamed the Restricted Boltzmann Machine), which became very popular after 2006 thanks to contributions by Geoffrey Hinton.
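A Restricted Boltzmann Machine has a layer of visible units and a layer of hidden units, with connections only between the two layers, and it is typically trained with Hinton's contrastive divergence. A rough sketch of one-step contrastive divergence (CD-1) on toy binary data (the sizes, data and hyperparameters are all illustrative, not a tuned implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
a = np.zeros(n_visible)   # visible biases
b = np.zeros(n_hidden)    # hidden biases

# toy binary data: two rough "clusters" of active units
data = np.array([[1, 1, 1, 0, 0, 0],
                 [1, 0, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)

for _ in range(1000):
    v0 = data
    ph0 = sigmoid(v0 @ W + b)                       # hidden probabilities
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + a)                     # one-step reconstruction
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b)
    # CD-1 update: positive phase (data) minus negative phase (reconstruction)
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(data)
    a += lr * (v0 - v1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)

# after training, reconstructions should resemble the training data
recon = sigmoid(sigmoid(data @ W + b) @ W.T + a)
```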
What is clear is that, over time, science has become more and more fascinated with the brain and how it functions. However, perhaps one of the most brilliant intuitions of how the brain works, albeit unknowingly to its author, is due not to a scientist but to a merchant.
In 1801 Joseph Marie Charles invented the Jacquard loom to weave different patterns using cards punched with holes. Charles, nicknamed Jacquard, was not a mathematician or a philosopher, but a merchant interested in automating weaving. The Jacquard loom worked by abstracting weaving patterns into punched cards that could then be recalled to allow the weaving machine to reproduce them at will.
In both the Jacquard loom and the Hollerith tabulating machine, we can think of the punched card as a “memory”: the memory of a pattern for the Jacquard loom, the memory of a sample for the tabulating machine. In both cases the punched card is an abstraction, but beyond the apparent similarity the two are quite different. For the Jacquard loom the punched cards are representations of features that can be used and recombined to create different patterns; in the tabulating machine they are just the samples to be analysed to derive statistical results.
Possibly more than the Pascaline, the Leibniz machine or Hollerith’s tabulating machine, the Jacquard loom carried the seed of modern deep learning systems, with a punched card used as an abstract representation of reality. All the work of creating the cards was of course still done by humans, but the idea that complicated patterns could be abstractly stored and then recombined is at the basis of modern deep neural nets and stacked Restricted Boltzmann Machines.
If we study how the visual system works, we learn that it understands reality hierarchically through the visual cortex. After the image forms on the retina, it goes through several layers of processing: first the V1 area (the striate cortex), which extracts basic features such as oriented lines and motion; then the V2 area, which can interpret colours and maintain their constancy under different lighting; then the V3 and V4 regions, which refine colour and form perception; and finally the inferior temporal cortex (IT), which is involved in face and object recognition.
Our brain, then, seemingly makes sense of reality by creating simple abstract representations at different levels that it can then recombine. Our memories are processed samples of these abstractions that we can recall and reuse. In this regard, a simple weaving loom created in 1801 may be more “intelligent” than some of the latest computers.