AI beyond pattern recognition #1: Interview with Christoph von der Malsburg

Today, the world seems enamoured with the possibilities of Artificial Intelligence. As of now, this largely amounts to powerful, yet predictable and widely understood algorithms for search and pattern recognition, which inspired the era of Data Science and Machine Learning. While the effects of these techniques are undoubtedly great and in many use cases their potential has yet to be unlocked, their capabilities are far from human cognition: machines that truly learn continuously, represent knowledge and reason with it, and “think” creatively and strategically are yet to be invented. At the frontiers of scientific research, pioneers are pushing beyond the pattern recognition paradigm, towards real intelligence. In this new series of blog posts, we talk to them about their visions and the research they are undertaking to get there.

For our first interview, Paul von Bünau spoke to Christoph von der Malsburg, Senior Fellow at the Frankfurt Institute for Advanced Studies and member of the scientific advisory board of the Human Brain Project. Von der Malsburg's focus is the organization and function of the brain.

Paul von Bünau: You have developed a group of methods (the dynamic link architecture) based on principles other than those of pattern recognition techniques such as deep learning, regression, or support vector machines. What is the basic idea behind it?

Christoph von der Malsburg: Whereas in all known neural systems the representation of active content is based on the activity of neurons, the dynamic link architecture represents cognitive content by structured networks that activate and de-activate on the functional time scale.
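To make the idea more tangible, here is a deliberately simplified toy sketch in Python (our illustration with invented sizes and rates, not von der Malsburg's actual model): links between an "image" layer and a "model" layer grow rapidly wherever the features at their two ends agree and compete within each unit, so that a temporary mapping between the layers emerges on a fast time scale and can dissolve again when the input changes.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(0, 4, size=6)             # feature labels on the image layer
model = np.roll(image, 2)                      # same features, shifted by two positions
links = np.ones((6, 6)) / 6                    # weak, uniform initial link strengths

for _ in range(30):                            # fast, "functional" time scale
    match = image[:, None] == model[None, :]   # co-activation of agreeing features
    links += 0.2 * match * links               # rapid Hebbian growth on matching links
    links /= links.sum(axis=1, keepdims=True)  # competition among each unit's links

print(np.round(links, 2))                      # strength concentrates on matching pairs
```

The toy stops at feature agreement; dynamic link matching additionally favours neighbourhood-preserving sets of links, so that a single coherent mapping wins out.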

PB: Is your scientific work focused more on the development of novel algorithms for practical use or rather on a better understanding of how the brain works?

CM: My main interest lies in understanding brain function. As my methodology is based on simulating typical brain processes on a computer, I am in effect pursuing both goals at the same time.

PB: What is the advantage of your approach? For example, you write that your algorithms require less training data (parameter estimation). Statistically, this is only conceivable if you replace calibration data with accurate assumptions (a prior).

CM: The prior that I am advocating does not concern particular content but rather the structure of admissible network patterns. This prior takes the form of the well-known laws of network self-organization. Learning from input statistics can be considered a search in network structure space. Network self-organization reduces the volume of this search space by many orders of magnitude, thus reducing the amount of required data by an equal factor.
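To give a feeling for the orders of magnitude, a back-of-the-envelope calculation (the sizes n and k are our own illustrative choices, not numbers from the interview): compare the number of unconstrained binary connectivity patterns on n neurons with the number remaining when a structural prior admits links only among each neuron's k nearest neighbours.

```python
import math

n, k = 20, 3                                 # illustrative sizes, chosen arbitrarily
possible_edges = n * (n - 1) // 2            # every neuron pair could be linked

# Unconstrained search: each potential edge is independently present or absent.
log10_unconstrained = possible_edges * math.log10(2)

# Structural prior: only links to the k nearest neighbours are admissible,
# leaving roughly n * k candidate edges (a deliberately crude bound).
log10_constrained = n * k * math.log10(2)

print(f"unconstrained structures: ~10^{log10_unconstrained:.0f}")  # ~10^57
print(f"with structural prior:    ~10^{log10_constrained:.0f}")    # ~10^18
```

A smaller hypothesis space needs correspondingly less data to single out one structure, which is the statistical sense in which such a prior substitutes for calibration data.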

PB: One possible type of prior includes heuristics, such as the assumption that the world is designed in such a way that visual input only changes gradually over time (for example, gradual translation as a vehicle passes). In one of your papers, you present a model of how the dynamic link architecture can leverage such input to learn a self-organising, translation- and rotation-invariant mapping. How far, would you say, does this scheme correspond to the way the human brain learns and works, and to what extent is it simplistic or perhaps even not analogous?

CM: The key principle of perception and understanding the world in the brain is to represent the infinite variety of sensory patterns with the help of a finite construction kit of structural fragments. Computer graphics demonstrates the power of this approach, creating infinite varieties of realistic-looking scenes on a very compact structural base. To represent moving images of objects, computer graphics separately represents the intrinsic structure of objects on the one hand and the laws of projection into image space on the other, the latter being common to all objects. The brain demonstrably uses this approach as well, being able to recognize an object in all transformed versions after having seen it only once. The dynamic link architecture models this by being able to learn and self-organize the image transformation laws once and for all.
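A minimal sketch of the "learn the transformation laws once and for all" idea (our simplification, not the dynamic link architecture itself): a single stored template is recognized under every translation because the matcher searches over a transformation family that is shared by all objects, here all cyclic shifts, instead of memorizing each transformed view.

```python
import numpy as np

rng = np.random.default_rng(2)
template = rng.standard_normal(32)          # the object, seen exactly once

def match_score(view: np.ndarray, template: np.ndarray) -> float:
    # Best agreement over the whole transformation family (all shifts);
    # the family is fixed once and reused for every object.
    return max(float(np.dot(np.roll(template, s), view))
               for s in range(len(view)))

view = np.roll(template, 11)                # the same object, translated
impostor = rng.standard_normal(32)          # a different, never-seen object

print(match_score(view, template) > match_score(view, impostor))  # True
```

Nothing object-specific has to be retrained to handle the translated view; only the shared transformation family is searched.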

PB: How universal is the aforementioned scheme (self-organised learning through assumptions about the nature of the input)? Is it only a model for image recognition, or does it also apply to other forms of information processing in the brain?

CM: I would rather say that learning could not work for a visual system in isolation. Learning requires several independent senses as well as the ability to move and act. The stringent criterion for the brain to accept representations as reliable knowledge about the environment is the ability of these representations to explain the sensory patterns in the different senses as well as their dependence on the system's own motor acts. The dynamic link architecture is to be seen as a basis for this brain-spanning process.

PB: Machine learning procedures usually differentiate strictly between a training phase (model calibration) and application (prediction); continuous learning typically does not take place. Does your approach include a natural concept of continuous learning?

CM: My approach is based on nothing else but continuous learning. The problem with continuous learning, the stability-plasticity dilemma, is minimised by the fact that inputs are first analysed (through segmentation and classification) so that plasticity in the system is narrowly restricted to the parts that are relevant to the current input.
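As an illustration of "analyse first, then restrict plasticity" (a generic competitive-learning sketch of our own, not von der Malsburg's specific mechanism): each input is first classified to its nearest stored prototype, and only that prototype is adapted, so learning the current input cannot overwrite unrelated memories.

```python
import numpy as np

rng = np.random.default_rng(3)
prototypes = rng.random((4, 8))             # long-term memory: four prototypes

def learn(x: np.ndarray, prototypes: np.ndarray, lr: float = 0.2) -> int:
    # Analyse first: classify the input to its best-matching prototype ...
    winner = int(np.argmin(((prototypes - x) ** 2).sum(axis=1)))
    # ... then restrict plasticity to that one place in the system.
    prototypes[winner] += lr * (x - prototypes[winner])
    return winner

x = rng.random(8)                           # a new input arrives
before = prototypes.copy()
winner = learn(x, prototypes)
moved = np.abs(prototypes - before).sum(axis=1) > 0
print(winner, moved)                        # only the winning row has changed
```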

PB: In which commercially relevant application areas do you think your approach can best exploit its strengths, especially when compared to conventional pattern recognition algorithms?

CM: Like our brain, the approach is best suited for dealing with natural environments as needed for self-driving cars and household robots.

PB: You are not only a scientist, but also an entrepreneur with your own product: mindfire.ai. What is your goal with this venture? What have you achieved so far?

CM: In my previous companies, ZN Vision in Bochum and Eyematic Inc. in Los Angeles, I focused on facial recognition. ZN Vision went through several mergers and sales and continues to exist as Idemia AG, while Eyematic Inc. merged into Google, where it formed the core of the Google Goggles and Google Glass projects. Mindfire is a Swiss foundation that is currently in the development and financing phase. My goal with these commercial ventures is to prove the functional capability of the system, thereby triggering a paradigm shift in the scientific world and opening the door to a completely new field of technology.

Prof. Dr. Christoph von der Malsburg, born May 8th, 1942, in Kassel, studied physics in Göttingen, Munich and Heidelberg, and obtained his PhD in Heidelberg with work on elementary particle physics done at CERN, Geneva. After a longer period at the Division of Neurobiology of the Max Planck Institute for Biophysical Chemistry in Göttingen, he became Professor of Computer Science, Neuroscience, Physics and Psychology at the University of Southern California in Los Angeles and, sharing his time, Professor of Systems Biophysics at the Institute for Neural Computation of Ruhr-University Bochum, which he co-founded. He is now Senior Fellow at the Frankfurt Institute for Advanced Studies. In the neurosciences he is known for his theories of network self-organization in the growing brain and for his Dynamic Link Architecture, which puts cognition on a neural basis; he proposes self-organized neural nets as the neural code for the interpretation of mental processes. He co-founded two successful companies based on his theory and is the recipient of a number of national and international awards.
