Interview with Ralf Banisch, Senior Data Scientist at Mindpeak

Dr. Ralf Banisch

Senior Data Scientist
Mindpeak

Can you give us a short introduction to what you are doing at Mindpeak?

I work as a Senior Data Scientist at Mindpeak, where we are very focused on building AI for digital pathology. I build deep learning models day in and day out. We run lots of experiments to get the best-performing models, and I am responsible for model development up to the production stage, where I hand it over to the tech team. I also oversee the data annotation side a little bit.

How is Mindpeak supporting pathologists using AI?

Currently, the AI assistants we create are designed to aid pathologists in making optimal decisions.

The pathologist plays a critical role in modern cancer therapies by providing accurate and detailed diagnoses of different types of cancer, enabling personalised treatment plans based on specific cancer characteristics and patient factors.

In order to do this, a biopsy is taken and sent to the wet lab. There, the sample is sliced into very thin layers, which undergo various chemical staining processes before being analysed. The classic way is to use a microscope, but these slides can also be digitised, and the resulting images are then used to diagnose the patient.

Our AI model supports the pathologist by detecting, classifying and counting cells and biomarkers in these images and proposing diagnostic scores. The AI results help the pathologist to assess the slide and make a diagnostic decision: whether the patient is eligible for certain drugs or not.

Besides these clinical applications, we are working on new AI-based biomarkers. AI helps us find novel visual features in tissue that cannot be seen by humans. These biomarkers will enable the development of new drugs and more effective therapies, so that more cancer patients can get help.

What are the challenges of building such models?

One major challenge is the variation in the input data. There are many manual steps in the lab process, and every lab does these things a little bit differently. So the images vary hugely, and we have to be robust against that, because it's just not an option that the system suddenly stops working when we get a slight stain variation or something similar. This is where most alternative existing models fail.

What are the approaches that you can take to make sure that your product will work no matter what the lab did to the input data?

So, first of all, the simplest thing is careful data collection. We have built several partnerships with different laboratories and collect data from a wide range of data sources. Then, we look at the latest advancements in AI research and adapt them to pathology. For example, we use pathology-specific augmentations tailored to the domain in order to enhance variability even more, we use stain normalisation techniques, and we apply self-supervised and semi-supervised learning techniques.
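One of the pathology-specific augmentations mentioned above, stain jitter, can be sketched in a few lines. This is an illustrative example only, not Mindpeak's actual implementation: it uses the standard Ruifrok-Johnston colour deconvolution matrix to separate an RGB image into haematoxylin, eosin and DAB stain channels, randomly perturbs each channel to simulate lab-to-lab stain variation, and converts back to RGB. The function name and parameters are assumptions.

```python
import numpy as np

# Ruifrok & Johnston stain matrix: rows are the RGB optical-density
# signatures of Haematoxylin, Eosin and DAB.
RGB_FROM_HED = np.array([[0.65, 0.70, 0.29],
                         [0.07, 0.99, 0.11],
                         [0.27, 0.57, 0.78]])
HED_FROM_RGB = np.linalg.inv(RGB_FROM_HED)

def hed_stain_jitter(rgb, sigma=0.05, rng=None):
    """Randomly perturb the stain channels of an RGB image (floats in [0, 1])."""
    rng = rng or np.random.default_rng()
    # Convert to optical density, then into stain space (colour deconvolution).
    od = -np.log10(np.clip(rgb, 1e-6, 1.0))
    hed = od @ HED_FROM_RGB
    # Independent multiplicative and additive jitter per stain channel.
    hed = hed * rng.uniform(1 - sigma, 1 + sigma, size=3) \
              + rng.uniform(-sigma, sigma, size=3)
    # Map the perturbed stains back to RGB.
    od = np.clip(hed, 0.0, None) @ RGB_FROM_HED
    return np.clip(10.0 ** (-od), 0.0, 1.0)
```

Applied on the fly during training, a perturbation like this exposes the model to stain appearances it would otherwise only encounter at deployment time.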

Can you tell us more about the interaction between your AI assistant and the human expert?

As of now, our systems are diagnostic assistance systems. They are not autonomous diagnostic systems, they do not make a decision on the patient by themselves. 

Personally, I also think that this is the way to get the best accuracy. The pathologist knows the full context, has all the strengths of the human visual system, has all the expert knowledge, and can contextualise the information provided by the AI.

Can you comment on the relative strengths and weaknesses of the human visual system and computer vision systems?

It used to be the case that computer vision was really good at picking up texture and also better at nuances, while human vision was a lot better at picking up shapes, but I think with vision transformers that has changed a little bit.

Human vision is certainly a bit better at long-range interactions, long-range contexts, and the like, and, most importantly, at taking context outside of the image into account. For example, when the tissue at one end of a slide informs the decision that the tissue at the other end is likely to be cancerous, or when additional patient data, like previous therapies, contains crucial information. All in all, our studies show that for many tasks we are already on par with or even surpass human experts, e.g. in distinguishing tumour tissue from stroma and in classifying single cells.
