Artificial intelligence (AI) can be trained to detect whether an image of tissue contains a tumor. Until recently, however, it remained a mystery how the AI reached its verdict. A team from the Center for Protein Diagnostics (PRODI) at Ruhr-Universität Bochum is working on a new approach that would make the AI’s judgment explainable and therefore trustworthy.
Researchers led by Professor Axel Mosig describe the approach in the journal Medical Image Analysis.
For the study, bioinformatics scientist Axel Mosig collaborated with Professor Andrea Tannapfel, Head of the Institute of Pathology, and oncologist Professor Anke Reinacher-Schick of St. Josef Hospital. The group developed a neural network, i.e. an artificial intelligence, that can classify whether a tissue sample contains a tumor or not. To this end, they fed the AI a large number of microscopic tissue images, some containing tumors, others tumor-free.
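The inductive setup described here can be sketched in miniature. The following toy example is not the PRODI team's actual model (which is a deep neural network trained on microscopic images); all names and the synthetic images are hypothetical. It only illustrates the principle: labelled examples go in, and a general decision rule comes out.

```python
# Toy sketch (NOT the study's actual network): a minimal threshold
# classifier trained on synthetic "tissue" images. It illustrates
# inductive learning: a general rule is derived from labelled examples.

def make_image(has_tumor, size=8):
    """Synthetic stand-in: 'tumor' images get a bright central patch."""
    img = [[0.2 for _ in range(size)] for _ in range(size)]
    if has_tumor:
        for r in range(3, 5):
            for c in range(3, 5):
                img[r][c] = 0.9
    return img

def mean_intensity(img):
    return sum(sum(row) for row in img) / (len(img) * len(img[0]))

def train_threshold(samples):
    """Learn a decision threshold from labelled (image, label) pairs."""
    pos = [mean_intensity(i) for i, y in samples if y == 1]
    neg = [mean_intensity(i) for i, y in samples if y == 0]
    return (min(pos) + max(neg)) / 2

def classify(img, threshold):
    return 1 if mean_intensity(img) > threshold else 0

training = [(make_image(True), 1) for _ in range(5)] + \
           [(make_image(False), 0) for _ in range(5)]
t = train_threshold(training)
print(classify(make_image(True), t))   # synthetic tumor image -> 1
print(classify(make_image(False), t))  # synthetic tumor-free image -> 0
```

A real histopathology classifier would learn far subtler features than mean brightness, which is exactly why its decisions are hard to inspect.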
“Neural networks are initially a black box: it is not clear which defining features the network learns from the training data,” explains Axel Mosig. Unlike human experts, they lack the ability to explain their decisions. “However, for medical applications in particular, it is important that the AI be capable of explanation and therefore trustworthy,” adds bioinformatics scientist David Schumacher, who was involved in the study.
Artificial intelligence is based on falsifiable hypotheses
The explainable AI of the Bochum team is thus based on the only type of meaningful statement known to science: falsifiable hypotheses. If a hypothesis is false, this fact must be demonstrable through an experiment. AI usually follows the principle of inductive reasoning: from concrete observations, i.e. the training data, the AI creates a general model on the basis of which it evaluates all further observations.
The philosopher David Hume described the underlying problem 250 years ago, and it is easily illustrated: no matter how many white swans we observe, we can never conclude from these data that all swans are white and that black swans do not exist. Science therefore makes use of deductive reasoning, in which a general premise is the starting point. For example, the hypothesis that all swans are white is falsified when a black swan is spotted.
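The asymmetry between confirmation and refutation can be written down directly. This minimal sketch (with made-up swan data) shows that any number of confirming observations proves nothing, while a single counterexample suffices to reject a universal hypothesis:

```python
# Minimal sketch of falsification: the hypothesis "all swans are
# white" survives any number of confirming observations, but a
# single black swan refutes it.

def falsified(hypothesis, observations):
    """Return True if any observation contradicts the hypothesis."""
    return any(not hypothesis(obs) for obs in observations)

def all_swans_white(swan):
    return swan["color"] == "white"

sightings = [{"color": "white"}] * 1000       # confirmations prove nothing
print(falsified(all_swans_white, sightings))  # False: not yet refuted

sightings.append({"color": "black"})          # one counterexample suffices
print(falsified(all_swans_white, sightings))  # True: hypothesis rejected
```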
The activation map shows where the tumor was detected
“At first glance, inductive AI and the deductive scientific method seem nearly incompatible,” says Stephanie Schörner, a physicist who also contributed to the study. But the researchers have found a way. Their new neural network not only provides a classification of whether a tissue sample contains a tumor or is tumor-free, but also generates an activation map of the microscopic tissue image.
The activation map is based on a falsifiable hypothesis: that the activations derived from the neural network correspond exactly to the tumor regions in the sample. This hypothesis can be tested with site-specific molecular methods.
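The study's actual test relies on site-specific molecular methods, but the quantitative side of such a test can be illustrated with a standard overlap measure. In this hypothetical sketch (the masks, the Dice coefficient as the metric, and the 0.7 threshold are all assumptions, not the study's protocol), the hypothesis "activated regions coincide with tumor regions" is refuted when the overlap falls below a chosen threshold:

```python
# Illustrative stand-in (NOT the study's molecular validation):
# test the falsifiable hypothesis "activated regions coincide with
# tumor regions" by comparing a binary activation map against a
# reference tumor mask using the Dice overlap coefficient.

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks (nested lists)."""
    inter = sum(a * b for ra, rb in zip(mask_a, mask_b)
                      for a, b in zip(ra, rb))
    total = sum(sum(r) for r in mask_a) + sum(sum(r) for r in mask_b)
    return 2 * inter / total if total else 1.0

def hypothesis_refuted(activation_map, tumor_mask, min_overlap=0.7):
    """Refute the hypothesis if overlap falls below the threshold."""
    return dice(activation_map, tumor_mask) < min_overlap

# Hypothetical 4x4 example: activation closely matches the reference.
activation = [[0, 0, 0, 0],
              [0, 1, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 0]]
reference  = [[0, 0, 0, 0],
              [0, 1, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]]
print(round(dice(activation, reference), 2))      # 0.86
print(hypothesis_refuted(activation, reference))  # False: hypothesis stands
```

The point of the approach is exactly this testability: unlike an opaque classification score, the activation map makes a claim that an independent measurement could, in principle, prove wrong.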
“Thanks to PRODI’s multidisciplinary structures, we have the best prerequisites for integrating a hypothesis-based approach into the development of a trustworthy AI biomarker in the future, for example to differentiate specific treatment-related tumor subtypes,” concludes Axel Mosig.
This story was published from a news agency feed without modifications to the text. Only the title has been changed.