Artificial Intelligence and Image-Based Diagnostic Tools
We have recently been hearing much about “machine learning” and “artificial intelligence” (A.I.). While the two terms are often used almost interchangeably, they refer to distinct concepts.
Machine learning, of which “deep learning” is a prominent subfield, is a branch of applied mathematics that uses algorithms and statistical models to build predictive systems capable of answering well-formulated questions that can’t easily be answered in other ways. A.I., in contrast, is the use of machines to perform, or improve on, what people otherwise already do well. Both require very large amounts of high-quality data to be effective, and machine learning is frequently a part of what enables A.I.
While the field of A.I. has been subject to much hype, there have been some significant advances in using this approach within healthcare and the life sciences. One area in which A.I. excels is pattern recognition.
A Boston-based start-up company called FDNA has developed a suite of A.I.-based smartphone apps called Face2Gene that help diagnose rare congenital and neurodevelopmental disorders by scanning and classifying facial features in photos of people. The company makes Face2Gene available without charge to healthcare professionals who can use it to get a second opinion when diagnosing rarely seen genetic disorders or to provide a diagnostic starting point when a physician is not familiar with a patient’s symptoms.
The A.I. system detects subtle patterns in the pictured faces and homes in on possible diagnoses, providing the physician with the 10 most likely options. The researchers at FDNA originally trained the system to distinguish between three different neurodevelopmental conditions with characteristic features and then fed the system more than 17,000 images of diagnosed cases spanning 216 distinct syndromes. When the app was presented with new images, it provided the correct diagnosis 65% of the time, and its top-10 list included the correct diagnosis 90% of the time. In three separate studies, Face2Gene has outperformed clinicians in correctly diagnosing patients. With over 150,000 images in its database today, the app’s diagnostic accuracy has further improved. FDNA is also working to mitigate biases stemming from a relative lack of ethnic and geographic diversity in the photos within its database, which also influences the app’s diagnostic accuracy.
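The two accuracy figures above reflect a standard way of scoring systems that return a ranked list of suggestions: “top-1” accuracy counts a case as correct only if the first suggestion matches the true diagnosis, while “top-10” accuracy counts it as correct if the true diagnosis appears anywhere in the ten suggestions. The sketch below illustrates that scoring; it is not FDNA’s actual code, and the syndrome names and cases are invented for the example.

```python
# Illustrative scoring of ranked diagnostic suggestions (hypothetical data).
def top_k_accuracy(ranked_predictions, true_labels, k):
    """Fraction of cases whose true label appears in the top k suggestions."""
    hits = sum(
        true in ranked[:k]
        for ranked, true in zip(ranked_predictions, true_labels)
    )
    return hits / len(true_labels)

# Made-up example: three cases, each with a ranked list of candidate syndromes.
predictions = [
    ["Noonan", "Angelman", "Williams"],
    ["Angelman", "Noonan", "Cornelia de Lange"],
    ["Williams", "Cornelia de Lange", "Noonan"],
]
truth = ["Noonan", "Cornelia de Lange", "Williams"]

print(top_k_accuracy(predictions, truth, k=1))  # 2 of 3 first guesses correct
print(top_k_accuracy(predictions, truth, k=3))  # all 3 within the top three
```

Widening the list from one suggestion to ten naturally raises the hit rate, which is why the 90% top-10 figure exceeds the 65% top-1 figure.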
Use of A.I.-based pattern recognition for diagnostic purposes has also been explored in oncology. Two groups – one from the National Cancer Institute and Intellectual Ventures Global Good Fund (funded by Bill Gates) and another from the UK’s Institute of Cancer Research – recently published research detailing image-based A.I. tools aimed at improving the diagnosis and treatment of gynecological cancers. The U.S. team’s tool improves the accuracy of screening for precancerous cervical lesions, identifying problematic lesions more accurately than five other tests, including the widely used Pap and PCR-based HPV tests. The U.K. team’s tool can recognize aggressive forms of ovarian cancer based on cell shape and the shape and distribution of cancer cell nuclei, thus providing information that could inform better treatment selection.