Dermatology Times

Kaitlyn Bader, Senior Editor

Joseph Zabinski, PhD, MEM

This Rare Disease Day, Joseph Zabinski, PhD, MEM, of OM1, discusses the role of artificial intelligence in helping to detect rare dermatologic diseases.

Rare Disease Day has been observed globally every year on the last day of February since its creation in 2008. This year, Rare Disease Day falls on February 29 and offers an opportunity to raise awareness for patients with rare diseases.

Artificial intelligence (AI) is quickly emerging as a new tool to study, analyze, and detect rare diseases and their characteristics.

Joseph Zabinski, PhD, MEM, the managing director of AI and personalized medicine at OM1, a real-world data, AI, and technology company focused on chronic diseases, provides insights into AI’s capabilities in detecting rare diseases, as well as a common concern: with so many AI platforms available, how can clinicians trust AI’s data and learn from its insights?

“Just because you have an AI model that does well in one rare disease does not mean that the same thing will work well in another rare disease. That is why our whole digital phenotyping approach is designed to go one level deeper than just point-solution modeling. It is designed to say, ‘We can reflect what is true about the patient and their data characteristics.’ Then we can use that greater flexibility to understand different rare diseases through pattern matching,” said Zabinski.

Transcript

Dermatology Times: Can you please explain the challenge of not being able to find cohort data on a specific condition when trying to improve diagnosis and treatment rates?

Zabinski: There are a few issues that come up when we are trying to understand rare dermatologic diseases from the data access perspective and the patient journey perspective. The first is that patients with rare diseases tend to languish for long periods of time, sometimes going years without a diagnosis or having been misdiagnosed in another disease category. In both cases, this means we cannot trace those patients’ data as far back in the record as we would like, from the point where their disease state began until they reach an accurate diagnosis. However, this is an area where having AI tools to help us look for undiagnosed or misdiagnosed patients can be helpful. Another point with respect to data accessibility is that in a lot of conditions that are more common, there are sometimes rare subtypes of the disease that may be hiding in large populations and in large datasets. Unless we have good tools that can pull out and isolate the patients who belong in those rare subtypes, they can get lost in the noise. It is not something you worry about so much with larger disease states, where population-average characteristics may be more important, but certainly, in the rare disease instance, it is quite important.

Dermatology Times: What is OM1’s Patient Finder?

Zabinski: OM1’s Patient Finder is quite an exciting tool. It does what it says: find patients. It is built using our digital phenotyping AI technology, PhenOM. We can use PhenOM to understand patterning and signaling information in patients’ data histories, just like we might understand a patient’s genetic code by using genotyping, for example. If we can lay out the information in those patients’ histories and then use this digital phenotyping technology, we can ask, “What are some of the data elements that are unique for this patient and perhaps associated with a characteristic or an outcome of interest that that patient also has in their record?”

In the case of rare disease, we might collect a group of hundreds or thousands of patients with a rare disease and ask, “Using this technology, what data characteristics were present in this patient’s past before they reached a point of diagnosis that may have signaled that that’s where they were headed?” In other words, could we detect early warning signals? Patient Finder compresses those sets of signaling information into digital phenotypes; I think of them as fingerprints. Once we have the fingerprint for the disease, we can then compare new patients’ records to it.

This is where Patient Finder is powerful in deployment. We can use it to look at health system data and we can use it to look at other novel datasets, but once it understands that reference fingerprint, we have the ability to call out patients who are highly likely to match that fingerprint, meaning they are highly likely to have the condition that we’re looking for, even if they haven’t been diagnosed yet. By being able to call them out, we can study them further, or, in clinical implementation, patients can be contacted and, if they consent, can proceed with diagnostic evaluation and potentially receive a diagnosis.
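As an illustrative sketch only (not OM1’s PhenOM implementation), the pattern Zabinski describes, learning a disease “fingerprint” from the pre-diagnosis records of a confirmed cohort and then scoring undiagnosed patients against it, might look like the following in code; the features, cohort sizes, and flagging threshold are all hypothetical.

```python
# Illustrative sketch only; not OM1's PhenOM implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical pre-diagnosis features (code counts, labs, visit patterns) for a
# cohort of confirmed rare-disease patients (label 1) and matched controls (0).
X_cohort = rng.normal(size=(2000, 20))
y_cohort = (X_cohort[:, 0] + X_cohort[:, 3] + rng.normal(size=2000) > 1).astype(int)

# The fitted model stands in for the disease "fingerprint."
fingerprint = LogisticRegression(max_iter=1000).fit(X_cohort, y_cohort)

# Score undiagnosed patients in a new dataset against that fingerprint.
X_undiagnosed = rng.normal(size=(500, 20))
match_likelihood = fingerprint.predict_proba(X_undiagnosed)[:, 1]

# Flag patients highly likely to match (the 0.9 threshold is hypothetical).
flagged = np.flatnonzero(match_likelihood > 0.9)
print(f"{len(flagged)} of {len(match_likelihood)} patients flagged for review")
```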

Dermatology Times: How has OM1’s Patient Finder been able to improve data collection of undiagnosed conditions?

Zabinski: Patient Finder works hand in hand with some of the other efforts that we work on at OM1 to gather data, improve the quality of data, and, ultimately, give us insight into where patients with rare diseases are, how they might more quickly be diagnosed, and then, if appropriate, be offered treatment for their condition. The first of those ways is to use Patient Finder’s ability to look at a dataset and ask, “Who are the patients hiding below the surface of this dataset who may have the condition we are interested in finding, and how big is that population?” It is often the case, at least if we are talking about a rare disease with dedicated diagnostic coding, that we can find some patients in a dataset. However, we also have a hypothesis that there are others, as I describe them, beneath the surface who have similar clinical characteristics in their background, but who are not yet labeled with a code that lets us easily filter the dataset to find them.

Patient Finder can look at a dataset and say, “In addition to the few thousand people that you found, here are another 500 who are highly likely to also belong in that population.” That capability can do useful things such as giving us a better sense of true disease prevalence. Sometimes this is useful in a clinical trial context. Recruitment is a significant challenge in trials, and the ability to point to places where patients who may have been missed are concentrated can be helpful for recruitment efforts and figuring out how to optimize site selection.
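As a toy illustration of the arithmetic involved (all counts are invented), surfacing model-flagged patients alongside coded ones revises the apparent prevalence upward:

```python
# Toy arithmetic: combining coded (diagnosed) patients with model-flagged
# likely cases to revise an apparent prevalence estimate. Counts are invented.
population = 1_000_000          # patients in the dataset
coded_cases = 3_000             # patients carrying the dedicated diagnosis code
flagged_cases = 500             # additional patients the model says likely belong

apparent_prevalence = coded_cases / population
estimated_prevalence = (coded_cases + flagged_cases) / population

print(f"apparent:  {apparent_prevalence:.4%}")   # 0.3000%
print(f"estimated: {estimated_prevalence:.4%}")  # 0.3500%
```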

The other way that Patient Finder helps with understanding and improving data gathering and processing for rare diseases is in the context of more proactive data gathering. This is another concept we spend a lot of time on at OM1: constructing digital registries, which can be interesting in the rare disease context. Patient Finder can help us by saying, “As the patient moves along in their journey, it is possible that early on, they have some indications that they may be on the path to a certain outcome, a certain progression, or a certain diagnosis.” We may not be sure yet, but Patient Finder can give us a sense, as they progress, of how their likelihood is changing. Is their path converging toward something of interest to us with respect to the clinical outcome? Or is it bending away from that? The ability of Patient Finder to say, “At this moment in time, for this data stream I am looking at, these patients look similar to the target group that I care about” is quite a powerful application in enhancing data availability for patients with rare diseases.
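A minimal sketch of that longitudinal idea, assuming a fixed set of learned “fingerprint” weights and a handful of hypothetical feature snapshots, might look like this; a rising score suggests the patient is converging toward the target phenotype.

```python
# Sketch of monitoring how a patient's match likelihood evolves over time as
# new data arrives; the weights and feature snapshots are hypothetical.
import numpy as np

def match_likelihood(features: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Logistic score of a feature snapshot against a fixed 'fingerprint'."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

weights = np.array([0.8, 1.2, -0.4, 0.6])   # learned elsewhere (hypothetical)
bias = -2.0

# Cumulative feature snapshots for one patient at four points in their journey.
snapshots = [
    np.array([0.0, 0.0, 1.0, 0.0]),
    np.array([1.0, 0.0, 1.0, 0.0]),
    np.array([1.0, 1.0, 1.0, 1.0]),
    np.array([2.0, 1.0, 0.0, 1.0]),
]

for t, snap in enumerate(snapshots):
    print(f"time point {t}: likelihood = {match_likelihood(snap, weights, bias):.2f}")
# A rising series suggests the patient is converging toward the target
# phenotype; a flat or falling series suggests the path is bending away.
```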

Dermatology Times: With Rare Disease Day in mind, what is the role of AI in improving the lives and outcomes of patients with rare diseases such as GPP?

Zabinski: There are a couple of things that I have learned in my career by working with people impacted by rare diseases and trying to understand them better to develop treatments. First is that each rare disease is unique. Of course, there are relationships among various rare diseases, but with somewhere close to 7000 rare diseases, they remain remarkably different from one another. This is true clinically and is also just as true at the data level when we are trying to understand what is going on with patients in the data. It is true with AI as well; just because you have a model that does well in one rare disease does not mean that the same thing will work well in another rare disease. That is why our whole digital phenotyping approach is designed to go one level deeper than just point-solution modeling. It is designed to say, “We can reflect what is true about the patient and their data characteristics.” Then we can use that greater flexibility to understand different rare diseases through pattern matching, again just like we might first understand a patient’s genetics, and then look for specific disease-associated mutations. I do think that both AI and data can be quite helpful with better understanding rare disease patients’ trajectories and ultimately getting them better treatment if we can be more precise in distinguishing them from everyone else.

This is one of my personal areas of focus: this notion that rare disease patients so often get lost in larger disease populations. They may have certain symptoms of a rare disease and go to see their primary care provider, have some testing done, and eventually receive negative results, so they go back to their primary care provider, or they go from specialist to specialist without answers. That kind of pinballing around can take years from patients’ lives. If we can gain good visibility into what a patient’s journey looks like and use AI to point out patients who are somewhere along that journey, we can be much more impactful: shortening that long, drawn-out process and getting patients at least to the point of being treated by a specialist and having access to available treatments much more quickly.

Dermatology Times: AI is a constant buzzword in medicine and in media. How can clinicians trust OM1’s Patient Finder and other tools as another AI platform?

Zabinski: AI is certainly a buzzword, and it’s not always positive. If we read the popular press headlines these days, most are about problems with AI. They are about privacy violations and instances where AI has made incorrect predictions that have resulted in bad real-world consequences. This is one reason we are much more conservative in health care with respect to using AI than in some other industries. That said, there is a real path toward trust with AI tools. If we can establish that trust, AI tools can accomplish impressive improvements even in the world as it is today.

The first thing I focus on is contextualizing what AI can and cannot do. In a clinical treatment context, AI is a helper tool. It is not a decision maker that replaces clinicians. It is something that should provide additional insight and a more personalized view for providers at the point of care about the patient sitting in front of them. AI can say things such as, “This patient looks like they are at an elevated risk of having XYZ condition, or they may be more likely to respond to this treatment or more likely to have a negative reaction,” and you may want to consider those findings in your conversation with the patient. That is the appropriate application for AI tools today.

The other consideration, and this is something we have built into Patient Finder and the underlying PhenOM technology, is the ability to explain why AI says what it does. Many people have experimented with ChatGPT and other large language model tools. The kinds of answers these tools generate are amazing, but how they produce those answers is mysterious, and I do not think it is obvious, even for experts in many cases, how a model came up with the output it produced in any specific instance. We focus on explainability for our tools like Patient Finder, by which we mean the ability to sit down with clinical experts who know the disease but do not know the AI part and say, “These are the factors that pointed the model in this direction.”

It is not the same thing as saying there is a smoking gun, that if the patient had a twitching left eyelid and a rash on their thumb, then they have this very specific condition. It is not that simple or straightforward; that is another myth that gets caught up in the AI hype. However, when you can have a conversation and say, “This is what this patient’s phenotypic profile looks like. This is how it is different from others, and that is why the model pulled this patient out of the broader pool,” I have found that to be quite an effective way of building trust with clinicians and helping them see how AI can integrate into their practice.
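One simple, generic way to surface “the factors that pointed the model in this direction” is to decompose a linear model’s score into per-feature contributions, as sketched below; this is illustrative only, not how PhenOM computes explanations, and the feature names and values are hypothetical.

```python
# Illustrative only: rank the per-feature contributions behind one flagged
# patient's score from a linear model. Names, weights, and values are invented.
import numpy as np

feature_names = ["prior_rash_codes", "steroid_prescriptions",
                 "er_visits_last_year", "abnormal_crp"]
weights = np.array([0.9, 1.4, 0.3, 1.1])     # learned coefficients (hypothetical)
patient = np.array([2.0, 1.0, 0.0, 1.0])     # one flagged patient's features

contributions = weights * patient
order = np.argsort(contributions)[::-1]      # largest contribution first

print("Why the model flagged this patient:")
for i in order:
    print(f"  {feature_names[i]:<24} contribution = {contributions[i]:+.2f}")
```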

Dermatology Times: Do you have any concluding thoughts?

Zabinski: I keep hearing that 2024 is the year that AI grows up. We will see if that is true; we can check in again in January 2025. But I think and I hope that there is an opportunity now, perhaps through the popularization of AI, for it to have a mature impact in the coming months and years, especially in areas that are often neglected, such as rare diseases.

[Transcript lightly edited for space and clarity.]