Chinese researchers claim to have developed an AI that can tell if patients have clogged arteries through their selfies. But can it really detect coronary artery disease in selfies? Things aren’t that simple – let’s take a closer look.
The study
Chinese researchers took 4 photos of each of the almost 6000 patients participating. Thus, it wasn’t really a selfie but a pretty thorough depiction of patients’ entire heads. After snapping some pics and getting a complete history, all patients underwent angiography. This test pretty definitively told researchers which patients had coronary artery disease (CAD, better known as clogged arteries).
Researchers then fed an AI algorithm the pictures of each patient along with the result of their test. In the end, they came up with a convolutional neural network that achieved 71% sensitivity and 72% specificity in detecting non-trivial CAD (defined as at least one vessel with >50% stenosis). This is not half bad for a selfie-based diagnostic method. The algorithm quickly made headlines. AI-philes rejoiced at this display of the power of our robot overlords. Skeptics, on the other hand, quickly thought up ways both the private and the public sector could use this to further limit our freedoms. Actuaries worldwide had wet dreams for days. But all of them may be cheering too soon; this AI algorithm is not as clever as the paper makes it out to be.
So I can’t upload my picture and get tested for CAD?
No, no you can’t. At least not reliably. The following graph may prove enlightening:
This is what the algorithm can do under ideal circumstances – where 50% of patients have at least one artery that is clogged by more than 50% (sort of a 50 over 50, but this one is lethal). Under this best-case scenario, the AI will correctly detect coronary artery disease in selfies 68% of the time; in other words, a positive result is right roughly two times out of three.
Real life proves more challenging. Finding the true prevalence of CAD in the population is tricky; finding out how many people out there have a >50% stenosis is even harder. Based on the most recent research on the subject, CAD is present in 24% of people aged 20-59. How many of those have >50% stenosis in at least one artery? That is an even harder question to answer. There's a simple way to "guesstimate" it, though: 77.4% of the study's participants had some degree of CAD – roughly 3.2 times the frequency in the general population. If we assume the same ratio holds for significant disease, then about 24/3.2 = 7.5% of people aged 20-59 have >50% stenosis.
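The guesstimate above fits in a few lines of Python. The 77.4% and 24% figures come from the text; the assumption that significant CAD is over-represented in the study by the same factor as any CAD is the hand-wavy part:

```python
# Rough prevalence guesstimate, using the figures quoted above.
study_any_cad = 0.774      # fraction of study participants with any CAD
population_any_cad = 0.24  # estimated CAD prevalence, ages 20-59

# The study population is enriched in CAD by this factor (~3.2x).
enrichment = study_any_cad / population_any_cad

# Assume significant (>50% stenosis) CAD is over-represented by the same
# factor, so scale the population prevalence down accordingly.
significant_cad_prevalence = population_any_cad / enrichment
print(f"{significant_cad_prevalence:.1%}")  # → 7.4%
```

(Rounding the enrichment factor to 3.2, as the text does, gives the 7.5% figure used below.)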
Letting the algorithm loose in this real-life population gives us this chart:
This is… not good. According to the calculations, the chance of actually having significant CAD after being told so by the algorithm is just 13.8%. On the upside, a negative result is correct 97% of the time.
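These post-test probabilities are straightforward Bayes' theorem. Here's a minimal sketch using the paper's 71% sensitivity and 72% specificity together with the prevalence guesstimate from above; the article's chart figures were presumably produced at a slightly different operating point, so the numbers won't match to the decimal, but the pattern is the same – a high negative predictive value and a dismal positive one:

```python
def post_test_probabilities(sensitivity, specificity, prevalence):
    """Return (PPV, NPV): the probability that a positive (resp. negative)
    test result is actually correct, given the disease prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Best-case screening population, 50% prevalence:
ppv, npv = post_test_probabilities(0.71, 0.72, 0.50)   # PPV ~0.72

# Realistic population, ~7.5% prevalence of significant CAD:
ppv, npv = post_test_probabilities(0.71, 0.72, 0.075)  # PPV ~0.17, NPV ~0.97
```

Note how the PPV collapses as prevalence drops while the NPV climbs: this is why a test with decent-looking sensitivity and specificity can still be nearly useless for screening a low-prevalence population.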
Without good data, AI isn’t smart
Like all computer processes, AI algorithms abide by the rule: Garbage In – Garbage Out. If you feed an AI data that are only marginally related to the desired outcome, it will never become a reliable diagnostic modality. Think about it: does a face contain information diagnostic of heart disease? Sometimes, but usually not.
In reality, this poor AI is clutching at straws. It's not hard for it to realize that men are at higher risk of CAD. So, all signs of manhood become signs of CAD in its eyes. Receding hairline? CAD! Older people are also more likely to have CAD. So… wrinkles? CAD! Age spots? CAD! If not supervised properly, this quickly becomes more modern-day palm reading than science. And that's without even mentioning that 95% of the patients were Han Chinese and 75% were male, severely limiting the algorithm's predictive capacity in other groups.
AI is a potent, but not an omni-potent, tool. We'll do a lot better when we stop pretending otherwise.