Another week, another AI. This time, a team of South Korean researchers trained an AI to detect heart failure. The only thing it needs: ECGs.
The AI
This deep learning AI is a neural network with many hidden layers, built to learn complex patterns from the simple data it received. It was trained on 32,671 ECGs from more than 20,000 patients, which is a jaw-dropping amount of data. No pre-processing was applied: all the AI got to work with were the raw ECG signals. The research team used Python and TensorFlow to develop the model. This is not surprising, as Python offers powerful tools both for medical signal processing and for deep learning development.
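The study's actual code isn't reproduced here, but to give a feel for what such a model looks like, here is a minimal TensorFlow sketch of a 1D convolutional network that reads raw ECG waveforms. Everything about it is an assumption for illustration: the input shape (a 10-second, single-lead ECG sampled at 500 Hz) and the layer sizes are ours, not the authors'.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Assumed input: a 10-second, single-lead ECG sampled at 500 Hz
# (5,000 samples). The study's architecture isn't public here;
# this is only an illustrative 1D CNN over raw, unprocessed signals.
SAMPLES, LEADS = 5000, 1

model = tf.keras.Sequential([
    layers.Input(shape=(SAMPLES, LEADS)),
    # Stacked 1D convolutions pick up waveform features (QRS morphology,
    # intervals) straight from the raw trace; no hand-crafted features.
    layers.Conv1D(16, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(pool_size=4),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=4),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    # One sigmoid unit: the predicted probability of heart failure.
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.summary()
```

Training would then be a single model.fit(ecgs, labels) call on arrays of shape (n_ecgs, 5000, 1), which is part of why building one of these models is so accessible.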
The results
As with most published AI models, the results of this particular one are impressive. The model yielded an Area Under the Curve (AUC) of 0.866; values above 0.85 generally indicate a very potent predictive model. The authors also tried to understand the AI's "thought" process. Their findings get very technical at this point. The skinny: learning which features the AI finds indicative of heart failure could help doctors interpret ECGs better on their own.
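For readers unfamiliar with the metric: the AUC is the probability that the model ranks a randomly chosen heart-failure ECG above a randomly chosen healthy one, so 0.866 means the model gets that ordering right roughly 87% of the time. Here is a tiny sketch with made-up labels and predictions (scikit-learn assumed; the study doesn't say how its AUC was computed):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Made-up hold-out set: 1 = heart failure, 0 = healthy, plus the model's
# predicted probability for each ECG. None of this is the study's data.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_prob = np.array([0.10, 0.35, 0.80, 0.40, 0.48, 0.91, 0.22, 0.55])

# AUC = fraction of (heart-failure, healthy) pairs the model ranks correctly.
print(f"AUC = {roc_auc_score(y_true, y_prob):.3f}")  # 0.938 on this toy data
```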
Too many models, not enough progress
At this stage, more and more papers find their way to publication describing awesome AI models that can diagnose even the most obscure maladies. The sad thing is that so far there has been no attempt to give the medical community access to them. These AIs are only used for research purposes; even then, no large prospective study examining their use in real-life circumstances has been completed.

There could be a reason for this: access to the tools needed to create an AI is simple – anyone with data and a powerful PC can whip up an AI in a matter of weeks. Access to the resources needed to design and execute a clinical trial? Now, that's a whole different level.

Here at Fantastrial we believe that very few of the AIs that make the papers will actually get tested and launched for the general public. COVID-19 will only make things worse for clinical trials. This is not a positive development, as this Wild West of AIs in medicine could create a double calamity: unsafe and untested AIs entering clinical use via risk-taking physicians, and at the same time slow adoption of AI by everyone else due to (somewhat grounded) safety concerns. Regulatory organizations need to pick up the slack.