Researchers have developed an Artificial Intelligence (AI)-based system to predict the risk of early deaths due to chronic disease in middle-aged adults.
The study, published in the journal PLOS ONE, found that the machine learning models known as “random forest” and “deep learning” were highly accurate in their predictions and outperformed the current standard approach to prediction developed by human experts.
The new risk prediction models take into account demographic, biometric, clinical and lifestyle factors for each individual, even assessing daily dietary consumption of fruit, vegetables and meat, said Stephen Weng, Assistant Professor at the University of Nottingham in Britain.
The traditionally used “Cox regression” prediction model, based on age and gender, was found to be the least accurate at predicting mortality; a multivariate Cox model performed better but tended to over-predict risk.
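The contrast between the two approaches can be sketched roughly in code. The example below is purely illustrative and is not the study's actual method: it uses synthetic data, a logistic regression on age and sex as a crude stand-in for the age-and-gender Cox baseline (real survival analysis would fit a proper Cox model), and a random forest given additional clinical and lifestyle features.

```python
# Illustrative sketch only, with synthetic data: a baseline model using
# just age and sex versus a random forest that also sees lifestyle factors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(40, 69, n)          # study cohort was aged 40-69
sex = rng.integers(0, 2, n)
bmi = rng.normal(27, 4, n)
fruit_veg = rng.integers(0, 8, n)     # portions per day (hypothetical feature)
smoker = rng.integers(0, 2, n)

# Synthetic mortality outcome driven by several factors, not just age and sex
logit = (0.08 * (age - 55) + 0.3 * smoker + 0.05 * (bmi - 27)
         - 0.1 * fruit_veg + 0.2 * sex - 1.0)
died = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_base = np.column_stack([age, sex])
X_full = np.column_stack([age, sex, bmi, fruit_veg, smoker])
Xb_tr, Xb_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_base, X_full, died, test_size=0.3, random_state=0)

baseline = LogisticRegression().fit(Xb_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xf_tr, y_tr)

print("baseline accuracy:", baseline.score(Xb_te, y_te))
print("forest accuracy:  ", forest.score(Xf_te, y_te))
```

The point of the sketch is simply that the richer feature set gives the forest information the age-and-sex baseline cannot use.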
“Preventative healthcare is a growing priority in the fight against serious diseases so we have been working for a number of years to improve the accuracy of computerised health risk assessment in the general population,” said Weng.
For the study, the team analysed data from over half a million people aged between 40 and 69.
Although these techniques could be new to many in health research and difficult to follow, clearly reporting these methods in a transparent way could help with scientific verification and future development of AI for health care, said Joe Kai, Professor at the university. (IANS)
In a ray of hope for women undergoing breast cancer screening, and for healthy women who receive false alarms during digital mammography, an Artificial Intelligence (AI)-based Google model has outperformed radiologists in spotting breast cancer simply by scanning X-ray results.
Reading mammograms is a difficult task, even for experts, and can often result in both false positives and false negatives.
In turn, these inaccuracies can lead to delays in detection and treatment, unnecessary stress for patients and a higher workload for radiologists who are already in short supply, Google said in a blog post on Wednesday.
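The two error types mentioned above can be made concrete with a toy example. The data below are invented for illustration only: a false positive is a healthy woman flagged as having cancer, and a false negative is a cancer that goes unflagged.

```python
# Toy illustration of false positives and false negatives in screening.
# 1 = cancer present / flagged, 0 = absent / not flagged. Data are made up.
truth     = [0, 0, 1, 1, 0, 1, 0, 0]   # actual outcome per mammogram
predicted = [0, 1, 1, 0, 0, 1, 0, 1]   # reader's (or model's) call

false_positives = sum(p == 1 and t == 0 for p, t in zip(predicted, truth))
false_negatives = sum(p == 0 and t == 1 for p, t in zip(predicted, truth))

print(false_positives)  # 2: healthy women given a false alarm
print(false_negatives)  # 1: a cancer missed, delaying detection
```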
Google’s AI model spotted breast cancer in de-identified screening mammograms (where identifiable information has been removed) with greater accuracy, fewer false positives and fewer false negatives than experts.
“This sets the stage for future applications where the model could potentially support radiologists performing breast cancer screenings,” said Shravya Shetty, Technical Lead, Google Health.
Digital mammography, or X-ray imaging of the breast, is the most common method of screening for breast cancer, with over 42 million exams performed each year in the US and the UK combined.
“But despite the wide usage of digital mammography, spotting and diagnosing breast cancer early remains a challenge,” said Daniel Tse, Product Manager, Google Health.
Together with colleagues at DeepMind, Cancer Research UK Imperial Centre, Northwestern University and Royal Surrey County Hospital, Google set out to see if AI could support radiologists to spot the signs of breast cancer more accurately.
The findings, published in the journal Nature, showed that AI could improve the detection of breast cancer.
Google’s AI model was trained and tuned on a representative data set comprising de-identified mammograms from more than 76,000 women in the UK and more than 15,000 women in the US, to see if it could learn to spot signs of breast cancer in the scans.
The model was then evaluated on a separate de-identified data set of more than 25,000 women in the UK and over 3,000 women in the US.
“In this evaluation, our system produced a 5.7 per cent reduction of false positives in the US, and a 1.2 per cent reduction in the UK. It produced a 9.4 per cent reduction in false negatives in the US, and a 2.7 per cent reduction in the UK,” Google said.
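To get a feel for what such a figure means at screening scale, here is a back-of-the-envelope sketch. The baseline false-positive rate and exam count below are hypothetical, not figures from the study, and the arithmetic assumes the reported 5.7 per cent is an absolute percentage-point reduction; the precise definition is in the Nature paper.

```python
# Illustrative arithmetic only: what a percentage-point reduction in the
# false-positive rate could mean across many screening exams.
exams = 100_000                 # hypothetical number of screening exams
baseline_fp_rate = 0.10         # hypothetical baseline false-alarm rate
reduction_points = 0.057        # the reported 5.7 per cent reduction (US)

fp_before = exams * baseline_fp_rate
fp_after = exams * (baseline_fp_rate - reduction_points)
print(round(fp_before - fp_after))  # false alarms avoided in this scenario
```

Even a few percentage points, applied across tens of millions of exams a year, would translate into thousands of women spared an unnecessary recall.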
The researchers then trained the AI model only on the data from the women in the UK and then evaluated it on the data set from women in the US.
In this separate experiment, there was a 3.5 per cent reduction in false positives and an 8.1 per cent reduction in false negatives, “showing the model’s potential to generalize to new clinical settings while still performing at a higher level than experts”.
Notably, when making its decisions, the model received less information than human experts did.
The human experts (in line with routine practice) had access to patient histories and prior mammograms, while the model only processed the most recent anonymized mammogram with no extra information.