US chipmaker Intel Corp. has said that it will focus on “edge computing” that could hold the key to the success of artificial intelligence (AI) in the future.
Edge computing refers to the practice of storing and processing data on computers located near cell towers and other network equipment to improve network response times. It differs from today's cloud-based systems, in which information is sent to a distant data centre, Yonhap news agency reported on Wednesday.
“Forty-three per cent of AI tasks will be handled by edge computing in 2023,” Kwon Myung-sook, CEO of Intel Korea, said in a statement during a forum in Seoul.
“AI devices empowered with edge function will jump 15-fold.”
The expansion of computing at the edge is an important growth opportunity for the chip giant — an estimated $65 billion market by 2023, Intel said.
More AI is being incorporated into edge devices, from Internet of Things (IoT) devices to smartphones, as AI algorithms improve, according to the company.
“Innovation in edge computing has become necessary where data is most produced,” Kwon said. “It is why Intel is preparing a platform solution that can cover both hardware and software.”
Intel said AI will support and provide new services in eight key industries, including smart cities, robots and gaming.
In promising news for women undergoing breast cancer screening, and even for healthy women who receive false alarms during digital mammography, an Artificial Intelligence (AI)-based Google model has outperformed radiologists in spotting breast cancer simply by scanning X-ray results.
Reading mammograms is a difficult task, even for experts, and can often result in both false positives and false negatives.
In turn, these inaccuracies can lead to delays in detection and treatment, unnecessary stress for patients and a higher workload for radiologists who are already in short supply, Google said in a blog post on Wednesday.
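As background to the error types mentioned above, a false positive flags a healthy patient as having cancer, while a false negative misses a real cancer. The sketch below is purely illustrative (it is not Google's evaluation code, and all counts are invented) and shows how the two error rates are computed from screening outcomes:

```python
def screening_error_rates(tp, fp, tn, fn):
    """Return (false positive rate, false negative rate) for a screen.

    tp/fp/tn/fn: counts of true positives, false positives,
    true negatives and false negatives.
    """
    fpr = fp / (fp + tn)  # share of healthy patients incorrectly flagged
    fnr = fn / (fn + tp)  # share of real cancers the screen missed
    return fpr, fnr

# Hypothetical counts for 10,000 screens (made up for demonstration)
fpr, fnr = screening_error_rates(tp=80, fp=570, tn=9330, fn=20)
print(f"False positive rate: {fpr:.1%}")
print(f"False negative rate: {fnr:.1%}")
```

Reducing either rate, as the results below report, means either fewer healthy women recalled for unnecessary follow-up or fewer cancers slipping through undetected.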
Google’s AI model spotted breast cancer in de-identified screening mammograms (where identifiable information has been removed) with greater accuracy, fewer false positives and fewer false negatives than experts.
“This sets the stage for future applications where the model could potentially support radiologists performing breast cancer screenings,” said Shravya Shetty, Technical Lead, Google Health.
Digital mammography, or X-ray imaging of the breast, is the most common method of screening for breast cancer, with over 42 million exams performed each year in the US and the UK combined.
“But despite the wide usage of digital mammography, spotting and diagnosing breast cancer early remains a challenge,” said Daniel Tse, Product Manager, Google Health.
Together with colleagues at DeepMind, Cancer Research UK Imperial Centre, Northwestern University and Royal Surrey County Hospital, Google set out to see if AI could support radiologists to spot the signs of breast cancer more accurately.
The findings, published in the journal Nature, showed that AI could improve the detection of breast cancer.
Google's AI model was trained and tuned on a representative data set of de-identified mammograms from more than 76,000 women in the UK and more than 15,000 women in the US, to see if it could learn to spot signs of breast cancer in the scans.
The model was then evaluated on a separate de-identified data set of more than 25,000 women in the UK and over 3,000 women in the US.
“In this evaluation, our system produced a 5.7 per cent reduction of false positives in the US, and a 1.2 per cent reduction in the UK. It produced a 9.4 per cent reduction in false negatives in the US, and a 2.7 per cent reduction in the UK,” Google said.
The researchers then trained the AI model only on the data from the women in the UK and then evaluated it on the data set from women in the US.
In this separate experiment, there was a 3.5 per cent reduction in false positives and an 8.1 per cent reduction in false negatives, “showing the model’s potential to generalize to new clinical settings while still performing at a higher level than experts”.
Notably, when making its decisions, the model received less information than human experts did.
The human experts (in line with routine practice) had access to patient histories and prior mammograms, while the model only processed the most recent anonymized mammogram with no extra information.