Tech major IBM and Symrise, one of the top global producers of flavours and fragrances, have created the industry’s first Artificial Intelligence (AI)-designed perfumes for commercial sale.
The AI-based system named “Philyra” can learn about perfume formulas, raw materials, historical success data and industry trends, IBM Research said in a statement late on Saturday.
“Building on previous IBM research using AI to pair flavours and for recipe creation, as well as our new IBM Research AI for Product Composition, we created Philyra,” said Richard Goodwin, Principal Research Scientist, IBM Research.
The AI tool uses new and advanced Machine Learning (ML) algorithms to sift through hundreds of thousands of formulas and thousands of raw materials, helping identify patterns and novel combinations.
“Philyra does more than serve up inspiration – it can design entirely new fragrance formulas by exploring the entire landscape of fragrance combinations to discover the whitespaces in the global fragrance market,” Goodwin added.
When designing a new perfume, “Philyra” learns a distance model to measure how close a candidate fragrance smells to existing ones.
The larger the distance between a fragrance and its neighbours, the more novel the perfume is predicted to be.
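The article does not describe Philyra's internals, which are proprietary. As a rough illustration of the nearest-neighbour idea it describes, the sketch below assumes each fragrance formula is a vector of raw-material concentrations and scores novelty as the distance to the closest known formula; all names and numbers here are made up.

```python
import math

def novelty_score(candidate, existing):
    """Euclidean distance from `candidate` to its nearest neighbour
    among the known formulas in `existing` (higher = more novel)."""
    return min(math.dist(candidate, formula) for formula in existing)

# Three known formulas over four raw materials (made-up numbers).
known = [
    (0.5, 0.20, 0.30, 0.0),
    (0.4, 0.30, 0.30, 0.0),
    (0.1, 0.10, 0.20, 0.6),
]

close_candidate = (0.5, 0.25, 0.25, 0.0)  # similar to the first formula
novel_candidate = (0.0, 0.70, 0.00, 0.3)  # unlike anything in `known`

# A larger nearest-neighbour distance predicts a more novel perfume.
print(novelty_score(close_candidate, known))
print(novelty_score(novel_candidate, known))
```

Under this reading, a formula sitting in a sparse region of the formula space, far from every known fragrance, is exactly the "whitespace" the system is said to look for.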
Symrise has used “Philyra” to design two perfumes, scheduled to launch in mid-2019.
Symrise’s long-term goal is to introduce this technology to its master perfumers around the globe and to continue using the solution to design fragrances for personal care and home care products.
In promising news for women undergoing breast cancer screening, including healthy women who receive false alarms during digital mammography, an Artificial Intelligence (AI)-based Google model has outperformed radiologists at spotting breast cancer from X-ray scans alone.
Reading mammograms is a difficult task, even for experts, and can often result in both false positives and false negatives.
In turn, these inaccuracies can lead to delays in detection and treatment, unnecessary stress for patients and a higher workload for radiologists who are already in short supply, Google said in a blog post on Wednesday.
Google’s AI model spotted breast cancer in de-identified screening mammograms (where identifiable information has been removed) with greater accuracy, fewer false positives and fewer false negatives than experts.
“This sets the stage for future applications where the model could potentially support radiologists performing breast cancer screenings,” said Shravya Shetty, Technical Lead, Google Health.
Digital mammography, or X-ray imaging of the breast, is the most common method of screening for breast cancer, with over 42 million exams performed each year in the US and the UK combined.
“But despite the wide usage of digital mammography, spotting and diagnosing breast cancer early remains a challenge,” said Daniel Tse, Product Manager, Google Health.
Together with colleagues at DeepMind, Cancer Research UK Imperial Centre, Northwestern University and Royal Surrey County Hospital, Google set out to see if AI could support radiologists to spot the signs of breast cancer more accurately.
The findings, published in the journal Nature, showed that AI could improve the detection of breast cancer.
Google’s AI model was trained and tuned on a representative data set comprising de-identified mammograms from more than 76,000 women in the UK and more than 15,000 women in the US, to see if it could learn to spot signs of breast cancer in the scans.
The model was then evaluated on a separate de-identified data set of more than 25,000 women in the UK and over 3,000 women in the US.
“In this evaluation, our system produced a 5.7 per cent reduction of false positives in the US, and a 1.2 per cent reduction in the UK. It produced a 9.4 per cent reduction in false negatives in the US, and a 2.7 per cent reduction in the UK,” Google said.
The researchers then trained the AI model only on the data from the women in the UK and then evaluated it on the data set from women in the US.
In this separate experiment, there was a 3.5 per cent reduction in false positives and an 8.1 per cent reduction in false negatives, “showing the model’s potential to generalize to new clinical settings while still performing at a higher level than experts”.
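False-positive and false-negative rates like those quoted above are derived from a standard confusion matrix; a "reduction" is then the difference between two readers' rates. The sketch below illustrates that arithmetic with entirely made-up counts; these numbers do not come from the Google study, and how the study defined its reductions is specified in the Nature paper, not here.

```python
# Hypothetical screening counts, for illustration only.
def error_rates(tp, fp, tn, fn):
    """Return (false_positive_rate, false_negative_rate) from raw counts."""
    fpr = fp / (fp + tn)  # healthy patients incorrectly flagged
    fnr = fn / (fn + tp)  # cancers the reader missed
    return fpr, fnr

# Reader A (e.g. human experts) vs reader B (e.g. a model) on the same scans.
fpr_a, fnr_a = error_rates(tp=80, fp=120, tn=880, fn=20)
fpr_b, fnr_b = error_rates(tp=88, fp=70, tn=930, fn=12)

# One common reading: the absolute difference in rates, in percentage points.
print(f"false-positive reduction: {100 * (fpr_a - fpr_b):.1f} points")
print(f"false-negative reduction: {100 * (fnr_a - fnr_b):.1f} points")
```

The trade-off the article highlights is visible here: fewer false positives means less unnecessary stress and follow-up for healthy women, while fewer false negatives means fewer missed cancers.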
Notably, when making its decisions, the model received less information than human experts did.
The human experts (in line with routine practice) had access to patient histories and prior mammograms, while the model only processed the most recent anonymized mammogram with no extra information.