Saturday September 21, 2019

AI Method Can Help Treat Brain Tumours: Study

New and precisely validated treatment approaches are urgently needed, the team noted


Researchers have developed an artificial intelligence-based (AI) method for analysis of brain tumours, paving the way for individualised treatment of tumours.

According to the study, published in The Lancet Oncology, machine learning methods carefully trained on standard magnetic resonance imaging (MRI) scans are more reliable and precise than established radiological methods in the treatment of gliomas.

Glioma, a type of tumour that occurs in the brain and spinal cord, is among the most common and most malignant brain tumours in adults.

“With this study, we were able to demonstrate the potential of artificial neural networks in radiological diagnostics,” said Philipp Kickingereder of Heidelberg University in Germany.


For the study, the team included 500 brain tumour patients. Using a reference database of patients’ MRI scans, artificial neural networks automatically recognised and localised the brain tumours.

The algorithms were also able to volumetrically measure the individual regions (the contrast-enhancing tumour portion and the peritumoral edema).
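The volumetry step can be illustrated with a short, hypothetical sketch: given a segmentation mask produced by a neural network, the volume of each region is the number of labelled voxels multiplied by the voxel volume. The label values and voxel size below are illustrative assumptions, not details from the study.

```python
# Hypothetical sketch of region volumetry from a segmentation mask.
# Label values (1 = contrast-enhancing tumour, 2 = peritumoral edema)
# and the voxel size are illustrative assumptions, not the study's own.
import numpy as np

def region_volumes_ml(mask, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Return the volume in millilitres of each labelled region."""
    voxel_volume_mm3 = float(np.prod(voxel_size_mm))
    labels = {1: "contrast_enhancing_tumour", 2: "peritumoral_edema"}
    return {
        name: int(np.count_nonzero(mask == label)) * voxel_volume_mm3 / 1000.0
        for label, name in labels.items()
    }

# Toy example; a real mask would come from the network's segmentation of an MRI scan.
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[15:35, 15:35, 15:35] = 2   # pretend peritumoral edema
mask[20:30, 20:30, 20:30] = 1   # pretend contrast-enhancing core inside the edema
print(region_volumes_ml(mask))  # {'contrast_enhancing_tumour': 1.0, 'peritumoral_edema': 7.0}
```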


“We want to advance the technology for automated high-throughput analysis of medical image data and transfer it not only to brain tumours but also to other diseases like brain metastases or multiple sclerosis,” said Klaus Maier-Hein of the university.

Glioma tumours often cannot be completely removed by surgery. Chemotherapy and radiotherapy are only effective to a limited extent because the tumours are highly resistant. Therefore, new and precisely validated treatment approaches are urgently needed, the team noted. (IANS)


Fake Accounts On Social Media Now Able To Copy Human Behaviour

Fake accounts enabled by Artificial Intelligence (AI) on social media have evolved and are now able to copy human behaviour


Researchers, including one of Indian origin, have found that bots, or fake accounts enabled by Artificial Intelligence (AI), on social media have evolved and are now able to copy human behaviour to avoid detection.

For the study, published in the journal First Monday, the research team from the University of Southern California examined bot behaviour during the 2018 US midterm elections and compared it with bot behaviour during the 2016 US presidential election.

“Our study further corroborates the idea that there is an arms race between bots and detection algorithms. As social media companies put more effort into mitigating abuse and stifling automated accounts, bots evolve to mimic human strategies. Advancements in AI enable bots to produce more human-like content,” said study lead author Emilio Ferrara.


The researchers studied almost 250,000 active social media users who discussed the US elections in both 2016 and 2018, and detected over 30,000 bots.

They found that bots in 2016 were primarily focused on retweets and high volumes of tweets around the same message.

However, as human social activity online has evolved, so have bots. In the 2018 election season, just as humans were less likely to retweet as much as they did in 2016, bots were less likely to share the same messages in high volume.

Bots, the researchers discovered, were more likely to employ a multi-bot approach as if to mimic authentic human engagement around an idea.
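As a rough illustration of the behavioural signals described above, the hypothetical sketch below computes two simple per-account features: the share of retweets and the share of repeated identical messages. The tweet schema and field names are assumptions for the example, not the study’s actual features or pipeline.

```python
# Hypothetical sketch: simple per-account behavioural features of the kind
# discussed above (retweet share, repetition of identical messages).
# The tweet schema ('text', 'is_retweet') is an assumption for illustration.
from collections import Counter

def account_features(tweets):
    n = len(tweets)
    if n == 0:
        return {"retweet_ratio": 0.0, "duplicate_ratio": 0.0}
    retweets = sum(1 for t in tweets if t["is_retweet"])
    counts = Counter(t["text"] for t in tweets)
    duplicates = sum(c for c in counts.values() if c > 1)  # tweets whose text is repeated
    return {
        "retweet_ratio": retweets / n,      # high values resemble 2016-style bot activity
        "duplicate_ratio": duplicates / n,  # identical messages posted in volume
    }

# Toy example
sample = [
    {"text": "Vote now!", "is_retweet": True},
    {"text": "Vote now!", "is_retweet": True},
    {"text": "Interesting debate tonight", "is_retweet": False},
]
print(account_features(sample))  # both ratios are 2/3 for this toy account
```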


Also, during the 2018 elections, as humans became much more likely to engage through replies, bots tried to establish a voice, add to the dialogue and engage through polls, a strategy typical of reputable news agencies and pollsters, possibly aimed at lending legitimacy to these accounts.

In one example, a bot account posted an online Twitter poll asking if federal elections should require voters to show ID at the polls. It then asked Twitter users to vote and retweet.

“We need to devote more efforts to understand how bots evolve and how more sophisticated ones can be detected. With the upcoming 2020 US elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of external influences,” Ferrara said. (IANS)