Monday April 22, 2019

AI Method Can Help Treat Brain Tumours: Study

Therefore, new and precisely validated treatment approaches are urgently needed, the team noted


Researchers have developed an artificial intelligence (AI)-based method for analysing brain tumours, paving the way for individualised treatment.

According to the study, published in The Lancet Oncology, machine learning methods carefully trained on standard magnetic resonance imaging (MRI) are more reliable and precise than established radiological methods in assessing the treatment of gliomas.

Glioma, a type of tumour that occurs in the brain and spinal cord, is among the most common and most malignant brain tumours in adults.

“With this study, we were able to demonstrate the potential of artificial neural networks in radiological diagnostics,” said Philipp Kickingereder from Heidelberg University in Germany.


The study included 500 brain tumour patients. Using a reference database of patients’ MRI scans, artificial neural networks automatically recognised and localised the brain tumours.

The algorithms could also volumetrically measure individual tumour regions (the contrast-enhancing tumour portion and the peritumoral edema).


“We want to advance the technology for automated high-throughput analysis of medical image data and transfer it not only to brain tumours but also to other diseases like brain metastases or multiple sclerosis,” said Klaus Maier-Hein of Heidelberg University.

Glioma tumours often cannot be completely removed by surgery, and chemotherapy and radiotherapy are only effective to a limited extent because the tumours are highly resistant. Therefore, new and precisely validated treatment approaches are urgently needed, the team noted. (IANS)


Microsoft Rejects California Law Enforcement Agency’s Request To Install Facial Recognition in Officers’ Cars

On the other hand, Microsoft did agree to provide the technology to an American prison, after the company concluded that the environment would be limited and that it would improve safety inside the unnamed institution.

Brad Smith of Microsoft takes part in a panel discussion "Cyber, big data and new technologies. Current Internet Governance Challenges: What's Next?" at the United Nations in Geneva, Nov. 9, 2017. VOA

Microsoft recently rejected a California law enforcement agency’s request to install facial recognition technology in officers’ cars and body cameras because of human rights concerns, company President Brad Smith said Tuesday.

Microsoft concluded that the system would lead to innocent women and minorities being disproportionately held for questioning, because the artificial intelligence had been trained mostly on pictures of white men.

Multiple research projects have found that facial recognition AI misidentifies women and minorities more often than white men.

“Anytime they pulled anyone over, they wanted to run a face scan” against a database of suspects, Smith said without naming the agency. After thinking through the uneven impact, “we said this technology is not your answer.”


Prison contract accepted

Speaking at a Stanford University conference on “human-centered artificial intelligence,” Smith said Microsoft had also declined a deal to install facial recognition on cameras blanketing the capital city of an unnamed country that the nonprofit Freedom House had deemed not free. Smith said it would have suppressed freedom of assembly there.

On the other hand, Microsoft did agree to provide the technology to an American prison, after the company concluded that the environment would be limited and that it would improve safety inside the unnamed institution.

Smith explained the decisions as part of a commitment to human rights that he said was increasingly critical as rapid technological advances empower governments to conduct blanket surveillance, deploy autonomous weapons and take other steps that might prove impossible to reverse.

‘Race to the bottom’

Microsoft said in December it would be open about shortcomings in its facial recognition and asked customers to be transparent about how they intended to use it, while stopping short of ruling out sales to police.

Smith has called for greater regulation of facial recognition and other uses of artificial intelligence, and he warned Tuesday that without that, companies amassing the most data might win the race to develop the best AI in a “race to the bottom.”


He shared the stage with the United Nations High Commissioner for Human Rights, Michelle Bachelet, who urged tech companies to refrain from building new tools without weighing their impact.


“Please embody the human rights approach when you are developing technology,” said Bachelet, a former president of Chile.

Microsoft spokesman Frank Shaw declined to name the prospective customers the company turned down. (VOA)