Saturday January 18, 2020

Microsoft Rejects California Law Enforcement Agency’s Request To Install Facial Recognition in Officers’ Cars


Brad Smith of Microsoft takes part in a panel discussion "Cyber, big data and new technologies. Current Internet Governance Challenges: What's Next?" at the United Nations in Geneva, Nov. 9, 2017. VOA

Microsoft recently rejected a California law enforcement agency’s request to install facial recognition technology in officers’ cars and body cameras because of human rights concerns, company President Brad Smith said Tuesday.

Microsoft concluded the technology would lead to innocent women and minorities being disproportionately held for questioning, because the artificial intelligence had been trained mostly on pictures of white men.

Facial recognition AI misidentifies women and minorities more often, multiple research projects have found.

“Anytime they pulled anyone over, they wanted to run a face scan” against a database of suspects, Smith said without naming the agency. After thinking through the uneven impact, “we said this technology is not your answer.”


Prison contract accepted

Speaking at a Stanford University conference on “human-centered artificial intelligence,” Smith said Microsoft had also declined a deal to install facial recognition on cameras blanketing the capital city of an unnamed country that the nonprofit Freedom House had deemed not free. Smith said it would have suppressed freedom of assembly there.

On the other hand, Microsoft did agree to provide the technology to an American prison, after the company concluded that the environment would be limited and that it would improve safety inside the unnamed institution.

Smith explained the decisions as part of a commitment to human rights that he said was increasingly critical as rapid technological advances empower governments to conduct blanket surveillance, deploy autonomous weapons and take other steps that might prove impossible to reverse.

‘Race to the bottom’

Microsoft said in December it would be open about shortcomings in its facial recognition and asked customers to be transparent about how they intended to use it, while stopping short of ruling out sales to police.

Smith has called for greater regulation of facial recognition and other uses of artificial intelligence, and he warned Tuesday that without that, companies amassing the most data might win the race to develop the best AI in a “race to the bottom.”


He shared the stage with the United Nations High Commissioner for Human Rights, Michelle Bachelet, who urged tech companies to refrain from building new tools without weighing their impact.


“Please embody the human rights approach when you are developing technology,” said Bachelet, a former president of Chile.

Microsoft spokesman Frank Shaw declined to name the prospective customers the company turned down. (VOA)


Researchers Develop AI Tool To Detect Mental Health Issues



Researchers, including one of Indian origin, have developed an artificial intelligence (AI) tool that can detect changes in clinical state from the voice data of patients with bipolar disorder, schizophrenia and depressive disorders as accurately as attending doctors.

“Machine learning allowed us to illuminate the various clinically-meaningful dimensions of language use and vocal patterns of the patients over time and personalised at each individual level,” said Indian-origin researcher and study senior author Shri Narayanan from University of Southern California (USC) in the US.

The USC Signal Analysis and Interpretation Lab (SAIL), which has long applied artificial intelligence and machine learning to identify and classify video, audio and physiological data, partnered with researchers to analyse voice data from patients being treated for serious mental illnesses.

For the study, patients used the ‘MyCoachConnect’ interactive voice and mobile tool, created and hosted on the Chorus platform, to record voice diaries about their mental health states.

The SAIL team then collaborated with researchers to apply artificial intelligence, using custom software to listen to hundreds of voicemails and detect changes in patients’ clinical states. According to the study, the AI’s assessments matched clinicians’ ratings of their patients.
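The article does not describe SAIL’s actual methods. As a purely illustrative sketch (synthetic data, generic least-squares model, feature names invented for illustration, none of it from the study), mapping per-diary voice features to clinician-style ratings might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-diary voice features: speech rate, pause fraction,
# pitch variance (stand-ins; the study's real features are not reported here)
n = 200
X = rng.normal(size=(n, 3))

# Synthetic "clinician ratings" generated from an assumed linear relationship
true_w = np.array([0.8, -1.2, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=n)

# Fit linear weights by ordinary least squares
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Check agreement between model predictions and the "clinician" ratings
pred = X @ w
corr = np.corrcoef(pred, y)[0, 1]
print(round(float(corr), 2))
```

On this synthetic data the predicted and “clinician” ratings correlate strongly; a real system would extract features from audio and validate against held-out patients.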

Tracking clinical states is important for detecting whether a patient’s condition has improved or worsened enough to warrant a change in treatment, the researchers said.


This project builds on SAIL’s body of work in behavioural machine intelligence, including analysing psychotherapy sessions to detect empathy in addiction counselors-in-training and improve patient outcomes, as well as the lab’s work analysing language for cognitive diagnoses and legal processes.


“Our approach builds on that fundamental technique to hear what people are saying about using the modern AI. We hope this will help us better understand how our patients are doing and transform mental health care to be more personalised and proactive to what an individual needs,” said study lead author Armen Arevian. (IANS)