
22-Year-Old Sikh Mistaken for a Muslim Abused and Harassed at US Store in Massachusetts

A 22-year-old Sikh was harassed in a store after a man assumed he was Muslim

People of the Sikh community (representational image), Pixabay

Boston, November 21, 2016: Everyone is unique, no matter what community they belong to or what language they speak. But some people fail to see it that way.

Some people generalise about a community, assuming that everyone who belongs to it holds the same beliefs or follows the same path. This leads to hateful marginalisation and harassment. After Donald Trump emerged victorious in the 2016 US presidential election, the number of cases of such hateful harassment increased, with over 200 incidents reported, according to PTI.



A 22-year-old Sikh student at Harvard Law School experienced one such incident of discrimination. He was harassed on November 11 at a store near the campus by someone who assumed he was Muslim. Harmann Singh, from Buffalo, New York, is a first-year law student at Harvard and was on the phone with his mother during the entire incident.

“Over the weekend, I was confronted by a man who called me a ‘(expletive) Muslim’ and followed me around a store aggressively asking where I was from, and no one in the store said a thing. I was on the phone with my mom the entire time, and we were both concerned for my safety as this man stood inches away from me,” Singh wrote in The Boston Globe.


He also wrote: "While deeply painful, what happened to me pales in comparison to the hate and violence many of my brothers and sisters have faced across the country." Singh said the man followed him around the store, repeatedly asking where he was from. Singh tried to ignore him and continue his phone conversation, the PTI report mentioned.

The owner of the store said he saw the man who confronted Singh and intended to ask him to leave, but he was at the back of the store when the incident occurred, and both men had left by the time he returned.


The owner also said that he did not know who the man was and hoped never to see him again.

Harmann said that the most effective way to counter such marginalisation is to be there for each other. He said that even a bystander who steps in to check on the person being harassed can make a difference.

- Prepared by Shivam Thaker of NewsGram. Twitter: @Shivam_Thaker


Scientists Develop AI Tool to Detect Racial, Gender Discrimination

"To avoid discrimination on the basis of race, gender or other attributes you need effective tools for detecting discrimination. Our tool can help with that," he said

"We're beginning to see the first instances of artificial intelligence operating as a mediator between humans, but it's a question of: 'Do people want that?" Pixabay

Scientists have developed a new artificial intelligence (AI) tool for detecting unfair discrimination — such as on the basis of race or gender.

Preventing unfair treatment of individuals on the basis of race, gender or ethnicity, for example, has been a long-standing concern of civilised societies.

However, detecting such discrimination resulting from decisions, whether by human decision makers or automated AI systems, can be extremely challenging.

“Artificial intelligence systems — such as those involved in selecting candidates for a job or for admission to a university — are trained on large amounts of data,” said Vasant Honavar, a professor at Pennsylvania State University (Penn State) in the US.

“But if these data are biased, they can affect the recommendations of AI systems,” Honavar said.

He said that if a company has historically never hired a woman for a particular type of job, an AI system trained on that historical data will not recommend a woman for a new job.

“There’s nothing wrong with the machine learning algorithm itself,” said Honavar.

“It’s doing what it’s supposed to do, which is to identify good job candidates based on certain desirable characteristics. But since it was trained on historical, biased data it has the potential to make unfair recommendations,” he said.
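To make that concrete, here is a minimal, hypothetical sketch in Python (not the researchers' tool): a classifier is trained on synthetic hiring records in which no woman was ever hired, and it then scores an equally skilled woman far below a man. All names and numbers are illustrative.

```python
# Illustrative sketch (synthetic data, not the Penn State tool):
# a model trained on biased hiring history reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)            # skill distributed identically across groups
gender = rng.integers(0, 2, size=n)   # 0 = woman, 1 = man (illustrative coding)
# Historical decisions: skilled men were hired; women never were.
hired = ((skill > 0) & (gender == 1)).astype(int)

model = LogisticRegression(max_iter=1000).fit(
    np.column_stack([skill, gender]), hired
)

# Two candidates with identical skill who differ only in gender:
p_woman = model.predict_proba([[1.5, 0]])[0, 1]
p_man = model.predict_proba([[1.5, 1]])[0, 1]
print(f"woman: {p_woman:.3f}, man: {p_man:.3f}")  # the woman scores far lower
```

The algorithm is, as Honavar puts it, doing what it is supposed to do; the unfairness enters entirely through the training data.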

The team created an AI tool for detecting discrimination with respect to a protected attribute, such as race or gender, by human decision makers or AI systems.

“Artificial intelligence is now one of the fastest-growing areas in all of science and one of the most talked-about topics in society.” VOA

“We can minimise gender-based discrimination in salary if we ensure that similar men and women receive similar salaries,” said Aria Khademi, a graduate student at Penn State.
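The article does not spell out the team's method, which is based on causal inference, but the idea behind Khademi's remark can be sketched with a much simpler matched-pairs check on synthetic data: pair each woman with the most similar man on non-protected attributes and compare salaries. The column names and figures below are hypothetical, not the researchers' actual analysis.

```python
# Hypothetical matched-pairs check: do similar men and women earn similarly?
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "sex": rng.choice(["Female", "Male"], size=n),
    "education_years": rng.integers(8, 21, size=n),
    "hours_per_week": rng.integers(20, 61, size=n),
})
# Synthetic salaries with a built-in $8,000 gender gap for the check to find.
df["salary"] = (1500 * df["education_years"] + 300 * df["hours_per_week"]
                + np.where(df["sex"] == "Male", 8000, 0)
                + rng.normal(0, 2000, size=n))

features = ["education_years", "hours_per_week"]   # non-protected attributes
women, men = df[df["sex"] == "Female"], df[df["sex"] == "Male"]

# For each woman, find the most similar man and compare salaries.
nn = NearestNeighbors(n_neighbors=1).fit(men[features])
_, idx = nn.kneighbors(women[features])
gap = men.iloc[idx.ravel()]["salary"].to_numpy() - women["salary"].to_numpy()
print(f"Mean salary gap among matched pairs: {gap.mean():,.0f}")  # near 8,000
```

If similar individuals who differ only in the protected attribute systematically receive different outcomes, that is the signal a discrimination-detection tool looks for.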

The researchers tested their method using various types of available data, such as income data from the US Census Bureau to determine whether there is gender-based discrimination in salaries.

They also tested their method using the New York City Police Department’s stop-and-frisk programme data to determine whether there is discrimination against people of colour in arrests made after stops.

“We analysed an adult income data set containing salary, demographic and employment-related information for close to 50,000 individuals,” said Honavar.

“We found evidence of gender-based discrimination in salary. Specifically, we found that the odds of a woman having a salary greater than USD 50,000 per year is only one-third that for a man.

“This would suggest that employers should look for and correct, when appropriate, gender bias in salaries,” he said.
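The roughly 50,000-record income data set described here appears to be the public UCI "Adult" census extract. Assuming that, the headline odds ratio can be reproduced in a few lines; the file path and column names below follow the UCI distribution and are assumptions on our part, not details from the article.

```python
# Sketch of the odds-ratio calculation on the UCI "Adult" data set
# (assumed to be the income data the researchers describe).
import pandas as pd

cols = ["age", "workclass", "fnlwgt", "education", "education_num",
        "marital_status", "occupation", "relationship", "race", "sex",
        "capital_gain", "capital_loss", "hours_per_week",
        "native_country", "income"]
df = pd.read_csv("adult.data", names=cols, skipinitialspace=True)

high = df["income"] == ">50K"                  # salary above USD 50,000
p_female = high[df["sex"] == "Female"].mean()  # P(>50K | woman)
p_male = high[df["sex"] == "Male"].mean()      # P(>50K | man)

odds_female = p_female / (1 - p_female)
odds_male = p_male / (1 - p_male)
print(f"Odds ratio (women vs men): {odds_female / odds_male:.2f}")
```

On this data the ratio comes out close to the one-third figure Honavar quotes.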


Although the team’s analysis of the New York stop-and-frisk dataset — which contains demographic and other information about drivers stopped by the New York City police force — revealed evidence of possible racial bias against Hispanic and African American individuals, it found no evidence of discrimination against them on average as a group.

“You cannot correct for a problem if you don’t know that the problem exists,” said Honavar.

“To avoid discrimination on the basis of race, gender or other attributes you need effective tools for detecting discrimination. Our tool can help with that,” he said. (IANS)