
WhatsApp Partners With DEF To Train Community Leaders To Tackle Fake News

The teams will also cover key states such as West Bengal, Assam, Karnataka, Maharashtra, Tripura and Jharkhand by March 2019, the statement said


Aiming to address the challenge of misinformation during the upcoming Rajasthan Assembly polls, WhatsApp on Monday conducted training for community leaders here, in partnership with the Digital Empowerment Foundation (DEF), the Facebook-owned mobile messaging platform said.

The education workshop encouraged WhatsApp users to see themselves as “agents of change”, addressing socio-behavioural change and empowering them to spot false news.

The training also enabled them to differentiate between rumours and opinions, shared subsequent steps to tackle instances of false news, and offered tips to stay safe on WhatsApp.

“WhatsApp is proud to have played a part in helping millions of people in Rajasthan to freely connect with their loved ones anywhere in the world. These trainings are a key part of our strategy to help people stay safe and limit the spread of harmful rumours this election season,” said Ben Supple, Public Policy Manager, WhatsApp, in a statement.

The curriculum further delved into how users can contact fact-checking organisations like Altnews and Boom Live to accurately verify information when they are in doubt.

The training was attended by over 100 participants, including officials from local government administrations and law enforcement authorities, college students, NGOs and community leaders dedicated to the technological empowerment of their society, especially villages and semi-urban centres.

WhatsApp on a smartphone device.

“While the problem of misinformation is not restricted to rural areas alone, it is the rural population that majorly lacks access to alternative news sources for the sake of verification,” said Osama Manzar, Founder and Director, DEF.

“We see education as the only solution to this problem, and we know that when we teach them some basic verification techniques, they’re going to tell at least two other people about it, creating a ripple effect and potentially fighting misinformation.”

Additionally, WhatsApp and DEF will organise workshops as a part of their Community Information Resource Centre (CIRC), where they will conduct training sessions targeted at grassroots communities in rural areas across five states in India, the company said.

In August, WhatsApp was asked by the central government to take steps to stop the spread of disinformation on its platform.


WhatsApp roped in New Delhi-based non-profit DEF and initiated a series of educational workshops in 10 key election states including Madhya Pradesh, Chhattisgarh, Mizoram, Rajasthan and Telangana.

The teams will also cover key states such as West Bengal, Assam, Karnataka, Maharashtra, Tripura and Jharkhand by March 2019, the statement said. (IANS)


Musk-founded AI Group Not to Release Software on ‘Fake News’ Fears

OpenAI said governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies

Tesla CEO Elon Musk. (VOA)

Elon Musk-founded non-profit Artificial Intelligence (AI) research group OpenAI has decided not to reveal its new AI software in detail, fearing the model could be misused by bad actors to create real-looking fake news.

Dubbed “GPT-2”, the AI-based automated text generator can produce fake news articles and abusive posts after being fed a few pieces of data.

“We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text and performs rudimentary reading comprehension, machine translation, question answering and summarization, all without task-specific training,” OpenAI said in a blog post late on Thursday.

Trained on a data set of eight million web pages, “GPT-2” can adapt to the style and the content of the text you feed it.

OpenAI said the AI model is so good and the risk of malicious use is so high that it is not releasing the full research to the public.

However, the non-profit has created a smaller model that lets researchers experiment with the algorithm to see what kind of text it can generate and what other sorts of tasks it can perform.
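For readers curious what experimenting with the smaller released model looks like in practice, here is a minimal, illustrative sketch of generating a text continuation from a prompt. It assumes the small public GPT-2 checkpoint loaded through the Hugging Face “transformers” library; the library, the model name (“gpt2”), the prompt and the sampling settings are assumptions for illustration and are not specified in the article.

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the small released checkpoint (an assumption; the article names no tooling).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Any short prompt works; the model continues it, adapting to its style and content.
prompt = "Officials said on Monday that"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation of up to 80 tokens.
outputs = model.generate(
    **inputs,
    max_length=80,
    do_sample=True,                        # sample instead of greedy decoding
    top_k=50,                              # restrict sampling to the 50 most likely tokens
    pad_token_id=tokenizer.eos_token_id,   # GPT-2 has no pad token; reuse end-of-text
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because sampling is used, each run produces a different continuation, which is the behaviour the article describes: a generator that picks up the style of whatever text it is fed.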

Elon Musk, CEO of SpaceX. (Wikimedia Commons)

“We can imagine the application of these models for malicious purposes, including the following: Generate misleading news articles, impersonate others online, automate the production of abusive or faked content to post on social media and automate the production of spam/phishing content,” said OpenAI.

Today, malicious actors – some of which are political in nature – have already begun to target the shared online commons, using things like “robotic tools, fake accounts and dedicated teams to troll individuals with hateful commentary or smears that make them afraid to speak, or difficult to be heard or believed”.

OpenAI further said that people should consider how research into the generation of synthetic images, videos, audio and text may combine to unlock new, as-yet-unanticipated capabilities for these bad actors.


Musk, a staunch critic of AI who co-founded OpenAI in late 2015, stepped down from its board in 2018.

OpenAI said governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies. (IANS)