Online video giant YouTube is introducing a major change to its music chart system after discovering that artists and labels were using growth hacks to inflate how many people were watching their videos.
YouTube will no longer count “advertising views” when calculating its music charts. Instead, rankings for top-watched music videos will be based on organic plays, The Verge quoted a new blog post as saying on Friday.
“In an effort to provide more transparency to the industry and align with the policies of official charting companies such as Billboard and Nielsen, we are no longer counting paid advertising views on YouTube in the YouTube Music Charts calculation. Artists will now be ranked based on view counts from organic plays,” the company wrote in a blog post.
The company is also changing its methodology for reporting 24-hour record debuts to count only views from organic sources, such as direct links to the video and search results.
“Videos eligible for YouTube’s 24-hour record debuts are those with the highest views from organic sources within the first 24 hours of the video’s public release. This includes direct links to the video, search results, external sites that embed the video and YouTube features like the homepage, watch next and Trending,” the blog post added.
As tech firms scramble to tackle the spread of deepfake videos online, new research has claimed that 96 per cent of such videos contain pornographic material targeting female celebrities.
The researchers from Deeptrace, a Netherlands-based cybersecurity company, also found that the top four websites dedicated to deepfake pornography received more than 134 million video views.
“This significant viewership demonstrates a market for websites creating and hosting deepfake pornography, a trend that will continue to grow unless decisive action is taken,” said Giorgio Patrini, Founder, CEO and Chief Scientist at Deeptrace.
“The rise of synthetic media and deepfakes is forcing us towards an important and unsettling realization: our historical belief that video and audio are reliable records of reality is no longer tenable,” he added.
“Deepfakes” are video forgeries that make people appear to be saying things they never did, like the popular forged videos of Facebook CEO Mark Zuckerberg and US House Speaker Nancy Pelosi that went viral recently.
Facebook has partnered with Microsoft, Massachusetts Institute of Technology (MIT) and other institutions to fight ‘deepfakes’ and has committed $10 million towards creating open source tools that can better detect if a video has been doctored.
“Deepfake” techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online.
Since its founding in 2018, Deeptrace has been dedicated to researching the evolving capabilities and threats of deepfakes, providing crucial intelligence for enhancing its detection technology.
The research revealed that the deepfake phenomenon is growing rapidly online, with the number of deepfake videos almost doubling over the last seven months to 14,678.
This increase is supported by the growing commodification of tools and services that lower the barrier for non-experts to create deepfakes.
“Perhaps unsurprisingly, we observed a significant contribution to the creation and use of synthetic media tools from web users in China and South Korea, despite the totality of our sources coming from the English-speaking Internet,” Patrini said in a statement.
Deepfakes are also making a significant impact on the political sphere.
“Outside of politics, the weaponization of deepfakes and synthetic media is influencing the cybersecurity landscape, enhancing traditional cyber threats and enabling entirely new attack vectors,” said the company.
To fight the growing menace, Facebook, the Partnership on AI, Microsoft, and academics from Cornell Tech, MIT, the University of Oxford, the University of California-Berkeley, the University of Maryland, College Park, and the University at Albany-SUNY are coming together to build the Deepfake Detection Challenge (DFDC).
According to Professor Rama Chellappa of the University of Maryland, “given the recent developments in being able to generate manipulated information (text, images, videos, and audio) at scale, we need the full involvement of the research community in an open environment to develop methods and systems that can detect and mitigate the ill effects of manipulated multimedia”. (IANS)