Monday April 23, 2018

Walt Disney releases an interactive video of ‘The Jungle Book’


The Walt Disney Studios has released an interactive 360 video and virtual reality experience inspired by its upcoming live-action adaptation of The Jungle Book — exclusively on Facebook.

Viewers are plopped into Mowgli’s metaphorical shoes as he meets the mesmerizing snake Kaa (voiced by Scarlett Johansson) for the first time in “Through Mowgli’s Eyes Part 1: Trust in Me,” which arrived on The Jungle Book Facebook page Thursday and is available as a VR experience in select AMC IMAX theaters across the country.

Following the unprecedented simultaneous release, the studio will release additional content that will transport audiences deeper into the film’s world. Director Jon Favreau personally oversaw production and direction of both experiences, working closely with the VFX teams that created the incredible CG animal characters and settings of the film.

The “Mowgli” VR tour in IMAX will hit AMC Theatres in Los Angeles, Houston and Boston (3/24-3/27); San Diego, Kansas City and New York (4/1-4/3); San Francisco, Chicago and Philadelphia (4/8-4/10); Seattle, Minneapolis and Washington D.C. (4/14-4/17).

Credits: Animation Magazine


Google AI can focus on individual speakers in a crowd


Google AI to identify speakers from crowd. Wikimedia Commons

Just as most smartphone cameras now allow users to focus on a single object among many, it may soon be possible to pick out individual voices in a crowd by suppressing all other sounds, thanks to a new Artificial Intelligence (AI) system developed by Google researchers.

This is an important development, as computers are not as good as humans at focusing their attention on a particular person in a noisy environment. Known as the cocktail party effect, the capability to mentally “mute” all other voices and sounds comes naturally to us humans.

Google AI will identify individual speakers now. Wikimedia Commons

However, automatic speech separation — separating an audio signal into its individual speech sources — remains a significant challenge for computers, Inbar Mosseri and Oran Lang, software engineers at Google Research, wrote in a blog post this week. In a new paper, the researchers presented a deep learning audio-visual model for isolating a single speech signal from a mixture of sounds such as other voices and background noise.

“In this work, we are able to computationally produce videos in which speech of specific people is enhanced while all other sounds are suppressed,” Mosseri and Lang said. The method works on ordinary videos with a single audio track, and all that is required from the user is to select the face of the person in the video they want to hear, or to have such a person be selected algorithmically based on context.
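The core idea the researchers describe — select a speaker and suppress everything else in a single mixed audio track — is commonly done with time-frequency masking. The following is a minimal illustrative sketch, not Google’s actual model: the per-speaker masks here are ideal stand-ins for what their audio-visual network would predict from the audio and the selected face.

```python
import numpy as np

rng = np.random.default_rng(0)

# Magnitude spectrograms of two individual speakers
# (frequency bins x time frames); synthetic data for illustration.
speaker_a = rng.random((64, 100))
speaker_b = rng.random((64, 100))
mixture = speaker_a + speaker_b  # the single mixed audio track

# An ideal ratio mask: each speaker's share of the energy in every
# time-frequency bin. A trained model would *predict* such a mask
# from the mixture plus visual features of the chosen face.
mask_a = speaker_a / (speaker_a + speaker_b)

# Selecting speaker A applies their mask to the shared mixture,
# enhancing their speech and suppressing the other source.
enhanced_a = mask_a * mixture

# For an ideal mask this recovers speaker A's spectrogram exactly;
# a learned mask would only approximate it.
print(np.allclose(enhanced_a, speaker_a))  # True
```

The point of the sketch is the separation step itself: once a mask tied to one speaker exists, “focusing” on that speaker is just elementwise multiplication against the mixture.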


The researchers believe this capability can have a wide range of applications, from speech enhancement and recognition in videos, through video conferencing, to improved hearing aids, especially in situations where there are multiple people speaking. “A unique aspect of our technique is in combining both the auditory and visual signals of an input video to separate the speech,” the researchers said.

This will also help in speech enhancement. VOA

“Intuitively, movements of a person’s mouth, for example, should correlate with the sounds produced as that person is speaking, which in turn can help identify which parts of the audio correspond to that person,” they explained.

The visual signal not only improves the speech separation quality significantly in cases of mixed speech, but, importantly, it also associates the separated, clean speech tracks with the visible speakers in the video, the researchers said. IANS
