Los Angeles: “The Jungle Book”, filmmaker Jon Favreau’s upcoming 3D adventure-fantasy film, stars Neel Sethi, a 12-year-old Indian-American boy, as Mowgli, who survives in the jungle among various species of animals.
The trailer of “The Jungle Book” was released on Sunday during the Super Bowl broadcast.
The trailer opens with Mowgli, who is raised by Indian wolves, being chased by a black panther who gains ground and leaps on top of him.
The black panther Bagheera, voiced by actor Ben Kingsley, stands over Mowgli and says, “If you can’t learn to run with the pack, one of these days you’ll be someone’s dinner.”
The Bengal tiger Shere Khan, voiced by Idris Elba, is then seen sniffing out Mowgli during a gathering of jungle dwellers.
Shere Khan suddenly gives chase, and Mowgli narrowly escapes.
Mowgli then embarks on a journey of self-discovery; along the way he is lured by the seductive python Kaa, voiced by Scarlett Johansson, and contends with the silver-tongued orangutan King Louie, voiced by Christopher Walken.
Bagheera and the friendly bear Baloo, voiced by Bill Murray, aid Mowgli as he faces the dangers of the jungle.
A remake of the 1967 film of the same name, “The Jungle Book” is set for release on April 15. (IANS)
A team of researchers has used Artificial Intelligence (AI) to turn two-dimensional (2D) images into stacks of virtual three-dimensional (3D) slices showing activity inside organisms.
Using deep learning techniques, the team from the University of California, Los Angeles (UCLA) devised a technique that extends the capabilities of fluorescence microscopy, which allows scientists to precisely label parts of living cells and tissue with dyes that glow under special lighting.
In a study published in the journal Nature Methods, the scientists also reported that their framework, called “Deep-Z,” was able to fix errors or aberrations in images, such as when a sample is tilted or curved.
Further, they demonstrated that the system could take 2D images from one type of microscope and virtually create 3D images of the sample as if they were obtained by another, more advanced microscope.
“This is a very powerful new method that is enabled by deep learning to perform 3D imaging of live specimens, with the least exposure to light, which can be toxic to samples,” said senior author Aydogan Ozcan, UCLA Chancellor’s Professor of Electrical and Computer Engineering.
In addition to sparing specimens from potentially damaging doses of light, this system could offer biologists and life science researchers a new tool for 3D imaging that is simpler, faster and much less expensive than current methods.
The opportunity to correct for aberrations may allow scientists studying live organisms to collect data from images that otherwise would be unusable.
Investigators could also gain virtual access to expensive and complicated equipment, the researchers said.
“Deep-Z” was taught using experimental images from a scanning fluorescence microscope, which takes pictures focused at multiple depths to achieve 3D imaging of samples.
In thousands of training runs, the neural network learned how to take a 2D image and infer accurate 3D slices at different depths within a sample.
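The training idea described above — a model that takes a 2D image plus a target depth and learns, from example pairs, to produce the slice at that depth — can be sketched in miniature. Everything below is an illustrative assumption, not the authors’ actual architecture or data: a toy “model” with one learned scalar per depth stands in for the deep network, and synthetic slices stand in for microscope images.

```python
import numpy as np

# Toy sketch of the Deep-Z training idea (all names, shapes, and the model
# itself are assumptions for illustration, not the study's network):
# learn to map a 2D image + target depth to the virtual slice at that depth.
rng = np.random.default_rng(0)

H = W = 8  # tiny "image" for illustration

# Synthetic ground truth: a slice is the surface image attenuated by depth.
def true_slice(img, z):
    return img * np.exp(-z)

depths = np.array([0.0, 0.5, 1.0])   # depths the model is trained to infer
weights = np.ones_like(depths)       # one learned scalar per depth

lr = 0.1
for step in range(500):              # many training runs on (image, depth, slice) pairs
    img = rng.random((H, W))
    zi = rng.integers(len(depths))
    target = true_slice(img, depths[zi])
    pred = weights[zi] * img
    # Gradient of the mean-squared error with respect to the depth weight.
    grad = 2 * np.mean((pred - target) * img)
    weights[zi] -= lr * grad

print(np.round(weights, 3))  # converges toward exp(-depths)
```

After training, each per-depth weight recovers the attenuation factor, which is the toy analogue of the network learning to infer an accurate slice at each requested depth.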
Then the framework was tested blindly: it was fed images that were not part of its training, and the virtual images it produced were compared with actual 3D slices obtained from a scanning microscope, yielding an excellent match.
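The blind comparison step — scoring virtual slices against ground-truth slices from the scanning microscope — can be sketched with a simple image-similarity metric. The data, metric, and threshold here are illustrative assumptions; the study evaluated real microscope images, not synthetic arrays.

```python
import numpy as np

rng = np.random.default_rng(1)

def ncc(a, b):
    """Normalized cross-correlation of two images (1.0 = identical up to scale/offset)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Stand-ins: 5 "actual" slices, and "virtual" slices that closely match them.
ground_truth = rng.random((5, 16, 16))
virtual = ground_truth + 0.01 * rng.standard_normal(ground_truth.shape)

scores = [ncc(v, g) for v, g in zip(virtual, ground_truth)]
print([round(s, 3) for s in scores])  # scores near 1.0 indicate a close match
```

A score near 1.0 for every held-out slice is the toy analogue of the “excellent match” the researchers report between virtual and actual 3D slices.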
The researchers also found that Deep-Z could produce 3D images from 2D surfaces where samples were tilted or curved.
“This feature was actually very surprising,” said Yichen Wu, a UCLA graduate student who is co-first author of the publication. “With it, you can see through curvature or other complex topology that is very challenging to image.” (IANS)