
Adobe Training AI to Detect Images Edited Using Photoshop

Adobe's Photoshop software was originally released in 1990

The headquarters of Adobe Systems in San Jose, California. Wikimedia Commons

Adobe, along with researchers from the University of California, Berkeley, has trained Artificial Intelligence (AI) to detect facial manipulation in images edited using its Photoshop software.

At a time when deepfake visual content is becoming more common and more deceptive, the effort is also intended to make image forensics understandable to everyone.

“This new research is part of a broader effort across Adobe to better detect image, video, audio and document manipulations,” the company wrote in a blog post on Friday.

In testing, human eyes identified the altered face 53 per cent of the time, while the trained neural network tool achieved accuracy as high as 99 per cent.

Adobe, along with researchers from the University of California, Berkeley, has trained AI to detect facial manipulation in images edited using Photoshop. Pixabay

The tool also identified specific areas and methods of facial warping.
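The article does not include the model itself, but the two capabilities it describes can be sketched: a classifier that flags a warped face, and a predicted per-pixel displacement field that localizes the warp and can be reversed to approximately undo it. Below is a minimal, hypothetical PyTorch sketch; `WarpDetector` and `unwarp` are illustrative names, not Adobe's code, and the architecture is an assumption for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WarpDetector(nn.Module):
    """Toy two-headed CNN: a manipulated-vs-original score plus a
    per-pixel displacement ("flow") field suggesting where and how a
    face was warped. Hypothetical architecture, not Adobe's model."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.cls_head = nn.Linear(64, 1)                 # real/fake logit
        self.flow_head = nn.Conv2d(64, 2, 3, padding=1)  # (dx, dy) per pixel

    def forward(self, img):
        feat = self.backbone(img)
        logit = self.cls_head(feat.mean(dim=(2, 3)))     # global pooling
        flow = self.flow_head(feat)                      # low-res flow field
        flow = F.interpolate(flow, size=img.shape[2:], mode="bilinear",
                             align_corners=False)
        return logit, flow

def unwarp(img, flow):
    """Approximately undo the predicted warp by sampling each output
    pixel from its displaced source location; the flow is treated as
    already normalized to grid_sample's [-1, 1] coordinates."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
    grid = base + flow.permute(0, 2, 3, 1)  # displace the sampling grid
    return F.grid_sample(img, grid, align_corners=False)

img = torch.rand(1, 3, 128, 128)        # stand-in face crop
model = WarpDetector()
logit, flow = model(img)
print(torch.sigmoid(logit).item())      # probability the face was edited
restored = unwarp(img, flow)            # attempted "undo"
```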

Adobe’s work on detecting facial manipulation came just days after doctored videos of Facebook CEO Mark Zuckerberg and US House Speaker Nancy Pelosi made the rounds on social media and news channels.


“This is an important step in being able to detect certain types of image editing, and the undo capability works surprisingly well. Beyond technologies like this, the best defence will be a sophisticated public who know that content can be manipulated, often to delight them, but sometimes to mislead them as well,” said Gavin Miller, Head of Research, Adobe.

Adobe’s Photoshop software was originally released in 1990. (IANS)


Researchers Use AI To Turn 2D Images Into 3D

Artificial Intelligence can now be used to convert 2D images into 3D

Researchers used AI to turn two-dimensional (2D) images into stacks of virtual three-dimensional (3D) slices showing activity inside organisms. Pixabay

A team of researchers has used Artificial Intelligence (AI) to turn two-dimensional (2D) images into stacks of virtual three-dimensional (3D) slices showing activity inside organisms.

Using deep learning, the team from the University of California, Los Angeles (UCLA) devised a technique that extends the capabilities of fluorescence microscopy, which allows scientists to precisely label parts of living cells and tissue with dyes that glow under special lighting.

In a study published in the journal Nature Methods, the scientists also reported that their framework, called “Deep-Z,” was able to fix errors or aberrations in images, such as when a sample is tilted or curved.

Further, they demonstrated that the system could take 2D images from one type of microscope and virtually create 3D images of the sample as if they were obtained by another, more advanced microscope.

“This is a very powerful new method that is enabled by deep learning to perform 3D imaging of live specimens, with the least exposure to light, which can be toxic to samples,” said senior author Aydogan Ozcan, UCLA chancellor’s professor of electrical and computer engineering.

In addition to sparing specimens from potentially damaging doses of light, this system could offer biologists and life science researchers a new tool for 3D imaging that is simpler, faster and much less expensive than current methods.

3D imaging that shows the activity inside organisms has been made easier with the use of AI. Pixabay

The opportunity to correct for aberrations may allow scientists studying live organisms to collect data from images that otherwise would be unusable.

Investigators could also gain virtual access to expensive and complicated equipment, the researchers said.

“Deep-Z” was taught using experimental images from a scanning fluorescence microscope, which takes pictures focused at multiple depths to achieve 3D imaging of samples.

In thousands of training runs, the neural network learned how to take a 2D image and infer accurate 3D slices at different depths within a sample.
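As a rough illustration of that idea (not the published Deep-Z code, which the article does not include), inference might look like the numpy sketch below: pair the single 2D image with a "target depth" channel, query the trained network once per depth, and stack the outputs into a virtual z-stack. Here `trained_net` is a stand-in for the learned model, and the two-channel image-plus-depth-plane input is an assumption made for illustration.

```python
import numpy as np

def trained_net(two_channel_input):
    """Stand-in for the trained Deep-Z-style model, assumed here to map
    a (2, H, W) array -- the 2D fluorescence image plus a target-depth
    plane -- to the virtually refocused (H, W) image at that depth.
    Placeholder math only; the real model is a learned neural network."""
    image, depth_plane = two_channel_input
    return image * np.exp(-abs(float(depth_plane.mean())) / 10.0)

def virtual_z_stack(image, depths_um):
    """Build a virtual 3D stack from a single 2D image by querying the
    network once per requested depth."""
    slices = []
    for z in depths_um:
        depth_plane = np.full_like(image, z)  # uniform plane -> flat slice
        slices.append(trained_net(np.stack([image, depth_plane])))
    return np.stack(slices)                   # shape: (num_depths, H, W)

image = np.random.rand(256, 256).astype(np.float32)  # one 2D snapshot
stack = virtual_z_stack(image, np.linspace(-10, 10, 21))
print(stack.shape)  # (21, 256, 256)
```

One appeal of this design is that a single trained network serves every depth: the requested depth is just part of the input, so no refocusing hardware or per-depth retraining is needed.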

Then the framework was tested blindly: fed images that were not part of its training, it produced virtual images that closely matched the actual 3D slices obtained from a scanning microscope.
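A blind test like this can be scored slice by slice. The article does not say which agreement metric the team used; the sketch below assumes Pearson correlation purely as an example.

```python
import numpy as np

def slice_correlation(virtual, actual):
    """Pearson correlation between a virtual slice and the matching
    mechanically scanned slice; 1.0 would be a perfect match."""
    v = virtual.ravel() - virtual.mean()
    a = actual.ravel() - actual.mean()
    return float((v @ a) / (np.linalg.norm(v) * np.linalg.norm(a)))

# compare every depth in the virtual stack against the scanned ground truth
virtual_stack = np.random.rand(21, 256, 256)
scanned_stack = virtual_stack + 0.01 * np.random.rand(21, 256, 256)
scores = [slice_correlation(v, a) for v, a in zip(virtual_stack, scanned_stack)]
print(min(scores))  # worst per-slice agreement across the stack
```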

The researchers also found that Deep-Z could produce 3D images from 2D surfaces where samples were tilted or curved.
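Continuing the earlier numpy sketch, a tilted sample could in principle be described by feeding the network a depth plane whose values vary across the image rather than a constant one. This is again purely illustrative, not the published implementation.

```python
import numpy as np

h, w = 256, 256
# depth rises linearly from -5 um to +5 um across the field of view,
# describing a tilted output surface rather than a flat slice
tilted_plane = np.tile(np.linspace(-5, 5, w, dtype=np.float32), (h, 1))
# tilted_plane could replace the uniform np.full_like(...) plane above
```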


“This feature was actually very surprising,” said Yichen Wu, a UCLA graduate student who is co-first author of the publication. “With it, you can see through curvature or other complex topology that is very challenging to image.” (IANS)