
Researchers Teaching Artificial Intelligence to Connect Senses Like Vision and Touch

The new AI-based system can create realistic tactile signals from visual inputs


A team of researchers at the Massachusetts Institute of Technology (MIT) has developed a predictive Artificial Intelligence (AI) system that can learn to see by touching and to feel by seeing.

While our sense of touch gives us the ability to feel the physical world, our eyes help us understand the full picture of these tactile signals.

Robots that have been programmed to see or feel, however, cannot use these signals quite as interchangeably.

The new AI-based system can create realistic tactile signals from visual inputs, and predict which object and what part is being touched directly from those tactile inputs.


In the future, this could help forge a more harmonious relationship between vision and robotics, especially for object recognition, grasping, better scene understanding, and seamless human-robot integration in assistive or manufacturing settings.

“By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge,” said Yunzhu Li, a PhD student and lead author from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

“By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings,” Li added.

The team used a KUKA robot arm with a special tactile sensor called GelSight, designed by another group at MIT.


Using a simple web camera, the team recorded nearly 200 objects, such as tools, household products, fabrics, and more, being touched more than 12,000 times.

Breaking those 12,000 video clips down into static frames, the team compiled “VisGel,” a dataset of more than three million visual/tactile-paired images.
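To make the pairing concrete, the sketch below shows one way visual and tactile frames recorded in sync might be assembled into such a dataset. The file layout, function names, and frame-sampling step are assumptions for illustration only, not the authors' actual pipeline.

```python
# Hypothetical sketch: pair webcam frames with GelSight tactile frames
# recorded during the same touch. Paths and the pairing-by-index logic
# are illustrative assumptions, not the VisGel build code.
import cv2  # OpenCV, assumed available for reading video clips

def extract_frames(video_path, step=5):
    """Sample every `step`-th frame from a video clip."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

def build_pairs(webcam_clip, gelsight_clip):
    """Pair visual and tactile frames, assuming the clips are time-aligned."""
    visual = extract_frames(webcam_clip)
    tactile = extract_frames(gelsight_clip)
    n = min(len(visual), len(tactile))
    return list(zip(visual[:n], tactile[:n]))

# Example: one recorded touch of one object contributes a set of image pairs.
pairs = build_pairs("touches/object01_webcam.mp4", "touches/object01_gelsight.mp4")
```

Repeating this over every recorded touch of every object would yield the kind of large visual/tactile-paired collection the team describes.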

“Bringing these two senses (vision and touch) together could empower the robot and reduce the data we might need for tasks involving manipulating and grasping objects,” said Li.

The current dataset only has examples of interactions in a controlled environment.


The team hopes to improve on this by collecting data in more unstructured environments, or by using a new MIT-designed tactile glove, to increase the size and diversity of the dataset.

“This is the first method that can convincingly translate between visual and touch signals,” said Andrew Owens, a post-doc at the University of California at Berkeley.


The team is set to present the findings next week at the “Conference on Computer Vision and Pattern Recognition” in Long Beach, California. (IANS)


Researchers Develop Artificial Intelligence Tool That Uses Chest X-Rays to Predict Long-Term Mortality

Each image was paired with a key piece of data: Did the person die over a 12-year period?


Researchers have developed an Artificial Intelligence (AI)-powered tool that can harvest information in chest X-rays to predict long-term mortality.

The findings of this study, published in the journal JAMA Network Open, could help to identify patients most likely to benefit from screening and preventive medicine for heart disease, lung cancer and other conditions.

“This is a new way to extract prognostic information from everyday diagnostic tests,” said one of the researchers, Michael Lu, from Massachusetts General Hospital (MGH) of Harvard Medical School. “It’s information that’s already there that we’re not using, that could improve people’s health,” Lu said.

Lu and his colleagues developed a convolutional neural network, an AI tool for analysing visual information, called CXR-risk.


It was trained by having the network analyse more than 85,000 chest X-rays from 42,000 participants who took part in an earlier clinical trial. Each image was paired with a key piece of data: Did the person die over a 12-year period? The goal was for CXR-risk to learn the features or combinations of features on a chest X-ray image that best predict health and mortality.
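In broad strokes, that training setup treats each X-ray as an image paired with a binary outcome (died within 12 years or not) and optimises a convolutional network to predict it. The minimal PyTorch sketch below only illustrates that idea; the architecture, input size, and names are assumptions, not the published CXR-risk model.

```python
# Minimal sketch of training a CNN on images paired with a binary
# 12-year mortality label. Architecture and hyperparameters are
# illustrative assumptions, not the actual CXR-risk network.
import torch
import torch.nn as nn

class TinyCXRNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)  # one logit: risk of death

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCXRNet()
criterion = nn.BCEWithLogitsLoss()              # binary outcome: died / survived
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of grayscale X-rays.
images = torch.randn(8, 1, 224, 224)            # batch of 8 chest X-rays
labels = torch.randint(0, 2, (8, 1)).float()    # 1 = died within 12 years
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```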


Next, Lu and colleagues tested CXR-risk using chest X-rays for 16,000 patients from two earlier clinical trials. They found that 53 per cent of the people the neural network identified as “very high risk” died over the 12-year period, compared to fewer than four per cent of those it labeled “very low risk.”
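The comparison reported above amounts to stratifying the test set by the network's predicted risk and then measuring the observed death rate within each group. The snippet below sketches that evaluation step generically; the quantile-based grouping and the names used are assumptions rather than the study's exact protocol.

```python
# Generic sketch: bucket patients by predicted risk score and report the
# observed 12-year death rate per bucket. The grouping scheme is an assumption.
import numpy as np

def mortality_by_risk_group(scores, died, n_groups=5):
    """scores: predicted risk per patient; died: 1 if dead within 12 years."""
    scores = np.asarray(scores, dtype=float)
    died = np.asarray(died, dtype=float)
    edges = np.quantile(scores, np.linspace(0, 1, n_groups + 1))[1:-1]
    groups = np.digitize(scores, edges)  # 0 = lowest risk ... n_groups-1 = highest
    return {g: died[groups == g].mean() for g in range(n_groups)}
```

A large gap between the highest- and lowest-risk groups, like the 53 per cent versus under four per cent figures above, is what indicates the score carries real prognostic signal.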

The study found that CXR-risk provided information that predicts long-term mortality, independent of radiologists’ readings of the X-rays and other factors, such as age and smoking status. Lu believes this new tool will be even more accurate when combined with other risk factors, such as genetics and smoking status. (IANS)