Saturday November 23, 2019

Researchers Teaching Artificial Intelligence to Connect Senses Like Vision and Touch

The new AI-based system can create realistic tactile signals from visual inputs


A team of researchers at the Massachusetts Institute of Technology (MIT) has come up with a predictive Artificial Intelligence (AI) system that can learn to see by touching and to feel by seeing.

While our sense of touch gives us capabilities to feel the physical world, our eyes help us understand the full picture of these tactile signals.

Robots, however, that have been programmed to see or feel can’t use these signals quite as interchangeably.

The new AI-based system can create realistic tactile signals from visual inputs, and predict which object and what part is being touched directly from those tactile inputs.


In the future, this could help with a more harmonious relationship between vision and robotics, especially for object recognition, grasping, better scene understanding and helping with seamless human-robot integration in an assistive or manufacturing setting.

“By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge,” said Yunzhu Li, a PhD student and lead author from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

“By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings,” Li added.

The team used a KUKA robot arm with a special tactile sensor called GelSight, designed by another group at MIT.


Using a simple web camera, the team recorded nearly 200 objects, such as tools, household products, fabrics, and more, being touched more than 12,000 times.

Breaking those 12,000 video clips down into static frames, the team compiled “VisGel,” a dataset of more than three million visual/tactile-paired images.
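The pairing step described above can be sketched in a few lines. This is an illustrative sketch only, not MIT's actual pipeline: the clip structure, frame names, and counts below are invented, and the real VisGel dataset pairs camera frames with GelSight sensor readings captured at the same moment.

```python
# Hypothetical sketch: flattening recorded touch clips into
# (visual frame, tactile frame) pairs, one pair per instant.

def build_paired_dataset(clips):
    """Pair synchronized visual and tactile frames from each clip."""
    pairs = []
    for clip in clips:
        # Only instants present in both streams can be paired.
        n = min(len(clip["visual"]), len(clip["tactile"]))
        for i in range(n):
            pairs.append((clip["visual"][i], clip["tactile"][i]))
    return pairs

# Toy example: two clips with three and two synchronized frames.
clips = [
    {"visual": ["v0", "v1", "v2"], "tactile": ["t0", "t1", "t2"]},
    {"visual": ["v3", "v4"], "tactile": ["t3", "t4"]},
]
pairs = build_paired_dataset(clips)
```

Scaled up to the roughly 12,000 recorded clips, this kind of flattening is how a few hundred objects can yield millions of paired training images.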

“Bringing these two senses (vision and touch) together could empower the robot and reduce the data we might need for tasks involving manipulating and grasping objects,” said Li.

The current dataset only has examples of interactions in a controlled environment.


The team hopes to improve this by collecting data in more unstructured areas, or by using a new MIT-designed tactile glove, to better increase the size and diversity of the dataset.

“This is the first method that can convincingly translate between visual and touch signals,” said Andrew Owens, a postdoc at the University of California, Berkeley.


The team is set to present the findings next week at the “Conference on Computer Vision and Pattern Recognition” in Long Beach, California. (IANS)


Is Oracle Digital Assistant Smarter Than Amazon Alexa? Find Out Here!

Here's why Oracle's Digital Assistant may be better than Amazon's Alexa

Amazon Alexa may lag behind Oracle Digital Assistant. Pixabay

Alexa may be your perfect living room assistant, but when it comes to specific queries with particular vocabulary from enterprises, it lags behind in rich capabilities that Oracle Digital Assistant (ODA) has to offer, a top company executive has stressed.

Oracle Digital Assistant can become your intelligent front-end — your smart router that’s able to send all your specific questions to relevant bots, according to Suhas Uliyar, VP-Product Management, Oracle Digital Assistant and Integration Cloud.

“It knows how to handle conflicts, manage security and so on and so forth. It has got Artificial Intelligence (AI) and is AI-trained so we call the routing of your questions to be relevant, and call it a skill now. Instead of bot, it’s a skill,” Uliyar told IANS during an interaction.

According to him, although Oracle Digital Assistant works on a model similar to Alexa's, there are a couple of differences.

“Alexa is very implicit, where you have to say — Alexa, ask this skill to do something. While Oracle Digital Assistant is both explicit and implicit and you don’t need to sort of say ‘go ask the HCM (Human Capital Management) bot,’ for instance. It’ll just figure out that the question is for HCM bot and will answer accordingly,” Uliyar elaborated.
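The implicit routing Uliyar describes can be illustrated with a toy dispatcher: the assistant scores an utterance against each skill's vocabulary and sends it to the best match, so the user never has to name the bot. The skill names, keyword sets, and fallback below are invented for illustration; ODA's real router is AI-trained, not keyword-based.

```python
# Hypothetical sketch of implicit skill routing. Each skill has an
# invented vocabulary; a real system would use a trained model.

SKILLS = {
    "hcm": {"vacation", "payroll", "leave", "benefits"},
    "erp": {"invoice", "ebitda", "ledger", "expenses"},
}

def route(utterance):
    """Return the skill whose vocabulary best overlaps the utterance."""
    words = set(utterance.lower().split())
    scores = {name: len(words & vocab) for name, vocab in SKILLS.items()}
    best = max(scores, key=scores.get)
    # Fall back to a general skill when nothing matches.
    return best if scores[best] > 0 else "smalltalk"
```

With this kind of front-end, a question like "How many vacation days do I have left" lands on the HCM skill without the user ever saying "ask the HCM bot".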

The other difference lies in the word ‘assistant’ itself. According to him, the term is overloaded: a real assistant is smart enough to understand who you are, what your preferences are and how you work.

“Most of the chatbots respond to a simple question and answer. Next time, it will probably remember who you are. So, the whole context is memory, and also the process side of things,” the Oracle executive added.

Oracle Digital Assistant provides the platform and tools to easily build AI-powered assistants that connect to your backend applications.

The digital assistant uses AI for natural language processing and understanding, to automate engagements with conversational interfaces that respond instantly, improve user satisfaction, and increase business efficiencies.

Most of those voice-enabled application programming interfaces (APIs) are being trained using what’s called Open Common Domain Models, which means that it understands our normal speaking style and content.

“What if an enterprise has a specific vocabulary? For example, a very common thing in Enterprise Resource Planning (ERP) is what’s EBITDA for a company. You try saying EBITDA to Alexa or any other such assistant in the market today, and you’ll most likely draw a blank,” Uliyar told IANS.

Oracle digital assistant uses AI for natural language processing and understanding. Pixabay

Earnings before interest, tax, depreciation and amortization (EBITDA) is a measure of a company’s operating performance.
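The standard way to compute the metric is to add the excluded items back to net income. The figures below are made up purely for illustration.

```python
# EBITDA computed from income-statement lines (illustrative numbers).

def ebitda(net_income, interest, taxes, depreciation, amortization):
    # Add back interest, tax, and the non-cash charges to net income.
    return net_income + interest + taxes + depreciation + amortization

value = ebitda(net_income=120.0, interest=10.0, taxes=30.0,
               depreciation=25.0, amortization=5.0)
```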

According to him, to serve customers with delightful experiences every single time, there has to be a lot of innovation happening — whether it’s mobile, chatbots, Blockchain, AI or AR/VR.

“With all these new realities, what enterprises really need is a platform that can pull that ‘holistic experience’ altogether. That’s sort of the topmost challenge that enterprise customers want to solve,” he noted.

Oracle Digital Assistant is very sophisticated: it uses deep learning, is based on a technology called sequence-to-sequence vectoring, and creates what the company calls logical forms of a statement.
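A "logical form" is a structured, machine-executable representation of a natural-language statement. A real sequence-to-sequence parser learns this mapping from data; the rule-based stand-in below is invented solely to show what the output of such a parser might look like.

```python
# Toy illustration of deep semantic parsing: map a question to a
# structured logical form. The rules and form schema are hypothetical.

def parse_to_logical_form(question):
    """Return a dict-shaped logical form for a financial question."""
    q = question.lower()
    if "ebitda" in q:
        return {"op": "report", "metric": "EBITDA"}
    if "revenue" in q:
        return {"op": "report", "metric": "revenue"}
    return {"op": "unknown"}
```

The point of the intermediate form is that a backend system can execute it directly, instead of re-interpreting the raw linguistic constructs of the question.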


“We call it a deep semantic parsing. It’s the underlying technology and is very different given the advancements of deep learning. We can do a much better job instead of understanding the linguistic constructs, versus in the past. This is quite a bit of advancement. We’re definitely very excited about pushing the boundaries,” said Uliyar. (IANS)