
Microsoft’s Cortana and Adobe join hands to provide Artificial Intelligence (AI)-based services

The two tech giants are working on standard data models and on sharing core libraries between Adobe's Sensei and Microsoft's Cortana, both based on AI

Artificial intelligence. (Representational image: Wikimedia)

Las Vegas, March 23, 2017: Adobe and Microsoft are jointly working on artificial intelligence (AI) to offer better products and provide customers more automated, intelligence-based experiences, a top Adobe official said here.

Brad Rencher, executive vice-president and general manager, marketing, of Adobe, said that the two tech giants were working on standard data models and sharing of core libraries between Adobe’s Sensei and Microsoft’s Cortana, both based on AI.

Cortana is a digital assistant that can verbally answer search queries, while Sensei – a set of intelligent services from Adobe – ties together the company’s cloud-based advertising, marketing and analytics products with the backing of its creative and document tools.

Rencher, who was speaking to a group of journalists at Adobe’s annual summit here, said that the joint research and development would combine Sensei’s domain-specific capabilities with Cortana’s broader core data platform to build a combined service.

Adobe products can now feed data from Microsoft Dynamics 365, Microsoft Power BI and Microsoft Azure into Sensei for intelligent machine learning.

Sensei’s capabilities will also soon be integrated into Microsoft tools.

Rencher, however, said no discussion had taken place on how to monetise the collaboration.


Talking of Adobe’s presence in India, Rencher said it was the company’s fastest growing market and that a substantial share of Adobe’s research, including work on Sensei, takes place there.

Rencher also said that large Indian companies are rapidly adopting Adobe’s products and Cloud offerings.

“Reliance Industries was looking at how to integrate data across all its various divisions, and Adobe had helped a very old newspaper, Malayala Manorama, completely digitise its functions across the board,” noted Rencher.

Despite the enormous amount of research taking place on AI, Rencher said he did not believe it could replace the creative side of human beings.

“What AI can do is reduce the time taken in intelligent data crunching and sometimes understanding what went wrong very quickly,” Rencher added.


“By cutting six months of manual research to, say, two minutes, it adds a huge amount of strength to the creative aspects of human beings,” he noted. (IANS)


Researchers Teaching Artificial Intelligence to Connect Senses Like Vision and Touch

The new AI-based system can create realistic tactile signals from visual inputs


A team of researchers at the Massachusetts Institute of Technology (MIT) has developed a predictive Artificial Intelligence (AI) system that can learn to see by touching and to feel by seeing.

While our sense of touch lets us feel the physical world, our eyes help us understand the full picture of these tactile signals.

Robots that have been programmed to see or feel, however, cannot use these signals quite as interchangeably.

The new AI-based system can create realistic tactile signals from visual inputs, and predict which object and what part is being touched directly from those tactile inputs.
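To make the two directions concrete: the setup can be pictured as a pair of small networks, one mapping a camera frame to a predicted tactile signal, the other mapping a tactile signal to an object (and touched-part) label. The Python sketch below is purely illustrative; the network design, tensor shapes and class count are assumptions, since this report does not describe the MIT team's actual architecture.

# Illustrative sketch of the two cross-modal directions described above.
# The architecture, shapes and class count are assumptions, not the
# MIT team's published model.
import torch
import torch.nn as nn

class VisionToTouch(nn.Module):
    """Predict a tactile image (e.g. a GelSight frame) from a camera frame."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, rgb):  # rgb: (batch, 3, 64, 64)
        return self.decoder(self.encoder(rgb))

class TouchToObject(nn.Module):
    """Classify which object (and part) a tactile image was captured from."""
    def __init__(self, num_classes=200):  # ~200 objects were recorded
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, num_classes),
        )

    def forward(self, touch):  # touch: (batch, 3, 64, 64)
        return self.net(touch)

# Smoke test with random stand-ins for paired visual/tactile frames.
rgb = torch.rand(2, 3, 64, 64)
touch_pred = VisionToTouch()(rgb)     # (2, 3, 64, 64) predicted tactile signal
logits = TouchToObject()(touch_pred)  # (2, 200) object scores
print(touch_pred.shape, logits.shape)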


In the future, this could help create a more harmonious relationship between vision and touch in robotics, especially for object recognition, grasping and scene understanding, and could aid seamless human-robot integration in assistive or manufacturing settings.

“By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge,” said Yunzhu Li, a PhD student and lead author from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

“By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings,” Li added.

The team used a KUKA robot arm with a special tactile sensor called GelSight, designed by another group at MIT.


Using a simple web camera, the team recorded nearly 200 objects, such as tools, household products, fabrics, and more, being touched more than 12,000 times.

Breaking those 12,000 video clips down into static frames, the team compiled “VisGel,” a dataset of more than three million visual/tactile-paired images.
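As a rough sketch of how such a paired dataset could be assembled from the recordings described above, the hypothetical Python snippet below decodes each touch clip into static frames and writes out matched visual/tactile pairs; the directory layout, filenames and helper functions are illustrative assumptions, not MIT’s actual tooling.

# Hypothetical sketch of pairing camera and tactile-sensor frames from one
# recorded touch, in the spirit of the VisGel construction described above.
import os
import cv2  # OpenCV: pip install opencv-python

def extract_frames(video_path):
    """Decode a clip into a list of static frames (BGR arrays)."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames

def build_pairs(camera_clip, gelsight_clip, out_dir, clip_id):
    """Pair the i-th camera frame with the i-th tactile frame of one touch."""
    os.makedirs(out_dir, exist_ok=True)
    visual = extract_frames(camera_clip)
    tactile = extract_frames(gelsight_clip)
    for i, (v, t) in enumerate(zip(visual, tactile)):  # truncate to shorter stream
        cv2.imwrite(os.path.join(out_dir, f"{clip_id}_{i:04d}_vis.png"), v)
        cv2.imwrite(os.path.join(out_dir, f"{clip_id}_{i:04d}_touch.png"), t)

# Example usage (paths are placeholders):
# build_pairs("clips/hammer_0001_cam.mp4", "clips/hammer_0001_gel.mp4",
#             "visgel_pairs", "hammer_0001")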

“Bringing these two senses (vision and touch) together could empower the robot and reduce the data we might need for tasks involving manipulating and grasping objects,” said Li.

The current dataset only has examples of interactions in a controlled environment.


The team hopes to improve this by collecting data in more unstructured areas, or by using a new MIT-designed tactile glove, to better increase the size and diversity of the dataset.

“This is the first method that can convincingly translate between visual and touch signals,” said Andrew Owens, a post-doc at the University of California at Berkeley.


The team is set to present the findings next week at the Conference on Computer Vision and Pattern Recognition in Long Beach, California. (IANS)