Facebook on Sunday said it will create an independent Institute for Ethics in Artificial Intelligence (AI) with an initial grant of $7.5 million.
In collaboration with the Technical University of Munich (TUM) in Germany, the institute will help advance the growing field of ethical research on new technology and will explore fundamental issues affecting the use and impact of AI.
“The institute will conduct independent, evidence-based research to provide insight and guidance for society, industry, legislators and decision-makers across the private and public sectors,” said Joaquin Quinonero Candela, Director of Applied Machine Learning at Facebook.
The institute's research will address issues such as safety, privacy, fairness and transparency.
“At the TUM Institute for Ethics in Artificial Intelligence, we will explore the ethical issues of AI and develop ethical guidelines for the responsible use of the technology in society and the economy,” said Professor Dr. Christoph Lütge.
The institute will also benefit from Germany’s position at the forefront of the conversation surrounding ethical frameworks for AI “including the creation of government-led ethical guidelines on autonomous driving” and its work with European institutions on these issues. (IANS)
A team of researchers at the Massachusetts Institute of Technology (MIT) has developed a predictive Artificial Intelligence (AI) system that can learn to see by touching and to feel by seeing.
While our sense of touch lets us feel the physical world, our eyes help us understand the full picture of these tactile signals.
Robots programmed to see or feel, however, cannot use these signals nearly as interchangeably.
The new AI-based system can create realistic tactile signals from visual inputs, and can predict which object, and which part of it, is being touched directly from those tactile inputs.
In the future, this could help robots combine vision and touch more harmoniously, especially for object recognition, grasping and scene understanding, and could support seamless human-robot integration in assistive or manufacturing settings.
“By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge,” said Yunzhu Li, PhD student and lead author from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
“By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings,” Li added.
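The article does not detail the model itself, but the two directions Li describes, imagining touch from sight and recognising objects from touch, map naturally onto a pair of neural networks. The following is a minimal, purely illustrative PyTorch sketch of that idea; the class names, layer sizes and 64×64 input resolution are assumptions chosen for clarity, not the MIT team's actual architecture.

```python
# Purely illustrative: two small networks standing in for the two
# prediction directions described above. All names and shapes here
# are assumptions; the article does not specify the real architecture.
import torch
import torch.nn as nn

class VisionToTouch(nn.Module):
    """Maps a camera image to a predicted tactile (GelSight-style) signal."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, image):
        # Encode the visual scene, then decode it into a tactile image.
        return self.decoder(self.encoder(image))

class TouchToObject(nn.Module):
    """Predicts which object is being touched from a tactile signal alone."""
    def __init__(self, num_objects=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_objects),
        )

    def forward(self, touch):
        return self.net(touch)

# Dummy tensors stand in for camera and sensor frames.
image = torch.randn(1, 3, 64, 64)                 # RGB view of the scene
predicted_touch = VisionToTouch()(image)          # "imagine" the touch
object_logits = TouchToObject()(predicted_touch)  # guess what is touched
```

In practice, models of this kind would be trained on paired camera and tactile recordings, such as those collected with the robot setup described next.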
The team used a KUKA robot arm with a special tactile sensor called GelSight, designed by another group at MIT.