Wednesday, June 19, 2019

Researchers Identify New Mechanism to Prevent Alzheimer’s

The team next plans to test this approach in additional animal studies and eventually in human trials using small molecule inhibitors targeting eEF2K

In Alzheimer’s disease, patients start losing memory. Pixabay

Researchers have identified a novel mechanism and a potential new therapeutic target for Alzheimer’s disease (AD), according to a new study in mice.

Alzheimer’s is characterised by profound memory loss and synaptic failure. Although the exact cause of the disease remains unclear, it is well established that maintaining memory and synaptic plasticity requires protein synthesis.

The function of a synapse is to transfer electrical activity (information) from one cell to another.

“Alzheimer’s is such a devastating disease and currently there is no cure or effective therapy for it,” said Tao Ma, Assistant Professor at Wake Forest School of Medicine in the US.

A lady suffering from Alzheimer’s. Flickr

“All completed clinical trials of new drugs have failed, so there is clearly a need for novel therapeutic targets for potential treatments.”

In the study, the team showed that AD-associated activation of a signaling molecule termed eEF2K leads to inhibition of protein synthesis.

Further, they wanted to determine whether suppression of eEF2K could improve protein synthesis capacity and consequently alleviate the cognitive and synaptic impairments associated with the disease.

They used a genetic approach to repress the activity of eEF2K in Alzheimer’s mouse models.

Alzheimer’s disease patient Isidora Tomaz, 82, sits in an armchair in her house in Lisbon, Portugal. VOA

The findings, published in the Journal of Clinical Investigation, showed that genetic suppression of eEF2K prevented memory loss in those animal models and significantly improved synaptic function.


“These findings are encouraging and provide a new pathway for further research,” said Ma.

The team next plans to test this approach in additional animal studies and eventually in human trials using small molecule inhibitors targeting eEF2K. (IANS)


Researchers Teaching Artificial Intelligence to Connect Senses Like Vision and Touch

The new AI-based system can create realistic tactile signals from visual inputs


A team of researchers at the Massachusetts Institute of Technology (MIT) has developed a predictive Artificial Intelligence (AI) system that can learn to see by touching and to feel by seeing.

While our sense of touch gives us capabilities to feel the physical world, our eyes help us understand the full picture of these tactile signals.

Robots that have been programmed to see or feel, however, cannot use these signals quite as interchangeably.

The new AI-based system can create realistic tactile signals from visual inputs, and predict which object, and what part of it, is being touched directly from those tactile inputs.
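To make the cross-modal idea concrete, here is a minimal sketch in Python of a vision-to-touch predictor trained on paired images. This is an illustration only, not the MIT team’s published architecture; the class name VisionToTouch, the network shapes, and the pixel-wise L1 loss are all assumptions made for this sketch.

```python
# Minimal sketch of cross-modal prediction (vision -> touch), NOT the MIT
# team's actual model. Names and shapes are hypothetical stand-ins.
import torch
import torch.nn as nn

class VisionToTouch(nn.Module):
    """Maps a visual frame (3-channel image) to a predicted tactile image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),   # 32 -> 64
            nn.Tanh(),  # predicted tactile image, values in [-1, 1]
        )

    def forward(self, visual_frame):
        return self.decoder(self.encoder(visual_frame))

# One training step against a paired (visual, tactile) batch, VisGel-style.
model = VisionToTouch()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
visual_batch = torch.randn(8, 3, 64, 64)   # stand-in for real camera frames
tactile_batch = torch.randn(8, 3, 64, 64)  # stand-in for real GelSight frames
optimizer.zero_grad()
loss = nn.functional.l1_loss(model(visual_batch), tactile_batch)
loss.backward()
optimizer.step()
```

The reverse direction described in the article, predicting which object and part is being touched from tactile input alone, would be a separate model trained on the same paired data.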


In the future, this could help with a more harmonious relationship between vision and robotics, especially for object recognition, grasping, better scene understanding, and seamless human-robot integration in assistive or manufacturing settings.

“By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge,” said Yunzhu Li, a PhD student and lead author from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

“By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings,” Li added.

The team used a KUKA robot arm with a special tactile sensor called GelSight, designed by another group at MIT.


Using a simple web camera, the team recorded nearly 200 objects, such as tools, household products, and fabrics, being touched more than 12,000 times.

Breaking those 12,000 video clips down into static frames, the team compiled “VisGel,” a dataset of more than three million visual/tactile-paired images.
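As an illustration of that compilation step, the sketch below pairs frames from two synchronized recordings of each touch, one from the web camera and one from the GelSight sensor. The directory layout (recordings/&lt;clip&gt;/camera.mp4 and gelsight.mp4) and the helper extract_paired_frames are hypothetical, invented for this sketch.

```python
# Hedged sketch of assembling visual/tactile frame pairs from recorded
# touches, in the spirit of the VisGel dataset. File layout is assumed.
import os
import cv2  # OpenCV, used here to decode video frames

def extract_paired_frames(camera_video_path, gelsight_video_path):
    """Yield (visual_frame, tactile_frame) pairs from two synchronized videos."""
    cam = cv2.VideoCapture(camera_video_path)
    gel = cv2.VideoCapture(gelsight_video_path)
    while True:
        ok_cam, visual_frame = cam.read()
        ok_gel, tactile_frame = gel.read()
        if not (ok_cam and ok_gel):  # stop at the end of the shorter recording
            break
        yield visual_frame, tactile_frame
    cam.release()
    gel.release()

# Walk a hypothetical directory of recorded touch clips and count the pairs,
# mirroring how thousands of clips become millions of paired static frames.
total_pairs = 0
for clip in sorted(os.listdir("recordings")):
    clip_dir = os.path.join("recordings", clip)
    pairs = extract_paired_frames(
        os.path.join(clip_dir, "camera.mp4"),
        os.path.join(clip_dir, "gelsight.mp4"),
    )
    total_pairs += sum(1 for _ in pairs)
print(f"Compiled {total_pairs} visual/tactile-paired frames")
```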

“Bringing these two senses (vision and touch) together could empower the robot and reduce the data we might need for tasks involving manipulating and grasping objects,” said Li.

The current dataset only has examples of interactions in a controlled environment.


The team hopes to improve on this by collecting data in more unstructured areas, or by using a new MIT-designed tactile glove, to increase the size and diversity of the dataset.

“This is the first method that can convincingly translate between visual and touch signals,” said Andrew Owens, a post-doc at the University of California at Berkeley.


The team is set to present the findings next week at the “Conference on Computer Vision and Pattern Recognition” in Long Beach, California. (IANS)