
Artificial Intelligence May Help Solve ‘Global Hunger’

Subsistence farmer Joice Chimedza harvests maize on her small plot in Norton, a farming area outside Zimbabwe's capital, Harare, May 10, 2016. VOA

Despite a global abundance of food, a United Nations report says 815 million people, 11 percent of the world’s population, went hungry in 2016. That number seems to be rising.

Poverty, however, is not the only reason people are experiencing food insecurity.

“Increasingly we’re also seeing hunger caused by the displacement related to conflict, natural disaster as well, but particularly there’s been an uptick in the number of people displaced in the world,” said Robert Opp, director of Innovation and Change Management at the United Nations World Food Program.

Humanitarian organizations are turning to new technologies such as AI, or artificial intelligence, to fight global food insecurity. Pixabay

“What AI offers us right now, is an ability to augment human capacity. So, we’re not talking about replacing human beings and things. We’re talking about doing more things and doing them better than we could by just human capacity alone,” Opp said.

Analyze data, get it to farmers

Artificial intelligence can analyze large amounts of data to locate areas affected by conflict and natural disasters and assist farmers in developing countries. The data can then be accessed by farmers from their smartphones.

“The average smartphone that exists in the world today is more powerful than the entire Apollo space program 50 years ago. So just imagine a farmer in Africa who has a smartphone has much more computing power than the entire Apollo space program,” said Pranav Khaitan, engineering lead at Google AI.

“When you take your spatial data and soil mapping data and use AI to do the analysis, you can send me the information. So in a nutshell, you can help me [know] when to plant, what to plant, how to plant,” said Uyi Stewart, director of Strategy, Data and Analytics in Global Development at the Bill and Melinda Gates Foundation.

“When you start combining technologies, AI, robotics, sensors, that’s when we see magic start to happen on farms for production, to increase crop yields,” said Zenia Tata, vice president for Global Impact Strategy at XPRIZE, an organization that creates incentivized competitions to develop innovative ideas and technologies that benefit humanity.

“It all comes down to developing these techniques and making it available to these farmers and people on the ground,” Khaitan said.

However, the developing world is often the last to get new technologies. Pixabay

Breaking down barriers

As Stewart said, “815 million people are hungry and I can bet you that nearly 814 million out of the 815 million do not have a smartphone.”

Even when the technology is available, other barriers still exist.

“A lot of these people that we talked about that are hungry, they don’t speak English, so when we get insights out of this technology, how are we going to pass it on to them?” Stewart said.

While it may take time for new technologies to reach the developing world, many hope such advances will ultimately trickle down to farmers in regions that face food insecurity. Pixabay

“You’ve invented the technology. The big investments have gone in. Now you’re modifying it, which brings the cost down as well,” said Teddy Bekele, vice president of Ag Technology at U.S.-based agribusiness and food company Land O’Lakes.

“So, I think three to four years maybe we’ll have some of the things we have here to be used there [in the developing world] as well,” Bekele predicted.

Those who work in humanitarian organizations said entrepreneurs must look outside their own countries to adapt new technologies to combat global hunger, or develop public-private models. Farmers will need the tools and training to harness the power of artificial intelligence to help feed the hungry in the developing world. (VOA)


AI to Recognize Individuals’ Emotions Using a Photographic Repository

Rana el Kaliouby, CEO of the Boston-based artificial intelligence firm Affectiva, is pictured in Boston, April 23, 2018. Affectiva builds face-scanning technology for detecting emotions, but its founders decline business opportunities that involve spying on people. VOA

When a CIA-backed venture capital fund took an interest in Rana el Kaliouby’s face-scanning technology for detecting emotions, the computer scientist and her colleagues did some soul-searching — and then turned down the money.

“We’re not interested in applications where you’re spying on people,” said el Kaliouby, the CEO and co-founder of the Boston startup Affectiva. The company has trained its artificial intelligence systems to recognize if individuals are happy or sad, tired or angry, using a photographic repository of more than 6 million faces.

Recent advances in AI-powered computer vision have accelerated the race for self-driving cars and powered the increasingly sophisticated photo-tagging features found on Facebook and Google. But as these prying AI “eyes” find new applications in store checkout lines, police body cameras and war zones, the tech companies developing them are struggling to balance business opportunities with difficult moral decisions that could turn off customers or their own workers.

El Kaliouby said it’s not hard to imagine using real-time face recognition to pick up on dishonesty — or, in the hands of an authoritarian regime, to monitor reaction to political speech in order to root out dissent. But the small firm, which spun off from a Massachusetts Institute of Technology research lab, has set limits on what it will do.

The company has shunned “any security, airport, even lie-detection stuff,” el Kaliouby said. Instead, Affectiva has partnered with automakers trying to help tired-looking drivers stay awake, and with consumer brands that want to know whether people respond to a product with joy or disgust.

Rana el Kaliouby, CEO of the Boston-based artificial intelligence firm Affectiva, demonstrates the company’s facial recognition technology, in Boston, April 23, 2018. VOA

New qualms

Such queasiness reflects new qualms about the capabilities and possible abuses of all-seeing, always-watching AI camera systems — even as authorities are growing more eager to use them.

In the immediate aftermath of Thursday’s deadly shooting at a newspaper in Annapolis, Maryland, police said they turned to face recognition to identify the uncooperative suspect. They did so by tapping a state database that includes mug shots of past arrestees and, more controversially, everyone who registered for a Maryland driver’s license.

Initial information from law enforcement authorities indicated that police had turned to facial recognition because the suspect had damaged his fingerprints in an apparent attempt to avoid identification. That report turned out to be incorrect, and police said they used facial recognition because of delays in getting fingerprint identification.

In June, Orlando International Airport announced plans to require face-identification scans of passengers on all arriving and departing international flights by the end of this year. Several other U.S. airports have already been using such scans for some departing international flights.

Chinese firms and municipalities are already using intelligent cameras to shame jaywalkers in real time and to surveil ethnic minorities, subjecting some to detention and political indoctrination. Closer to home, the overhead cameras and sensors in Amazon’s new cashier-less store in Seattle aim to make shoplifting obsolete by tracking every item shoppers pick up and put back down.

Concerns over the technology can shake even the largest tech firms. Google, for instance, recently said it will exit a defense contract after employees protested the military application of the company’s AI technology. The work involved computer analysis of drone video footage from Iraq and other conflict zones.

Google guidelines

Similar concerns about government contracts have stirred up internal discord at Amazon and Microsoft. Google has since published AI guidelines emphasizing uses that are “socially beneficial” and that avoid “unfair bias.”

Amazon, however, has so far deflected growing pressure from employees and privacy advocates to halt Rekognition, a powerful face-recognition tool it sells to police departments and other government agencies.

Saying no to some work, of course, usually means someone else will do it. The drone-footage project involving Google, dubbed Project Maven, aimed to speed the job of looking for “patterns of life, things that are suspicious, indications of potential attacks,” said Robert Work, a former top Pentagon official who launched the project in 2017.

While it hurts to lose Google because they are “very, very good at it,” Work said, other companies will continue those efforts.

Commercial and government interest in computer vision has exploded since breakthroughs earlier in this decade using a brain-like “neural network” to recognize objects in images. Training computers to identify cats in YouTube videos was an early challenge in 2012. Now, Google has a smartphone app that can tell you which breed.
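
As a rough illustration of the object recognition described above (this sketch is not from the article; the library, model choice and file name are assumptions), a pre-trained neural network can label a photo in a few lines of Python:

```python
# Illustrative sketch only: classify one image with a convolutional neural
# network pre-trained on ImageNet, using PyTorch/torchvision.
# Assumes torch, torchvision and Pillow are installed and "dog.jpg" exists locally.
import torch
from torchvision import models
from PIL import Image

# Load ResNet-50 with ImageNet weights (1,000 object categories,
# including many dog and cat breeds).
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

# Preprocess the photo the same way the network was trained.
preprocess = weights.transforms()
image = Image.open("dog.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

# Run the network and print the most likely category.
with torch.no_grad():
    probabilities = torch.softmax(model(batch)[0], dim=0)
top_prob, top_class = probabilities.max(dim=0)
print(weights.meta["categories"][top_class.item()], f"({top_prob.item():.1%})")
```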

A major research meeting — the annual Conference on Computer Vision and Pattern Recognition, held in Salt Lake City in June — has transformed from a sleepy academic gathering of “nerdy people” to a gold rush business expo attracting big companies and government agencies, said Michael Brown, a computer scientist at Toronto’s York University and a conference organizer.

Brown said researchers have been offered high-paying jobs on the spot. But few of the thousands of technical papers submitted to the meeting address broader public concerns about privacy, bias or other ethical dilemmas. “We’re probably not having as much discussion as we should,” he said.

Not for police, government

Startups are forging their own paths. Brian Brackeen, the CEO of Miami-based facial recognition software company Kairos, has set a blanket policy against selling the technology to law enforcement or for government surveillance, arguing in a recent essay that it “opens the door for gross misconduct by the morally corrupt.”

Boston-based startup Neurala, by contrast, is building software for Motorola that will help police-worn body cameras find a person in a crowd based on what they’re wearing and what they look like. CEO Max Versace said that “AI is a mirror of the society,” so the company chooses only principled partners.


“We are not part of that totalitarian, Orwellian scheme,” he said. (VOA)