Sunday January 21, 2018

With the aid of Twitter and AI, researchers to develop flood warning system

In a study, published in the journal Computers & Geosciences, the researchers showed how AI can be used to extract data from Twitter and crowdsourced information from mobile phone apps to build up hyper-resolution monitoring of urban flooding.

AI can play a key role in future flood warning and monitoring systems

London, Dec 26: Researchers are combining Twitter, citizen science and artificial intelligence (AI) techniques to develop an early-warning system for flood-prone communities in urban areas.


“By combining social media, citizen science and artificial intelligence in urban flooding research, we hope to generate accurate predictions and provide warnings days in advance,” said Roger Wang from University of Dundee in Britain.

Urban flooding is difficult to monitor due to complexities in data collection and processing.

This prevents detailed risk analysis, flooding control and the validation of numerical models.

The research team set about trying to solve this problem by exploring how the latest AI technology can be used to mine social media and apps for the data that users provide.

They found that social media and crowdsourcing can be used to complement datasets based on traditional remote sensing and witness reports.

Applying these methods in case studies, they found them to be genuinely informative and that AI can play a key role in future flood warning and monitoring systems.

“The present recording systems — remote satellite sensors, a local sensor network, witness statements and insurance reports — all have their disadvantages. Therefore, we were forced to think outside the box and one of the things that occurred to us was how Twitter users provide real-time commentary on floods,” Wang said.

“A tweet can be very informative in terms of flooding data. Key words were our first filter, then we used natural language processing to find out more about severity, location and other information,” Wang said.
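The article does not give the study's actual pipeline, so the following is only a minimal, illustrative sketch of the two-stage idea Wang describes: a keyword filter first, then a crude text pass that pulls out severity and location cues. The keyword list comes from the article; everything else (the severity scale, the "near <Place>" rule) is a hypothetical stand-in for real natural language processing.

```python
import re

# Keywords reported in the study; the rest of this pipeline is illustrative.
KEYWORDS = ("flood", "inundation", "dam", "dike", "levee")
SEVERITY = {"minor": 1, "moderate": 2, "severe": 3, "major": 3}

def relevant(tweet: str) -> bool:
    """First filter: keep tweets mentioning any flood keyword."""
    text = tweet.lower()
    return any(k in text for k in KEYWORDS)

def extract(tweet: str) -> dict:
    """Toy second pass: a severity word and a 'near <Place>' location cue."""
    text = tweet.lower()
    severity = max((v for k, v in SEVERITY.items() if k in text), default=0)
    m = re.search(r"near ([A-Z][a-z]+)", tweet)
    return {"severity": severity, "location": m.group(1) if m else None}

tweets = [
    "Severe flood near Dundee, the levee may not hold",
    "Lovely sunny day at the beach",
]
kept = [extract(t) for t in tweets if relevant(t)]
print(kept)  # [{'severity': 3, 'location': 'Dundee'}]
```

A real system would replace the severity dictionary and regex with trained language models, but the shape of the filter-then-extract pipeline is the same.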

The researchers applied computer vision techniques to the data collected from MyCoast, a crowdsourcing app, to automatically identify scenes of flooding from the images that users post.
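The study's computer vision models are not described in the article. As a deliberately crude stand-in, an automatic scene screen can be sketched as a function that flags images whose pixels are mostly water-like; this blue-dominance heuristic is purely illustrative and far simpler than the techniques the researchers actually applied to the MyCoast images.

```python
def looks_like_flood(pixels, threshold=0.4):
    """Crude flood-scene screen: pixels is a list of (r, g, b) tuples.

    Flags the image when a large share of pixels are blue-dominant,
    a toy proxy for standing water.
    """
    watery = sum(1 for r, g, b in pixels if b > r and b > g)
    return watery / len(pixels) >= threshold

# Synthetic "images" as flat pixel lists: one mostly water-coloured, one not.
flooded = [(40, 60, 120)] * 70 + [(120, 100, 80)] * 30
dry = [(120, 100, 80)] * 90 + [(40, 60, 120)] * 10

print(looks_like_flood(flooded), looks_like_flood(dry))  # True False
```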

“We found these big data-based flood monitoring approaches can definitely complement the existing means of data collection and demonstrate great promise for improving monitoring and warnings in future,” Wang said.

Twitter data was streamed over a one-month period in 2015, with the filtering keywords of “flood”, “inundation”, “dam”, “dike”, and “levee”. More than 7,500 tweets were analysed over this time.

“We have reached the point of 70 per cent accuracy and we are using the thousands of images available on MyCoast to further improve this,” Wang said.

Copyright 2017 NewsGram


A bot that can sketch like a human? Microsoft is developing one!

Each image contains details that are absent from the text descriptions, indicating that this AI contains an artificial imagination

Microsoft is developing a bot with Artificial Intelligence technology that can sketch like humans. Pixabay
  • Microsoft is developing a bot that can draw what you want using Artificial Intelligence technology
  • The pictures are created by the computer from scratch, pixel by pixel
  • The technology is currently imperfect, but the researchers hope to develop a model that helps humans and bots interact with each other

Microsoft is developing a bot that can draw what you want it to by leveraging Artificial Intelligence (AI) technology — programmed to pay close attention to individual words when generating images from caption-like text descriptions.

The technology, which the researchers simply call the drawing bot, can generate images of everything from ordinary pastoral scenes, such as grazing livestock, to the absurd, such as a floating double-decker bus.



For now, the technology is imperfect. Pixabay

“If you go to Bing and you search for a bird, you get a bird picture. But here, the pictures are created by the computer, pixel by pixel, from scratch. These birds may not exist in the real world — they are just an aspect of our computer’s imagination of birds,” said Xiaodong He from Microsoft’s research lab in a blog post late on Thursday.

According to results on an industry standard test, reported in a research paper posted on arXiv.org, the bot produced a nearly three-fold boost in image quality compared to the previous state-of-the-art technique for text-to-image generation.

The core of this bot is a technology known as a “Generative Adversarial Network” or GAN. Pixabay

The network consists of two Machine Learning models — one that generates images from text descriptions and another, known as a discriminator, that uses text descriptions to judge the authenticity of generated images.
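The article only names the two components, so a toy, pure-Python GAN on one-dimensional data can make the adversarial loop concrete. Here the "images" are just numbers drawn from a Gaussian, the generator learns a single shift parameter, and the discriminator is a logistic classifier; every detail is a simplification for illustration, not Microsoft's actual model.

```python
import math
import random

random.seed(0)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# Toy 1-D GAN: real data ~ N(4, 1); the generator g(z) = z + theta
# must learn the shift that makes its samples look real.
theta = 0.0       # generator parameter
w, b = 0.0, 0.0   # discriminator: D(x) = sigmoid(w*x + b)
lr, batch, steps = 0.05, 32, 2000

for _ in range(steps):
    real = [random.gauss(4.0, 1.0) for _ in range(batch)]
    fake = [random.gauss(0.0, 1.0) + theta for _ in range(batch)]

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    gw = gb = 0.0
    for xr, xf in zip(real, fake):
        gw += (1 - sigmoid(w * xr + b)) * xr - sigmoid(w * xf + b) * xf
        gb += (1 - sigmoid(w * xr + b)) - sigmoid(w * xf + b)
    w += lr * gw / batch
    b += lr * gb / batch

    # Generator step: move theta so that D(fake) rises toward 1.
    gt = sum((1 - sigmoid(w * xf + b)) * w for xf in fake)
    theta += lr * gt / batch

print(theta)  # the generator's shift should settle near the real mean (~4)
```

The same adversarial structure scales up to images: the generator becomes a network producing pixels from text and noise, and the discriminator judges pixels (and captions) instead of single numbers.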


The researchers said that text-to-image generation technology could find practical applications acting as a sort of sketch assistant to painters and interior designers or as a tool for voice-activated photo refinement.

“For AI and humans to live in the same world, they have to have a way to interact with each other. Language and vision are the two most important modalities for humans and machines to interact with each other,” the blog post explained. (IANS)