Sydney, Nov 9: Facebook is testing a new method to stop revenge porn that requires you to send your own nudes to yourself via the social network’s Messenger app.
This strategy would help Facebook to create a digital fingerprint for the picture and mark it as non-consensual explicit media.
So if a relationship goes sour, you could take proactive steps to prevent any intimate images in the possession of your former love interest from being shared widely on Facebook or Instagram.
Facebook is partnering with an Australian government agency to prevent such image-based abuse, the Australian Broadcasting Corp reported.
If you’re worried your intimate photos will end up on Instagram or Facebook, you can get in contact with Australia’s e-Safety Commissioner, who might then tell you to send your own nudes to yourself on Messenger.
“It would be like sending yourself your image in email, but obviously this is a much safer, secure end-to-end way of sending the image without sending it through the ether,” e-Safety Commissioner Julie Inman Grant told ABC.
Once the image is sent via Messenger, Facebook would use technology to “hash” it, which means creating a digital fingerprint or link.
“They’re not storing the image, they’re storing the link and using artificial intelligence and other photo-matching technologies,” Grant said.
“So if somebody tried to upload that same image, which would have the same digital footprint or hash value, it will be prevented from being uploaded,” she explained.
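Facebook has not published the matching algorithm Grant describes, but the underlying idea — reducing an image to a compact fingerprint so that re-uploads of the same picture produce the same value — can be sketched with a generic perceptual "average hash". Everything below (the tiny pixel grids, the function names, the 2x2 size) is an illustrative assumption, not Facebook's implementation.

```python
# Illustrative sketch of perceptual "average hashing" — NOT Facebook's
# actual system. Each cell of a small grayscale grid becomes one bit of
# the fingerprint: 1 if it is brighter than the image's mean, else 0.
# A re-compressed copy of the same image yields the same (or a nearby)
# fingerprint, while a different image yields a distant one.

def average_hash(pixels):
    """pixels: 2-D list of grayscale values (0-255). Returns an int fingerprint."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for v in flat:
        bits = (bits << 1) | (1 if v > mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing fingerprint bits; small distance = likely same image."""
    return bin(a ^ b).count("1")

original  = [[10, 200], [220, 30]]
re_upload = [[12, 198], [221, 28]]   # slightly altered copy of the same image
unrelated = [[200, 10], [30, 220]]   # a different image

# The copy matches the original exactly; the unrelated image does not.
assert hamming_distance(average_hash(original), average_hash(re_upload)) == 0
assert hamming_distance(average_hash(original), average_hash(unrelated)) > 0
```

Note the contrast with a cryptographic hash: a perceptual fingerprint is designed so that visually similar images land close together, which is what lets a platform block re-uploads without storing the image itself.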
Australia is one of four countries taking part in the “industry-first” pilot, which uses “cutting-edge technology” to prevent the re-sharing of images on its platforms, Facebook’s Head of Global Safety Antigone Davis was quoted as saying.
“The safety and wellbeing of the Facebook community is our top priority,” Davis said. (IANS)
Facebook on Tuesday announced several new hires of top academics in the field of artificial intelligence, among them a roboticist known for her work at Disney making animated figures move in more human-like ways.
The hires raise a big question — why is Facebook interested in robots, anyway?
It’s not as though the social media giant is suddenly interested in developing mechanical friends, although it does use robotic arms in some of its data centers. The answer is even more central to the problem of how AI systems work today.
Today, most successful AI systems have to be exposed to millions of data points labeled by humans — like, say, photos of cats — before they can learn to recognize patterns that people take for granted. Similarly, game-playing bots like Google’s computerized Go master AlphaGo Zero require tens of thousands of trials to learn the best moves from their failures.
Creating systems that require less data and have more common sense is a key goal for making AI smarter in the future.
“Clearly we’re missing something in terms of how humans can learn so fast,” Yann LeCun, Facebook’s chief AI scientist, said in a call with reporters last week. “So far the best ideas have come out of robotics.”
Among the people Facebook is hiring are Jessica Hodgins, the former Disney researcher; and Abhinav Gupta, her colleague at Carnegie Mellon University who is known for using robot arms to learn how to grasp things.
Pieter Abbeel, a roboticist at the University of California, Berkeley, and co-founder of the robot-training company Covariant.ai, says the robotics field has benefits and constraints that push progress in AI. For one, the real world is naturally complex, so robotic AI systems have to deal with unexpected, rare events. And real-world constraints like a lack of time and the cost of keeping machinery moving push researchers to solve difficult problems.
“Robotics forces you into many reality checks,” Abbeel said. “How good are these algorithms, really?”
There are other, more abstract applications of lessons from robotics, says Berkeley AI professor Ken Goldberg. Just as a robot learning to escape a computerized maze does, other systems change their behavior depending on whether the actions they took brought them closer to a goal. Such systems could even be adapted to serve ads, he said — which just happens to be the mainstay of Facebook’s business.
“It’s not a static decision, it’s a dynamic one,” Goldberg said.
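The dynamic, feedback-driven decision-making Goldberg describes can be sketched with a minimal reward-driven learner. The epsilon-greedy "bandit" below is a generic textbook technique offered purely as an illustration — the reward rates, parameters, and function name are hypothetical, and nothing here reflects Facebook's actual ad-serving systems.

```python
import random

# Illustrative epsilon-greedy bandit: the learner adjusts which action it
# prefers based on whether past actions produced a reward, rather than
# following a fixed (static) rule. Purely a sketch, not a real ad system.

def run_bandit(reward_probs, steps=5000, epsilon=0.1, seed=0):
    """Learn which action pays off best from trial-and-error feedback.

    reward_probs: probability each action yields a reward.
    Returns the index of the action the learner estimates is best.
    """
    rng = random.Random(seed)
    counts = [0] * len(reward_probs)
    values = [0.0] * len(reward_probs)   # running estimate of each action's payoff
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(len(reward_probs))   # explore a random action
        else:
            a = values.index(max(values))          # exploit the best-looking one
        reward = 1.0 if rng.random() < reward_probs[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # update running mean
    return values.index(max(values))

# With these (made-up) payoff rates, the learner should settle on index 1.
best = run_bandit([0.2, 0.8, 0.5])
```

The key property is the one Goldberg names: each decision depends on accumulated feedback, so the system's behavior shifts as evidence about what works comes in.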
In an interview, Hodgins expressed an interest in a wide range of robotics research, everything from building a “compelling humanoid robot” to creating a mechanical servant to “load and unload my dishwasher.”
While she acknowledged the need to imbue robots with more common sense and have them learn with fewer examples, she also said her work in animation could lead to a new form of sharing — one in which AI-powered tools could help one show off a work of pottery in 3-D, for example.
“One thing I hope we’ll be able to do is explore AI support for creativity,” she said.
For Facebook, planting a flag in the hot field also allows it to be competitive for AI talent emerging from universities, Facebook’s LeCun said.
Bart Selman, a Cornell computer science professor and AI expert, said it’s a good idea for Facebook to broaden its reach in AI and take on projects that might not be directly related to the company’s business — something that’s a little more “exciting” — the way Google did with self-driving cars, for example.
This attracts not just attention, but students, too. The broader the research agenda, the better the labs become, he said. (VOA)