Sunday February 23, 2020

Many Security Flaws in Apple Safari Browser: Google

Google discovers security flaws in Apple Safari browser

Google security researchers discovered several security flaws in privacy software in Apple's Safari web browser. Pixabay

Google security researchers discovered several security flaws in privacy software in Apple's web browser Safari that could have helped third-party vendors track users' browsing habits.

According to a report in the Financial Times which cited a soon-to-be published paper from Google’s ‘Project Zero’ team, the vulnerabilities were found in an anti-tracking feature known as ‘Intelligent Tracking Prevention’.

After Google researchers disclosed the flaws to Apple in August last year, the Cupertino-based iPhone maker immediately patched them.

Apple launched the 'Intelligent Tracking Prevention' tool in 2017 to protect Safari users from being tracked around the web by advertisers and other third parties via cookies.

According to Google researchers, the vulnerabilities left personal data of Safari users exposed. They also found a flaw that allowed hackers to “create a persistent fingerprint that will follow the user around the web”.

This is the third time Google researchers have found flaws in the Apple ecosystem. Pixabay

Apple confirmed it patched the issues.

This is the third time Google researchers have found flaws in the Apple ecosystem.

In September, Apple slammed Google for creating a false impression that its iPhones were at risk of hacking owing to security flaws that allegedly let several malicious websites break into its iOS operating system.

Researchers on the 'Project Zero' team had discovered several hacked websites that allegedly used security flaws in iPhones to attack users who visited them, compromising their personal files, messages, and real-time location data.

In a statement, Apple said the so-called sophisticated attack was narrowly focused, not a broad-based exploit of iPhones “en masse” as described.

According to Google, the websites delivered their malware indiscriminately and were operational for years.

Apple said that it fixed the vulnerabilities in question, working extremely quickly to resolve the issue just 10 days after it learnt about them.

In July last year, the 'Project Zero' team found six critical flaws in Apple iMessage that could compromise a user's phone without any interaction from the user. These security vulnerabilities fell into the 'interactionless' category.


Two members of 'Project Zero', Google's elite bug-hunting team, published details and demo proof-of-concept code for five of the six 'interactionless' security bugs that impact the iOS operating system and can be exploited via the iMessage client. All six security bugs were patched in the iPhone maker's iOS 12.4 release. (IANS)


Here’s Why Information Overload May Not be Good

The study, published in the journal Cognitive Research: Principles and Implications, may help reframe the idea of how we use the mountain of data extracted from Artificial Intelligence (AI) and Machine Learning (ML) algorithms

In situations where people do not have background knowledge, they become more confident with the new information and make better decisions. Pixabay

Information overload may not always be a good thing. Researchers have found that in certain circumstances, having more background information may actually lead people to make worse decisions.

The study, published in the journal Cognitive Research: Principles and Implications, may help reframe the idea of how we use the mountain of data extracted from Artificial Intelligence (AI) and Machine Learning (ML) algorithms and how healthcare professionals and financial advisors present this new information to their patients and clients.

“Being accurate is not enough for information to be useful,” said Samantha Kleinberg, Associate Professor of Computer Science at Stevens Institute of Technology in New Jersey, US. “It's assumed that AI and Machine Learning will uncover great information, we'll give it to people and they'll make good decisions. However, the basic point of the paper is that there is a step missing: we need to help people build upon what they already know and understand how they will use the new information,” Kleinberg added.

For example, when doctors communicate information to patients, such as recommending blood pressure medication or explaining risk factors for diabetes, people may be thinking about the cost of medication or alternative ways to reach the same goal.

“So, if you don’t understand all these other beliefs, it’s really hard to treat them in an effective way,” said Kleinberg.

For the study, the researchers asked 4,000 participants a series of questions about topics with which they would have varying degrees of familiarity.

Some participants were asked to make decisions on scenarios they could not possibly be familiar with. Other participants were asked about more familiar topics, such as choosing how to reduce risk in a retirement portfolio or deciding between specific meals and activities to manage body weight.

The team compared whether people made better decisions with the new information or by relying on what they already knew. The researchers found that prior knowledge got in the way of choosing the best outcome. Kleinberg found the same to be true when she posed a problem about health and exercise as it relates to diabetes.

When people without diabetes read the problem, they treated the new information at face value, believed it and used it successfully. People with diabetes, however, started second-guessing what they knew and, as in the previous example, did much worse. “In situations where people do not have background knowledge, they become more confident with the new information and make better decisions,” said Kleinberg.


“So there’s a big difference in how we interpret the information we are given and how it affects our decision making when it relates to things we already know vs. when it’s in a new or unfamiliar setting,” she added.

Kleinberg cautioned that the point of the paper is not that information is bad. She argued only that, to help people make better decisions, it is important to understand what they already know and to tailor new information to that mental model.


Started in 1870, Stevens Institute of Technology is one of the oldest technological institutes in the US. (IANS)