
Posts Tagged ‘security’


SECURITY NEWS THIS WEEK: MAYBE GO AHEAD AND MAKE YOUR VENMO PRIVATE

July 25, 2018

THIS WEEK STARTED with a controversial, widely derided meeting between President Trump and Russian leader Vladimir Putin, and ended with… an invite for round two! And yes, all manner of craziness managed to happen in between.

That includes yet more denials on Trump’s part that Russia interfered—and continues to—with US democracy, a stance that has serious repercussions, however many times he walks it back. The Putin press conference performance also prompted concern across the aisle, as senators Marco Rubio and Mark Warner cast it as a major setback in efforts to safeguard the election. For what it’s worth, here’s what special counsel Robert Mueller’s been up to lately, and where he’ll likely go next.

The week wasn’t a total Trumpapalooza. RealNetworks offered a new facial recognition tool to schools for free, introducing a host of privacy-related concerns. And a company called Elucd is helping police better gauge how their precincts feel about them by pushing surveys out through apps.

Good news could be found as well! We talked to the Google engineers who built Safe Browsing, a suite of technologies that underpin security for a huge amount of the modern web. We profiled Jonathan Albright, the academic who has shined the brightest spotlight on Russian influence campaigns in the 2016 election and beyond. And we took a look at two tools Amazon has tested that could help its leaky cloud problem.

There’s more! As always, we’ve rounded up all the news we didn’t break or cover in depth this week. Click on the headlines to read the full stories. And stay safe out there.

Venmo’s Public Defaults Start to Cause Problems
Privacy advocate and designer Hang Do Thi Duc this week brought attention to payment app Venmo’s lack of built-in privacy. Her site, Public by Default, taps into Venmo’s API to show the latest transactions taking place on the platform. In fact, the nearly 208 million public Venmo transactions that took place in 2017 can all be viewed at this URL. But while Public by Default explores the inherent privacy issues with Venmo’s opt-in privacy in largely anonymized fashion, a bot emerged Thursday that tweets the usernames and photos of any users who appear to be buying drugs. Not ideal!
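Public by Default works by reading the feed Venmo exposed without authentication. A rough, self-contained sketch of the kind of flattening such a site does is below; the field names and sample data are assumptions reconstructed for illustration, not Venmo's actual 2018 schema:

```python
import json

# Hypothetical excerpt of a public-feed response; the field names
# ("data", "actor", "transactions", etc.) are assumptions for illustration.
SAMPLE_FEED = json.loads("""
{
  "data": [
    {"actor": {"username": "alice"},
     "transactions": [{"target": {"username": "bob"}}],
     "message": "pizza night",
     "created_time": "2018-07-20T18:03:00Z"}
  ]
}
""")

def summarize_public_feed(feed):
    """Flatten public transactions into (payer, payee, note) tuples."""
    rows = []
    for item in feed.get("data", []):
        payer = item.get("actor", {}).get("username")
        for tx in item.get("transactions", []):
            payee = tx.get("target", {}).get("username")
            rows.append((payer, payee, item.get("message")))
    return rows

print(summarize_public_feed(SAMPLE_FEED))
```

The point of the sketch is how little work is involved: with an unauthenticated feed, a few lines of parsing turn raw API output into exactly the kind of browsable ledger Public by Default built.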

Ideally, Venmo would go ahead and make transactions private by default. But because it’s structured as something of a social network—peeping other people’s emoji transaction descriptions is part of the appeal—that’s unfortunately unlikely. Instead, to better protect yourself, open the app, tap the hamburger menu in the upper left corner, tap Privacy, and select Private. You’re welcome!

The DOJ Will Make Foreign Interference Public
In a departure from current policy, deputy attorney general Rod Rosenstein Thursday said that the government will let American groups and individuals know when they are the subject of an effort to subvert US democracy. The Obama administration notably didn’t do so in 2016, fearing that going public with Russia’s actions would appear politically motivated. It’s unclear exactly how the new policy will play out in practice, given that those sorts of disclosures will require a “high confidence” in attribution—tricky, especially in the digital sphere—and that the DOJ presumably won’t make any disclosures that would threaten ongoing investigations. Still, it would at least presumably prevent the current administration from trying to downplay or cover up any intrusions in the 2018 midterms and 2020 presidential campaigns.

Ransomware Attacks Plague Medical Companies
A pair of high-profile attacks hit sensitive health care targets this week. Ontario-based CarePartners got hit with ransomware that locked out medical histories and contact info for as many as tens of thousands of patients, and apparently credit card numbers and other sensitive information as well. And the same SamSam malware that hobbled Atlanta struck LabCorp, a major lab services provider. Hackers apparently demanded $52,500 to free up the affected machines, but LabCorp appears inclined to simply replace them instead. Either way, it’s a good reminder that ransomware targets hospitals and other health care organizations disproportionately, precisely because the stakes are so much higher.

A Robocall Firm Exposed Data of Thousands of US Voters
As if the scourge of robocalls weren’t bad enough already, a company called Robocent left hundreds of thousands of voter records, spread across 2,600 files, exposed on the open web. The data appears to have comprised mostly addresses and demographic information, but if nothing else it’s a reminder that the cloud needs better tools to keep this sort of thing from happening basically every week.



SECURITY NEWS THIS WEEK: CARRIERS STOP SELLING LOCATION DATA IN A RARE PRIVACY WIN

June 26, 2018

WHAT’S THAT? A week with nearly as much good news as bad in the world of privacy and security? It’s true! Especially the privacy part.

On Friday, the Supreme Court issued a hotly anticipated ruling in Carpenter v. United States, establishing that the government will need to get a warrant if it wants to track your location with cell sites. Meanwhile in California, it looks like residents might soon benefit from a privacy law that grants unprecedented power—in the US, anyway—over what data companies collect and what they do with it. And while this isn’t privacy related, strictly speaking, Apple’s new partnership with startup RapidSOS will push iPhone owners’ locations to dispatchers during 911 calls, saving first responders valuable minutes and almost certainly saving lives.

It’s not all sunshine and lollipops, of course. The same hacker group that meddled with the PyeongChang Olympics appears to be back, this time swinging at biochem labs in Europe. The hacking threat from China has escalated in step with trade war rhetoric. Pretty much every streaming device is vulnerable to the same type of DNS rebinding attack. Iran’s ban of encrypted messaging app Telegram has had a serious, layered impact on the country’s citizens. And deep fakes will make the already complicated issue of Twitter mob justice even more so.

But wait, there’s more! As always, we’ve rounded up all the news we didn’t break or cover in depth this week. Click on the headlines to read the full stories. And stay safe out there.

The Major Mobile Carriers Stop Selling Location Information
After a public blow-up around the sharing of location data with third parties—and pressure from senator Ron Wyden—all four major US carriers have pledged to stop the practice. The change won’t happen overnight; all of these companies have long-term contracts to unwind. But it’s a rare bit of good privacy news at a time when that has seemed increasingly hard to come by.

Alleged Vault 7 Leaker Indicted
Former CIA employee Joshua Adam Schulte was indicted this week; authorities allege that he was responsible for the devastating Vault 7 leak that revealed many of the agency’s hacking secrets. Schulte had previously been held on child pornography charges. The indictment also alleges that Schulte had surprisingly lax security practices for a CIA vet; he apparently reused a less secure password from his cell phone to protect the encrypted materials on his computer as well. He faces up to 135 years in prison.

VirusTotal Monitor Should Help Keep Apps From Getting Flagged as Malware
In 2012, Google acquired VirusTotal, a site that scans files and URLs for malware. This week, it announced a new spinoff product, VirusTotal Monitor, that will help app developers avoid being accidentally flagged as malware. VirusTotal already aggregates what over 70 antivirus vendors consider malware, so devs can now compare their apps against that list for a little peace of mind.
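One way a developer checks a build against VirusTotal's aggregated verdicts is a hash lookup: compute the artifact's SHA-256 and query the public v3 API by that digest. The `files/{hash}` URL shape is VirusTotal's documented v3 endpoint; the sample bytes and helper name below are illustrative:

```python
import hashlib

API_BASE = "https://www.virustotal.com/api/v3"

def lookup_url_for(binary: bytes) -> str:
    """Build the VirusTotal v3 lookup URL for a build artifact.

    A real request also needs an `x-apikey` header; this sketch only
    shows how the hash-addressed lookup is formed.
    """
    sha256 = hashlib.sha256(binary).hexdigest()
    return f"{API_BASE}/files/{sha256}"

# Illustrative artifact, not a real app binary.
print(lookup_url_for(b"my-app-release-build"))
```

The response for a known file includes per-vendor verdicts, which is the list a developer would diff against after each release to catch a false positive early.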

Google Makes It Easier to Check Your Privacy and Security
While not exactly offering you higher levels of security, the new Google Account panel on Android—to be followed later on iOS and desktop—does make it easier to see exactly what your settings are, along with a “privacy checkup” and “security setup” that nudge you toward a more locked-down online experience. It also introduces a search function to make it easier to find whatever specific aspect of your account you want to vet.



Tech Giant Intel Partners With DApp Platform Enigma on Privacy Research

June 21, 2018

Decentralized application (DApp) platform Enigma will partner with Intel on privacy research as it prepares to launch its blockchain testnet, the two companies confirmed June 20.

Enigma, which completed a $45 million initial coin offering (ICO) in September of last year, said the collaboration would focus on “research and development efforts to advance development of privacy preserving computation technologies.”

The platform aims to provide the first environment for scalable end-to-end DApps using bespoke privacy technology to protect data “while still allowing computation” on top of it.

“Enigma is excited to continue collaborating with Intel to advance our protocol and privacy technologies for public blockchains, as well as expanding and strengthening our working relationship,” the post adds, hinting further partnership details would follow.

Ahead of Intel promoting Enigma’s privacy developments at the Cyber Week 2018 event in Tel Aviv this week, Rick Echevarria, vice president of Intel’s software and services group and general manager of its platforms security division, appeared likewise upbeat about the prospect of improving that area of blockchain.

“Security is pivotal to our company’s strategy and a fundamental underpinning for all workloads, especially those that are as data-centric as AI and blockchain,” he wrote in a separate post from Intel, continuing:

“We will continue to innovate and make our silicon an active participant in the threat defense lifecycle.”

The move marks a further step in Intel’s blockchain involvement, which already spans multiple industries, including healthcare, and partnerships such as one with virtual currency hardware firm Ledger.



Google and Facebook are watching our every move online. It’s time to make them stop

January 31, 2018

To make any real progress in advancing data privacy this year, we have to start doing something about Google and Facebook. Not doing so would be like trying to lose weight without changing your diet. Simply ineffective.

The impact these two companies have on our privacy cannot be overstated. You may know that hidden trackers lurk on most websites you visit, soaking up your personal information.

What you may not realize, though, is 76 percent of websites now contain hidden Google trackers, and 24 percent have hidden Facebook trackers, according to the Princeton Web Transparency & Accountability Project. The next highest is Twitter with 12 percent. It is likely that Google or Facebook is watching you on many of the sites you visit, in addition to tracking you when you use their products.
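The Princeton measurements come from crawling sites and looking for third-party tracking scripts. A crude sketch of that idea, scanning markup for script references to known tracker hosts, is below; the two domains are real tracker endpoints, but the sample page and the regex-based approach are simplifications for illustration:

```python
import re

# Two widely used tracker hosts (real domains; the list is illustrative,
# not exhaustive -- the Princeton crawl covers far more).
TRACKER_HOSTS = ("google-analytics.com", "connect.facebook.net")

def find_trackers(html: str) -> set:
    """Return the known tracker hosts referenced by script tags in `html`."""
    srcs = re.findall(r'<script[^>]+src="([^"]+)"', html)
    return {host for src in srcs for host in TRACKER_HOSTS if host in src}

# Fabricated page for illustration.
page = '''<html><head>
<script src="https://www.google-analytics.com/analytics.js"></script>
<script src="https://example.com/app.js"></script>
</head></html>'''

print(find_trackers(page))
```

Real measurement tools parse the DOM and watch network requests rather than grepping HTML, but the underlying question is the same: which third-party hosts does this page load scripts from?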

As a result, these two companies have amassed huge data profiles on each person, which can include your interests, purchases, search, browsing and location history, and much more. They then make your sensitive data profile available for invasive targeted advertising that can follow you around the Internet.

This advertising system is designed to enable hyper-targeting, which has many unintended consequences, such as the ability for bad actors to use the system to influence the most susceptible or to exclude groups in a way that facilitates discrimination.

“These two companies have amassed huge data profiles on each person, which can include your interests, purchases, search, browsing and location history, and much more.”
Because of their entrenched positions in a wide array of Internet services, each collecting personal information that together combine into these massive digital profiles, Google and Facebook can offer hyper-targeting much better than the competition.

As a result, they now make up 63 percent of all digital advertising, and accounted for 74 percent of this market’s growth in 2017, according to eMarketer. Together they form a tight digital advertising duopoly, showing no signs of abating.

Google and Facebook also use your data as input for increasingly sophisticated AI algorithms that put you in a filter bubble — an alternate digital universe that controls what you see in their products, based on what their algorithms think you are most likely to click on.

These echo chambers distort people’s reality, creating a myriad of unintended consequences such as increasing societal polarization. On their unending march to profit from more and more personal information, Google and Facebook have shown little regard for all the negative consequences of their runaway algorithms.

So how do we move forward from here?

Don’t be fooled by claims of self-regulation, as any useful long-term reforms of Google and Facebook’s data privacy practices fundamentally oppose their core business models: hyper-targeted advertising based on more and more intrusive personal surveillance. Change must come from the outside.

Unfortunately, we’ve seen relatively little from Washington. Congress and federal agencies need to take a fresh look at what can be done to curb these data monopolies. They first need to demand more algorithmic and privacy policy transparency, so people can truly understand the extent of how their personal information is being collected, processed and used by these companies. Only then can informed consent be possible.

They also need to legislate that people own their own data, enabling real opt-outs. Finally, they need to restrict how data can be combined including being more aggressive at blocking acquisitions that further consolidate data power, which will pave the way for more competition in digital advertising.

Until we see such meaningful changes, consumers should vote with their feet. DuckDuckGo found that about a quarter of American adults are already taking significant actions to take back their privacy. Helping in this effort are seamless browser add-ons that will block Google and Facebook’s hidden trackers across the Internet, as well as more private alternatives to their core services. I can say from my own experience, you can indeed live Google and Facebook free.

If we do nothing about Google and Facebook, we will get more of the same: more hyper-targeting, more algorithmic bias, less competition and the further erosion of collateral industries, like media. Enough is enough.

The complete loss of personal privacy in the Internet age is not inevitable. Through thoughtful regulation and increased consumer choice, we can choose a brighter path. I hope to look back at 2018 as a turning point in data privacy, where we awoke to the unacceptable implications of two companies controlling so much of our digital future.

Commentary by Gabriel Weinberg, CEO and founder of DuckDuckGo, which makes online privacy tools, including an alternative search engine to Google. Follow him on Twitter @yegg .

For more insight from CNBC contributors, follow @CNBCopinion on Twitter.



Apple responds to Senator Franken’s Face ID privacy concerns

October 17, 2017

Apple has now responded to a letter from U.S. Senator Al Franken last month in which he asked the company to provide more information about the incoming Face ID authentication technology which is baked into its top-of-the-range iPhone X, due to go on sale early next month.

As we’ve previously reported, Face ID raises a range of security and privacy concerns because it encourages smartphone consumers to use a facial biometric for authenticating their identity — and specifically a sophisticated, full three-dimensional model of their face.

And while the tech is limited to one flagship iPhone for now, with other new iPhones retaining the physical home button plus fingerprint Touch ID biometric combo that Apple launched in 2013, that’s likely to change in future.

After all, Touch ID arrived on a single flagship iPhone before migrating onto additional Apple hardware, including the iPad and Mac. So Face ID will surely also spread to other Apple devices in the coming years.

That means if you’re an iOS user it may be difficult to avoid the tech being baked into your devices. So the Senator is right to be asking questions on behalf of consumers. Even if most of what he’s asking has already been publicly addressed by Apple.

Last month Franken flagged what he dubbed “substantial questions” about how “Face ID will impact iPhone users’ privacy and security, and whether the technology will perform equally well on different groups of people”, asking Apple to provide “clarity to the millions of Americans who use your products”, to explain how it had weighed privacy and security issues pertaining to the tech itself, and to detail any additional steps taken to protect users.

Here’s the full list of 10 questions the Senator put to the company:

1. Apple has stated that all faceprint data will be stored locally on an individual’s device as opposed to being sent to the cloud.

a. Is it currently possible – either remotely or through physical access to the device – for either Apple or a third party to extract and obtain usable faceprint data from the iPhone X?

b. Is there any foreseeable reason why Apple would decide to begin storing such data remotely?

2. Apple has stated that it used more than one billion images in developing the Face ID algorithm. Where did these one billion face images come from?

3. What steps did Apple take to ensure its system was trained on a diverse set of faces, in terms of race, gender, and age? How is Apple protecting against racial, gender, or age bias in Face ID?

4. In the unveiling of the iPhone X, Apple made numerous assurances about the accuracy and sophistication of Face ID. Please describe again all the steps that Apple has taken to ensure that Face ID can distinguish an individual’s face from a photograph or mask, for example.

5. Apple has stated that it has no plans to allow any third party applications access to the Face ID system or its faceprint data. Can Apple assure its users that it will never share faceprint data, along with the tools or other information necessary to extract the data, with any commercial third party?

6. Can Apple confirm that it currently has no plans to use faceprint data for any purpose other than the operation of Face ID?

7. Should Apple eventually determine that there would be reason to either begin storing faceprint data remotely or use the data for a purpose other than the operation of Face ID, what steps will it take to ensure users are meaningfully informed and in control of their data?

8. In order for Face ID to function and unlock the device, is the facial recognition system “always on,” meaning does Face ID perpetually search for a face to recognize? If so:

a. Will Apple retain, even if only locally, the raw photos of faces that are used to unlock (or attempt to unlock) the device?

b. Will Apple retain, even if only locally, the faceprints of individuals other than the owner of the device?

9. What safeguards has Apple implemented to prevent the unlocking of the iPhone X when an individual other than the owner of the device holds it up to the owner’s face?

10. How will Apple respond to law enforcement requests to access Apple’s faceprint data or the Face ID system itself?

In its response letter, Apple first points the Senator to existing public info — noting it has published a Face ID security white paper and a Knowledge Base article to “explain how we protect our customers’ privacy and keep their data secure”. It adds that this “detailed information” provides answers to “all of the questions you raise”.

But also goes on to summarize how Face ID facial biometrics are stored, writing: “Face ID data, including mathematical representations of your face, is encrypted and only available to the Secure Enclave. This data never leaves the device. It is not sent to Apple, nor is it included in device backups. Face images captured during normal unlock operations aren’t saved, but are instead immediately discarded once the mathematical representation is calculated for comparison to the enrolled Face ID data.”

It further specifies in the letter that: “Face ID confirms attention by detecting the direction of your gaze, then uses neural networks for matching and anti-spoofing so you can unlock your phone with a glance.”

It also reiterates its prior claim that the chance of a random person being able to unlock your phone because their face fooled Face ID is approximately 1 in 1,000,000 (vs 1 in 50,000 for the Touch ID tech). After five unsuccessful match attempts a passcode will be required to unlock the device, it further notes.
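Those two figures invite a quick back-of-the-envelope check: combining the 1-in-1,000,000 false-match rate with the five-attempt passcode lockout bounds a stranger's total odds at roughly five in a million. A small sketch of that arithmetic follows; treating attempts as independent is an assumption, since repeated tries with the same face would be correlated in practice:

```python
def cumulative_false_match(rate: float, attempts: int) -> float:
    """Probability of at least one false match in `attempts` independent tries."""
    return 1 - (1 - rate) ** attempts

# Apple's stated per-attempt rates, capped at the five-try lockout.
face_id = cumulative_false_match(1 / 1_000_000, 5)
touch_id = cumulative_false_match(1 / 50_000, 5)

print(f"Face ID:  {face_id:.2e}")
print(f"Touch ID: {touch_id:.2e}")
```

Even under this generous independence assumption, the lockout keeps the overall exposure within a small constant factor of the per-attempt rate, which is why the five-try limit matters as much as the headline number.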

“Third-party apps can use system provided APIs to ask the user to authenticate using Face ID or a passcode, and apps that support Touch ID automatically support Face ID without any changes. When using Face ID, the app is notified only as to whether the authentication was successful; it cannot access Face ID or the data associated with the enrolled face,” it continues.

On questions about the accessibility of Face ID technology, Apple writes: “The accessibility of the product to people of diverse races and ethnicities was very important to us. Face ID uses facial matching neural networks that we developed using over a billion images, including IR and depth images collected in studies conducted with the participants’ informed consent.”

The company had already made the “billion images” claim during its Face ID presentation last month, although it’s worth noting that it’s not saying — and has never said — it trained the neural networks on images of a billion different people.

Indeed, Apple goes on to tell the Senator that it relied on a “representative group of people” — though it does not confirm exactly how many individuals, writing only that: “We worked with participants from around the world to include a representative group of people accounting for gender, age, ethnicity and other factors. We augmented the studies as needed to provide a high degree of accuracy for a diverse range of users.”

There’s obviously an element of commercial sensitivity at this point, in terms of Apple cloaking its development methods from competitors. So you can understand why it’s not disclosing more exact figures. But of course Face ID’s robustness in the face of diversity remains to be proven (or disproven) when iPhone X devices are out in the wild.

Apple also specifies that it has trained a neural network to “spot and resist spoofing” to defend against attempts to unlock the device with photos or masks, before concluding the letter with an offer to brief the Senator further if he has more questions.

Notably, Apple hasn’t engaged with Senator Franken’s question about responding to law enforcement requests — although given that enrolled Face ID data is stored locally on a user’s device in the Secure Enclave as a mathematical model, the technical architecture of Face ID has been structured to ensure Apple never takes possession of the data — and couldn’t therefore hand over something it does not hold.

The fact Apple’s letter does not literally spell that out is likely down to the issue of law enforcement and data access being rather politically charged.

In his response to the letter, Senator Franken appears satisfied with the initial engagement, though he also says he intends to take the company up on its offer to be briefed in more detail.

“I appreciate Apple’s willingness to engage with my office on these issues, and I’m glad to see the steps that the company has taken to address consumer privacy and security concerns. I plan to follow up with Apple to find out more about how it plans to protect the data of customers who decide to use the latest generation of iPhone’s facial recognition technology,” he writes.

“As the top Democrat on the Privacy Subcommittee, I strongly believe that all Americans have a fundamental right to privacy,” he adds. “All the time, we learn about and actually experience new technologies and innovations that, just a few years back, were difficult to even imagine. While these developments are often great for families, businesses, and our economy, they also raise important questions about how we protect what I believe are among the most pressing issues facing consumers: privacy and security.”

