
Posts Tagged ‘#snowden’


Snowden’s new app is a step forward in privacy protection – former MI5 officer

December 27, 2017

Edward Snowden’s new personal security app can be a good tool for protecting individual privacy in the technological arms race between those seeking privacy and those who invade it, but it also has a flip side, says a former MI5 intelligence officer.
Last week Edward Snowden, president of the Freedom of the Press Foundation, presented Haven, a “personal security system” aimed at helping people such as investigative journalists and political and social activists track “their possessions and physical spaces when they are away from them.”

Haven can turn any Android phone into a surveillance system. Annie Machon, a former MI5 intelligence officer, says the system can be very helpful to “activists, journalists, and whistleblowers,” but warns that it also risks being turned into a spying device.

“Most people focus on the tech security these days, but if you’re going into potentially hostile territory – you’re staying in hotels – it is not just your laptop that’s under threat, it is also the holistic physical security you need to think about. So, having something that you can leave in your hotel room, even if you take your laptop with you to check, to see if someone hostile has come into your room, potentially planted a bug or fiddled with your belongings or whatever, I think that could be very useful indeed,” Machon said.

However, she doesn’t rule out that the app could itself be hijacked or used with malicious intent.

“That can have a good side if there is domestic abuse going on, for example, and you want to record proof of it and then go to the police. It can have a bad side, because you might feel you’re being stalked by a jealous ex-spouse or spouse. So, there are always pluses and minuses with these new technologies. But just to have this as a tool is a step forward,” she said.

Machon said that for anyone out doing things that require privacy and security, Haven is another wonderful tool in the protective arsenal, but people should not forget old-school spy tradecraft while relying on technology.

“This is something that is going to be used in your absence, so it can’t spy on you at the time, because you’re going to be out of whichever room you want to ensure is safe. So if it’s hacked, it’s hacked – and you have to take other measures,” she said.

“And we are all very paranoid about modern tech, but if you were really a high value target, even way back in the 1980s or 1990s, then your telephones could be turned against you, the old tech could be turned against you, just as the new tech was.”

A new app developed by NSA whistleblower Edward Snowden promises to harness the surveillance power of your smartphone. None of the data collected by Haven is sent to outside servers, meaning people remain in full control of their data. It was developed in conjunction with the Guardian Project, a global collective of software developers and activists who create open-source mobile security technology.
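Haven’s actual source code is published openly by the Guardian Project; the toy sketch below illustrates only the design principle the article describes — watch a sensor, log anomalies on the device itself, and send nothing to outside servers. The `read_accelerometer` callable is a hypothetical stand-in for a real Android sensor API.

```python
# Toy local-only intrusion monitor in the spirit of Haven (not its real code).
# Assumes a hypothetical read_accelerometer() returning an (x, y, z) tuple.
import time

MOVEMENT_THRESHOLD = 0.8  # assumed delta that counts as "someone moved the phone"

def monitor(read_accelerometer, log_path="haven_events.log"):
    last = read_accelerometer()
    while True:
        time.sleep(0.5)
        current = read_accelerometer()
        delta = sum(abs(a - b) for a, b in zip(current, last))
        if delta > MOVEMENT_THRESHOLD:
            # Evidence stays on the device; nothing is uploaded anywhere.
            with open(log_path, "a") as log:
                log.write(f"{time.ctime()}: movement detected (delta={delta:.2f})\n")
        last = current
```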

“I have no doubt that Edward Snowden knows what he’s doing. He is trying to develop this as open source. With his background as an NSA whistleblower, he will be very aware of the sort of attacks that state-level actors might be able to deploy. I am sure he will build in a lot of very paranoid technology to keep people safe,” Machon said.

Machon said that there had always been a “sort of arms race between what the attackers can do to invade your privacy, and what you can do to try and protect your privacy as a concerned citizen, who requires that basic human right.”

She gave the example of drug cartels in Latin America in the 1980s. “They had so much money – they were effectively running their own mercenary intelligence agencies to take counter surveillance measures. And they did it incredibly well,” she said.



Russia extends Edward Snowden’s asylum to 2020

February 7, 2017

Edward Snowden’s leave to remain in Russia has been extended until 2020, Russia’s Foreign Ministry spokeswoman Maria Zakharova has confirmed to CNN.

Snowden, a former US National Security Agency contractor, sought asylum in Russia in June 2013 after leaking volumes of information on American intelligence and surveillance operations to the media.
On Tuesday, Zakharova announced an extension of a “couple of years” in a Facebook post that criticized former CIA acting director Michael Morell for an opinion piece he wrote suggesting that Russian President Vladimir Putin should consider returning Snowden to the United States as “the perfect inauguration gift” to President-elect Donald Trump.
Snowden settled in Moscow after initially traveling to Hong Kong following his 2013 public disclosure of classified information. The Russian government granted him asylum soon after.
In August 2014, Snowden received a three-year extension to his leave to remain in Russia. That extension was due to expire this year.

In the final weeks of the Obama administration, more than a million supporters petitioned the White House to pardon Snowden.
However, the White House said Tuesday that Snowden had not submitted official documents requesting clemency. He is accused of espionage and theft of government property.
Also Tuesday, Snowden thanked President Barack Obama in a tweet for his decision to commute the sentence of former Army soldier Chelsea Manning, who was convicted of stealing and disseminating 750,000 pages of documents and videos to WikiLeaks.



Is an NSA contractor the next Snowden? In 2017, we hope to find out

January 27, 2017

We covered a ton of legal cases in 2016.
The entire Apple encryption saga probably grabbed the gold medal in terms of importance. However, our coverage of a California fisherman who took a government science buoy hostage was definitely our favorite. The case was dropped in May 2016 after the fisherman gave the buoy back.

Among others, we had plenty of laser strike cases to cover. There were guilty verdicts and sentencing in the red-light camera scandal that consumed Chicago. The Federal Trade Commission settled its lawsuit with Butterfly Labs, a failed startup that mined Bitcoins. A man in Sacramento, California, pleaded guilty to one count of unlawful manufacture of a firearm and one count of dealing firearms—he was using a CNC mill to help people make anonymous, untraceable AR-15s.

While we do our best to cover a wide variety of civil and criminal cases, five stand out to us heading into 2017. These cases range from privacy and encryption to government-sanctioned hacking to the future of drone law in America.

Drone’s up, don’t shoot

In 2016, we reported on another drone shooting incident (seriously folks, don’t do it!) in Virginia. A 65-year-old named Jennifer Youngman used her 20-gauge shotgun to take out a drone that many locals believe was flying over the adjacent property of her neighbor, Robert Duvall. Yes, that Robert Duvall.
“The man is a national treasure and they should leave him the fuck alone,” she told Ars.

Youngman touched on a concept that many Americans likely feel in their gut but that has not been borne out in the legal system: property owners should be able to use force to keep unwanted drones out of their airspace. But here’s the thing: for now, American law does not recognize the concept of aerial trespass.

At this rate, that recognition will likely take years. Meanwhile, drones get more and more sophisticated and less expensive, and they have even spawned an entire anti-drone industry.

Legal scholars have increasingly wondered about the drone situation. After all, banning all aircraft would be impractical. So what is the appropriate limit? The best case law on the issue dates back to 1946, long before inexpensive consumer drones were feasible. That year, the Supreme Court ruled in a case known as United States v. Causby that Americans could assert property rights up to 83 feet in the air.

In that case, US military aircraft were flying above a North Carolina farm, which disturbed the farmer’s sleep and upset his chickens. As such, the court found that Farmer Causby was owed compensation. However, the same decision also specifically mentioned a “minimum safe altitude of flight” at 500 feet—leaving the zone between 83 and 500 feet as a legal gray area.

“The landowner owns at least as much of the space above the ground as he can occupy or use in connection with the land,” the court concluded.

In 2015, a Kentucky man shot down a drone that he believed was flying above his property. The shooter in that case, William Merideth, was cleared of local charges, including wanton endangerment.

In January 2016, the Kentucky drone’s pilot, David Boggs, filed a lawsuit asking a federal court in Louisville to make a legal determination as to whether his drone’s flight constituted trespassing. Boggs asked the court to rule that there was no trespass and that he was therefore entitled to damages of $1,500 for his destroyed drone.
Although the two sides have traded court filings for months, the docket has not been updated since June 2016, when Boggs’ attorneys pointed to a recent case out of Connecticut that found in favor of the Federal Aviation Administration’s regulation of drones.

As Boggs’ legal team wrote:

The Haughwout pleadings are directly relevant to the subject matter jurisdiction issue currently before the court. The current dispute turns on whether a controversy has arisen that cannot be resolved without the Court addressing a critical federal question—the balance between the protection of private property rights versus the safe navigation of federal airspace. The Haughwout dispute places this critical question in the context of an administrative investigation. It highlights, as argued by Mr. Boggs—and now the FAA—that questions involving the regulation of the flight of unmanned aircraft should be resolved by Federal courts.
US District Judge David J. Hale has yet to schedule any hearings on the matter.

KickassTorrents lawyer: “torrent sites do not violate criminal copyright laws”
In July 2016, federal authorities arrested the alleged founder of KickassTorrents (KAT). The arrest was part of what is probably the largest federal criminal complaint in an intellectual property case since Megaupload, which was shuttered in early 2012. (That site’s founder, Kim Dotcom, has successfully beaten back efforts to extradite him from New Zealand to the United States. He was ordered extradited a year ago, but that court decision is now on appeal.)
In the case of KAT, Ukrainian Artem Vaulin, 30, was formally charged with one count of conspiracy to commit criminal copyright infringement, one count of conspiracy to commit money laundering, and two counts of criminal copyright infringement. Vaulin was arrested in Poland, where he remains in custody pending possible extradition to the United States.

Like The Pirate Bay, KAT does not host individual infringing files but rather provides torrent and magnet links so that users can download unauthorized copies of TV shows, movies, and more from various BitTorrent users.

According to the 50-page affidavit, Vaulin’s and KAT’s claims that they respected the Digital Millennium Copyright Act were hogwash. The affidavit was authored by Jared Der-Yeghiayan, a special agent with Homeland Security Investigations who was also a key witness in the trial of Silk Road founder Ross Ulbricht.

Vaulin has since retained Dotcom’s lawyer, Ira Rothken, who has made similar arguments in court filings on behalf of his more famous client: namely, that there is no such thing as secondary criminal copyright infringement, and that while some files uploaded to KAT may have violated copyright, that does not make Vaulin a criminal.

Rothken has not yet been able to correspond directly with or even meet his Ukrainian client, communicating with him only through Polish counsel. Nevertheless, he filed a motion to dismiss in October 2016. The government responded weeks later, and Rothken filed another response on November 18.

Prosecutors, for their part, said that the Rothken-Vaulin theory was ludicrous: “For the defendant to claim immunity from prosecution because he earned money by directing users to download infringing content from other users is much like a drug broker claiming immunity because he never touched the drugs.”

The two sides met before US District Judge John Z. Lee for a status conference on December 20, 2016. Judge Lee has not yet ruled on the motion to dismiss.



Snowden’s email service of choice, Lavabit, lives again

January 25, 2017

Lavabit is back. Wait, what’s Lavabit?

It’s an encrypted email service most famous for its connection to Edward Snowden, the former NSA contractor who leaked classified documents to journalists in 2013. Lavabit shut down rather than comply with an order to allow the US government access to user emails in a bid to see Snowden’s communications.

The service relaunched Friday, also Inauguration Day, with a revamped approach to security.

“Regardless of one’s political disposition, today we acknowledge our shared values of Freedom, Justice, and Liberty as secured by our Constitution,” wrote company founder Ladar Levison on the Lavabit homepage. “This is the reason why I’ve chosen today to relaunch Lavabit.”

The email service comes with a variety of options that give users a say over their data, but at its foundation, the idea is to scramble emails so that only the sender and recipient can read them.

Encrypted email has been around for more than 20 years, but it’s notoriously user-unfriendly. Lavabit’s most basic service will be email that’s encrypted automatically, requiring the least technical expertise from users to keep their messages private. This level of service is called “Trustful,” because it requires users to trust that Lavabit has the encryption under control.

Two other levels of service (“Cautious” and “Paranoid”) put more control in the hands of users, but also require more tech savvy. Lavabit allows users to download the source code for its email server and run their own servers at home.
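As a rough sketch of what “only the sender and recipient can read them” means in practice: the sender encrypts to the recipient’s public key, and only the holder of the matching private key can decrypt. This minimal example uses the Python cryptography package and is not Lavabit’s actual protocol; real systems (Lavabit’s included) layer symmetric and asymmetric primitives in hybrid schemes.

```python
# Minimal public-key encryption sketch with RSA-OAEP. In practice a random
# symmetric key encrypts the message body and RSA encrypts only that key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Recipient generates a keypair; the public half is shared with senders.
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sender: anyone with the public key can encrypt...
ciphertext = recipient_key.public_key().encrypt(b"the message body", oaep)

# ...but only the private-key holder can decrypt.
plaintext = recipient_key.decrypt(ciphertext, oaep)
assert plaintext == b"the message body"
```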

New users can register now, but will have to wait until later this year to start using their Lavabit accounts. Users whose accounts were suspended can start them up again.

Levison didn’t immediately respond to a request for comment.



The Watchers: Assaults on privacy in America

January 5, 2017

PEOPLE BEHAVE DIFFERENTLY when they think they are being watched. When former National Security Agency contractor Edward Snowden revealed the mass surveillance of American citizens in June 2013, the question suddenly grew in importance. Can the behavior of an entire population, even in a modern democracy, be changed by awareness of surveillance? And what are the effects of other kinds of privacy invasions?

Jon Penney was nearing the end of a fellowship at Harvard Law School’s Berkman Klein Center for Internet & Society in 2013, and he realized that Snowden’s disclosures presented an opportunity to study their effect on Americans’ online behavior. During research at Oxford the following year, Penney documented a sudden decline in Wikipedia searches for certain terrorism-related keywords: Al Qaeda, Hezbollah, dirty bomb, chemical weapon, and jihad, for example. More than a year later, when the study ended, such searches were still declining. “Given the lack of evidence of people being prosecuted or punished” for accessing such information, Penney wrote in the Berkeley Technology Law Journal (which published his research last June), he judged it unlikely that “actual fear of prosecution can fully explain the chilling effects suggested by the findings of this study.” The better explanation, he wrote, is self-censorship.
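Methodologically, Penney’s study is an interrupted time series: compare the level and trend of article traffic before and after June 2013. Below is a toy version of that core comparison, with invented data shapes; the published study fits far more careful regression models.

```python
# Toy interrupted-time-series check: did monthly page views for a keyword
# shift in level and slope after the June 2013 disclosures?
import numpy as np

def level_and_trend_shift(views: np.ndarray, break_idx: int):
    """Return (change in mean level, change in slope) around break_idx."""
    pre, post = views[:break_idx], views[break_idx:]
    slope_pre = np.polyfit(np.arange(len(pre)), pre, 1)[0]
    slope_post = np.polyfit(np.arange(len(post)), post, 1)[0]
    return post.mean() - pre.mean(), slope_post - slope_pre
```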

Penney’s work is the sort of evidence for negative social effects that scholars (and courts of law) demand. If democratic self-governance relies on an informed citizenry, Penney wrote, then “surveillance-related chilling effects,” by “deterring people from exercising their rights,” including “…the freedom to read, think, and communicate privately,” are “corrosive to political discourse.”

“The fact that you won’t do things, that you will self-censor, are the worst effects of pervasive surveillance,” reiterates security expert Bruce Schneier, a fellow at the Berkman and in the cybersecurity program of the Kennedy School’s Belfer Center for Government and International Affairs. “Governments, of course, know this. China bases its surveillance on this fact. It wants people to self-censor, because it knows it can’t stop everybody. The idea is that if you don’t know where the line is, and the penalty for crossing it is severe, you will stay far away from it. Basic human conditioning.” The effectiveness of surveillance at preventing crime or terrorism can be debated, but “if your goal is to control a population,” Schneier says, “mass surveillance is awesome.”

That’s a problem, he continues, because “privacy is necessary for human progress. A few years ago we approved gay marriage in all 50 states. That went from ‘It’ll never happen’ to inevitable, with almost no intervening middle ground.” But to get from immoral and illegal to both moral and legal, he explains, intervening steps are needed: “It’s done by a few; it’s a counterculture; it’s mainstream in cities; young people don’t care anymore; it’s legal. And this is a long process that needs privacy to happen.”

As a growing share of human interactions—social, political, and economic—are committed to the digital realm, privacy and security as values and as rights have risen in importance. When someone says, “My life is on my phone,” it’s meant almost literally: photos, passwords, texts, emails, music, address books, documents. It is not hard to imagine that the Declaration of Independence, redrafted for an information society, might well include “security and privacy,” in addition to the familiar “life, liberty, and the pursuit of happiness,” among its examples of “unalienable rights.”

Although Snowden highlighted government surveillance, it may not be the worst problem. Corporations hold vast and growing troves of personal information that is often inadequately protected, its use largely unregulated. Since 2005, hackers have stolen hundreds of millions of credit-card numbers from major retailers such as Target, Home Depot, TJX, and eBay. In 2014, someone stole the keys to half a billion Yahoo accounts without being detected. And everyday threats to privacy are so commonplace that most people are numb to them. In exchange for free email, consumers allow companies such as Google to scan the content of their digital messages in order to deliver targeted ads. Users of social media, eager to keep in touch with a circle of friends, rarely read the standard agreement that governs the rights and use of what they post online. Smartphones know their owners’ habits better than they themselves do: where and with whom they sleep, what time they wake up, whom they meet, and where they have been. People accept such tradeoffs in exchange for convenience. They don’t really have a choice.

Bemis professor of international law and of computer science Jonathan Zittrain, faculty chair of the Berkman Klein Center, worries that the ubiquity of privacy threats has led to apathy. When a hacker released former Secretary of State Colin Powell’s private assessments of the two leading presidential candidates prior to the recent election, “I was surprised at how little sympathy there was for his situation, how it was treated as any other document dump,” Zittrain explains. “People have a hard time distinguishing, for instance, between government documents and private documents authored by people who were once government officials, [between] documents released under the Freedom of Information Act, and documents leaked by a whistleblower. It’s all just seen as…‘stuff is porous, and we can get it.’” As “the ability to hack is democratized,” Zittrain worries that people have lost sight of the original value behind whistleblowing, which is to make powerful institutions publicly accountable. Now everyone is vulnerable. “Over time,” he wrote recently, “continued leaks will lead people to keep their thoughts to themselves, or to furtively communicate unpopular views only in person.” “That does not seem sustainable to me,” he said in an interview, “and it doesn’t seem healthy for a free society.”

The perception that the Information Age has put privacy and security at risk is widespread. Necessarily, the search for solutions is equally broad-based. In Washington, D.C., Marc Rotenberg ’82, president and director of the Electronic Privacy and Information Center (EPIC), seeks legal solutions to privacy problems. At Harvard, research into privacy and security is focused at the Berkman Klein Center; at the Paulson School of Engineering and Applied Sciences’ Center for Research on Computation and Society; at the Kennedy School’s cybersecurity program; at the Institute for Quantitative Social Science’s (IQSS) Data Privacy Lab; and also within the schools of medicine and public health (and at the affiliated hospitals), where researchers seek to protect patient data so that it can be shared appropriately, particularly in the case of rare conditions. Solutions to privacy and security problems thus involve computer scientists and legal scholars, as well as experts in healthcare, government, and business.

ASSURING THE PRIVACY OF INFORMATION means making it secure. “I actually can’t give you privacy unless you have security,” Bruce Schneier points out: that involves protecting data through technological or legal means. Door locks, tall fences, and burglar alarms work well in the physical world. The problem, he explains, is that “in security, technology scales badly.” If a burglar gets past a lock to rob a single house in a community of 100,000 people, that may be a tolerable risk. But anyone who finds a flaw in all digital locks could break into every home. “What happens,” Schneier asks, “when systems become connected such that our risks are connected?”

Ordinary individuals, he points out, can do very little to mitigate this kind of systemic risk. Advice like “don’t have an email address, don’t use your credit card” is “moronic. You can’t be a fully functioning human being in the twenty-first century [like that]. So in a lot of ways, we have lost control.”

In the past 15 years, entire corporations, even nations, have found their data and systems vulnerable to attack. The intrusion at the U.S. Office of Personnel Management, disclosed in April 2015, was reportedly the most significant breach of federal networks to date: hackers, thought to be state-sponsored, took personal data for four million employees and political appointees, leading to the recall of American intelligence agents posted abroad. The 2016 digital break-in at the Democratic National Committee’s headquarters was like a modern iteration of Watergate, but initiated by a foreign power seeking to interfere in the presidential election.

The stakes can become very high indeed. “Someone is learning to take down the Internet,” wrote Schneier in September. He described how an unidentified entity had been probing the defenses of companies that provide critical Internet infrastructure, slowly ramping up repeated, carefully metered attacks, as if seeking to quantify precise points of failure. Although his best-selling book, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World, has led to his reputation as a consumer-privacy-rights advocate, Schneier is also chief technology officer for Resilient, an IBM company that handles online incident response. He brings that security background to a new fellowship at the Kennedy School’s Cyber Security Project. The project focuses on policy research into the U.S. military’s operations in cyberspace; it puts “people with a technical background together with people with policy experience,” in order to help inform debates in Washington, says project director Michael Sulmeyer, former director for plans and operations for cyber policy at the Department of Defense. “One of the biggest debates going forward will be the roles and missions for the military’s 6,000-person force for cyberspace operations.”

That Cyber Command is charged with protecting the Defense Department’s weapons systems, millions of computing devices, and more than 15,000 data networks (say, in support of network operations for a battalion in Afghanistan fighting the Taliban). It also provides offensive cyber capabilities to commanders around the world in the event that hostilities break out (analogous to the access they have to air and sea power capabilities). And it is responsible for defending the nation—including aviation, financial, and power-transmission systems—against a significant cyberattack.

The structure of the Internet itself makes that defensive mission difficult. Eviatar Matania, the head of Israel’s National Cyber Bureau, discussed that challenge last September at the Kennedy School. He noted that unlike the agricultural and industrial revolutions, the cyber revolution has both restructured society and created a space, “a new artificial domain.” Israel’s bureau was founded five years ago as a way to allow that small country to be “much bigger and stronger than in a physical domain,” Matania continued. But defending cyberspace is extremely difficult because it lacks both borders and distance. There are no clear boundaries between countries, and no clear divisions between corporate and government networks: “Everyone is connected to everyone.”

That implies that the defense mission is expansive. Admiral Michael Rogers, director of the NSA and head of U.S. Cyber Command, said during an October visit to the Kennedy School that the unit increasingly finds itself “helping defend systems across the broader U.S. government” and even “being called upon to…help within the private sector. These are big growth areas for us.”

But as the mission grows, vulnerabilities are becoming more complex, not less. The Internet of Things—chip-equipped, network-connected household items such as living-room televisions that can respond to commands to change the channel—presents huge security (not to mention privacy) concerns. “The increased interconnectivity of the world we are living in,” explained Rogers, has led to “a level of vulnerability that we don’t truly understand.” The automobile, for example, used to be “a mechanical system with a one-way radio”; today it’s “a series of interconnected software applications and capabilities,” involving a host of remote connections that the driver doesn’t understand or even know about. “That offers both amazing capability, insight, and knowledge—data that could make the car safer, make decisions faster, and eventually lead to remotely piloted autonomous vehicles.” But “that car now has a whole lot of vulnerabilities that it never had before.”

Openness: “We Have to be Extremely Skeptical”

IT MAY SEEM LOGICAL for a centralized military organization to provide national cybersecurity and defend against cyber war. But Yochai Benkler points out how 9/11 led to war and “unjustified claims for extending surveillance powers, or extending detention and kidnapping powers, let alone torture.” The Berkman professor for entrepreneurial legal studies argues that “We have to be extremely skeptical of claims made in the name of national security in general, not because the people making them are bad people, but because the people making them…operate in a world where the only downside to failing to extend their power is that one day somebody will look at them and say, ‘Where were you when the world came down?’

“We should take with many grains of salt the claims of national security experts who see cyber war as the next domain,” he continues, “and operate in an environment where they want to control everything as much as possible in order to minimize risks, but come to their conclusions from a framework that…is relatively insulated from potential alternative viewpoints.”

Accordingly, Benkler advocates systems that allow personal data to remain in the hands of consumers—minimizing the privacy risks posed by governments, corporations, and hackers because personal information is not concentrated in a single place. (The technical term is “distributed network ecosystems based on open-source software.”) “Relying on a small number of high-end companies to provide security creates a single point of failure for hundreds of millions,” he says, referring to the 2014 theft of Yahoo user accounts. “If all those…people had decentralized email storage at home, and sign-on credentials that were not valid for diverse critical sites, collecting [that information] would be much harder.”
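One concrete reading of “sign-on credentials that were not valid for diverse critical sites” is per-site secret derivation: generate each site’s credential from a master key, so that a breach at one site reveals nothing about any other. A minimal sketch using only Python’s standard library; the function and variable names are illustrative, not from any particular system.

```python
# Derive an independent secret per site from one master key, using HMAC as
# a pseudorandom function: compromising one site's secret doesn't help an
# attacker recover the master key or any other site's secret.
import hashlib
import hmac

def site_secret(master_key: bytes, site: str) -> str:
    return hmac.new(master_key, site.encode("utf-8"), hashlib.sha256).hexdigest()

master = b"a long random secret kept only by the user"
print(site_secret(master, "bank.example"))   # differs per site,
print(site_secret(master, "email.example"))  # and can't be linked across sites
```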

“It’s a challenge to get people to adopt safe habits,” he admits, “but it’s not impossible. You have to change users’ culture, and you have to design secure systems that are under the control of end users, not single companies.” The iPhone, secured with a PIN or a fingerprint, is an example of such encrypted, secure-by-default systems. Such devices aren’t hard to build—but, he says pointedly, “It’s hard to do so [within] a business model that depends on spying on your customers so you can sell them to advertisers.”

Furthermore, says Benkler, systems built in part with “free software developed by communities that don’t have the imperatives either of profit-making companies, or of dealing with the tensions between rights and the state of emergency, get better as their vulnerabilities are constantly probed, exposed, and then corrected in a constant, evolutionary, back and forth.” Such robustness is obviously desirable.

But it may not be as practicable as he hopes. Although the idea that users can enjoy more privacy and better security in a distributed computing environment is becoming more tangible as smartphones’ computing power rivals that of desktops, executing it consistently poses significant challenges. Ben Adida, a software engineer and architect and former fellow of Harvard’s Center for Research on Computation and Society, acknowledges this is “the vision that many security advocates, myself included, pushed for, for a very long time.”

But now he thinks “we are far less secure” adopting that technological approach. (For a computer scientist’s perspective, and a description of a project to protect research data involving human subjects, see the online extra, “The Privacy Tools Project.”) Adida developed Helios, one of the first encrypted yet verifiable online voting systems; he’s now head of engineering at Clever, a startup that manages private student data for schools. Providing security to a range of companies has led him to discover how easy it is for small companies to err when implementing and defending the security of their systems, whether in cryptography, access control, network-level security, or in the internal audit processes used to ensure data is compartmentalized. A large company like Google, on the other hand, “does a really good job of making sure that only I can log in,” he explains. “They’ve added two-factor authentication, they have all sorts of heuristics to check whether a person is logging in from a different location than usual. There’s all sorts of work that they do to make sure that only the right people are accessing the right data.”
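The two-factor authentication Adida credits Google with is typically built on one-time codes such as TOTP (RFC 6238): a short code derived from a shared secret and the current time, so a stolen password alone isn’t enough to log in. A minimal sketch of code generation follows; it is a generic illustration of the standard, not any particular provider’s implementation.

```python
# RFC 6238 time-based one-time password, built on RFC 4226 HOTP truncation.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period           # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server stores the same base32 secret and accepts the code only if it
# matches for the current (or an adjacent) time step.
```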

Like Benkler, Adida agrees that centralized data is too easily accessed by law enforcement, but says that for now, “We need to rethink how to defend that data through a combination of legal and technical means.” Technically, that might mean discarding chats more than a few months old, for example; and legally, resisting official requests for user data in court. He advocates “evolution in the law, too.” The Fourth Amendment guarantees the “right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures…,” but historically, that has been interpreted to mean that obtaining data held by a third party doesn’t require a search warrant. That means personal documents stored in Google’s cloud, for example, are exposed. Adida says he nevertheless keeps “extremely private data hosted by a third party because that is the right operational thing to do. Everybody hosting their own stuff just doesn’t make any sense”—but he hopes that someday, if the government wants access to that information, it “would require a warrant, just as if they were knocking down someone’s door.”
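Adida’s technical suggestion — discarding chats more than a few months old — amounts to a retention policy: data that no longer exists can’t be subpoenaed or stolen. A minimal sketch, assuming messages carry a hypothetical `sent_at` timestamp field.

```python
# Purge messages older than a fixed retention window (field names invented).
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # "a few months", as an assumed concrete value

def purge_old_messages(messages: list[dict]) -> list[dict]:
    now = datetime.now(timezone.utc)
    return [m for m in messages if now - m["sent_at"] <= RETENTION]
```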

Confidentiality: “Privacy Is about Accountability”

IN THE HERE AND NOW, using encryption, firewalls, and passwords is one way to keep information secret. But secrecy is just “a very small slice” of what privacy is about, says Marc Rotenberg of EPIC. Through “creative advocacy, litigation, and public engagement,” the Washington, D.C.-based nonprofit aims to shape policy and advance the legal framework for safeguarding personal liberty. Rotenberg, an attorney and adjunct professor at Georgetown University Law Center, has won cases before the Supreme Court, filed numerous amicus briefs, testified before Congress, and given awards to leading privacy advocates across the political spectrum.

“Privacy is about accountability,” he says. “It’s about the fairness of decisionmaking. It’s about holding large government actors and private companies accountable for their decisionmaking. It turns out to be an extraordinarily powerful and comprehensive human-rights claim, particularly in the digital age, because so much about us is based on our data.”

Getting a loan or health insurance, or gaining admission to a certain school, are all data-driven determinations, Rotenberg points out. He asks how those data are being used. What personal information does an organization consider relevant? Are people pulled out of line at an airport because of their nationality, their religion, or because of a book purchased on Amazon? Given all the ways in which personal information drives decisions, Rotenberg says, secrecy “almost isn’t even relevant to the discussion. Because paradoxically, what we keep secret is almost certainly what we don’t need privacy law for. We need privacy law for everything else: for the things that we don’t have the physical ability to control. When you give sensitive test information to your doctor, for example, it’s no longer in your control. The credit card company has all your transactional records. What are you going to do? Nothing. That’s when we start to ask questions about what type of safeguards are in place to protect our personal information held by others.”

“I see privacy as closely tied to the strength of democratic governance,” he continues. Recalling the first time he read the NSA’s foreign intelligence surveillance court order demanding that Verizon turn over all customer telephone-call records (perhaps the most significant of Snowden’s revelations), Rotenberg says, “I looked at that order, ‘Provide all records, because all records are relevant,’ and actually thought it was satirical, a joke from The Onion, or an exercise attached to a privacy-law exam asking students to draft an unlawful court order.…And then I realized it was a real order—that the NSA thought it had the authority to collect all domestic telephone records on all U.S. telephone customers.”

EPIC brought a petition to the Supreme Court arguing that the Foreign Intelligence Surveillance Court had exceeded its legal authority, and a broad coalition of legal experts and former members of Congress joined the campaign. But the Court did not rule on the merits of the petition. “That was after the Solicitor General twice sought extensions,” Rotenberg explains, “which gave the foreign intelligence surveillance court enough time to issue an opinion justifying the program. We call that just-in-time lawmaking.” The EPIC petition nevertheless marked the beginning of a broad bipartisan coalition to pass legislation, the USA Freedom Act of 2015, ending the NSA’s bulk collection of such information.

Such battles almost never stay won, says Rotenberg. “The Europeans were very upset, obviously, about the U.S. surveillance activities that Snowden had documented, but then you had the terrible tragedy of Charlie Hebdo, and suddenly the French government created new surveillance authorities that go beyond what the U.S. does.”

“When governments make these decisions,” he reflects, “it is almost as if they’re saying, ‘We can’t afford as much democracy, we can’t afford as much openness, we can’t afford to trust our citizens as much, we need to engage in more surveillance, we need less judicial review and less accountability.’” But privacy, he says, is not a trade-off: “I’ve been in Washington long enough to know that when someone says, ‘We need to strike the right balance,’ it means they probably don’t know what they’re talking about. A sacrifice of privacy is also a sacrifice of democracy.”

In the mid-1990s, The New York Times quoted Rotenberg saying that the protection of privacy in the Information Age would be like the protection of the environment in the Industrial Age—“which is to say it’s so much a part of the nature of economic production today, you don’t solve it, you have to manage it.” Many people predicted the end of privacy. But Rotenberg believes people don’t understand the full consequences: “Among other things, you would lose your democratic state if everyone said, ‘Why do we care if the government knows everything about us? Who needs a private phone call? Who needs a building with walls? Why should data be accurate?’ Everything collapses. And we know what that world looks like: that’s what [Jeremy] Bentham described as the Panopticon”—designed so an observer can watch everything, but without being seen. “When you’re under constant surveillance,” says Rotenberg, “you’re in a prison.”

On the corporate front, EPIC brought the complaint that forced Snapchat, the photo-sharing service, to fulfill its promise to delete images. When Google tried to move all Gmail users onto Buzz, its social-media platform, EPIC complained to the Federal Trade Commission (FTC), and established a significant precedent for Internet privacy. When WhatsApp announced that it would share users’ secure-message data with Facebook (which had recently acquired the company), EPIC intervened. Likewise, when Facebook started changing user privacy settings after consumers had set them, EPIC brought the matter to the FTC, which stopped the practice. Most recently, EPIC has been active in the discussion over how student data are collected and used.

EPIC may seem the proverbial finger in the dike, barely holding back the flood. But Rotenberg says he is “actually a bit of an optimist about all of this,” citing the Supreme Court’s “remarkable 9-0 opinion, written by Chief Justice Roberts, that says the search of a cell phone following an arrest requires a warrant”—a case in which EPIC’s extensive brief was cited. Rotenberg calls the 2014 decision “a strong statement about privacy in the modern age. And the fact that it was a unanimous court, I think, was remarkable.”

EPIC also studies diverse privacy laws to advance legal protections. A project begun in 2015 to identify states with the best privacy laws examines data security and breaches, drone surveillance, police body cameras, and student privacy, to name a few. EPIC considers Massachusetts’s 2007 data-protection law one of the best in the country; California has crafted very good data-breach-notification regulations. Farther afield, Rotenberg admires the European Court of Justice’s decision on the “right to be forgotten,” which involved personal bankruptcy records that had been published in a newspaper 10 years earlier. The Spanish plaintiff asked both the newspaper and Google to remove the records. Spain’s privacy agency decided not to touch the newspaper, but ordered Google to remove the record from search results—drawing “a very thoughtful line” between the protected free expression of news organizations and the commercial operations of data brokers, who commodify personal information.

Discrimination: “Algorithmic Accountability”

ROTENBERG HAS RECENTLY BEGUN advocating for laws that would require companies to disclose how algorithms use personal data—for hiring, credit determinations, or online advertising. As businesses demand more information from people, he thinks companies should reveal how they make decisions. Businesses regard their algorithms as intellectual property, but Rotenberg argues that their rights “extend as far as my personal data.…And if that creates a problem for them, don’t collect my data.” The algorithms act invisibly and without accountability. Rotenberg says the solution is straightforward: “There should be algorithmic accountability. We should be able to open the code.”

One computer scientist, famous for her work on privacy technology and re-identification of anonymous subjects in large data sets, approaches this problem as a technologist, seeking to expose the inner workings of algorithms in ways that make them susceptible to existing laws.

Google seemed to think professor of government and technology in residence Latanya Sweeney might have an arrest record. A simple search for the name of this African-American computer scientist, now faculty dean of Currier House, yielded ads implying that she had a criminal past. When former Reuters reporter Adam Tanner, now an Institute for Quantitative Social Science (IQSS) fellow, suggested that resulted from her “black-sounding name,” Sweeney at first resisted his explanation. Then she discovered that a search for “Latanya” turned up images of black women, and a search for “Tanya” turned up images of whites. She decided to dig deeper.

Because she runs Harvard’s Data Privacy Lab, based in IQSS, Sweeney has resources to find out what makes an algorithm tick. Using lists of first names given more often to black babies than to white ones, she Googled the names of real people from Internet addresses around the country, capturing 100,000 ad impressions. For some names, ads implied the existence of an arrest record as much as 80 percent of the time, even when there was none. “Blacks are a protected group. Employment is a protected setting,” she notes. If an employer Googles an applicant’s name and ads pop up implying that there is an arrest record, she says, that is enough to trigger a federal discrimination investigation.
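The statistical heart of a study like this is a comparison of proportions: is the rate of arrest-implying ads for one group of names higher than for the other by more than chance would allow? Below is a toy two-proportion z-test with invented counts; Sweeney’s actual analysis is far more extensive.

```python
# Two-proportion z-test on ad-impression counts (all numbers invented).
from math import erf, sqrt

def two_proportion_z(hits_a: int, n_a: int, hits_b: int, n_b: int) -> float:
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

z = two_proportion_z(800, 1000, 400, 1000)  # e.g. 80% vs. 40% arrest-ad rate
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
```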

Her work showed, Sweeney says, that these unforeseen consequences can be studied and the results used “to empower the government structures we already have for oversight.” Rather than demanding new laws that focus on new technologies, she used science to expose the workings of technology, so existing law could be applied.

Armed with this tool for “algorithmic accountability,” Sweeney took a year’s sabbatical in 2014 to work as chief technology officer at the FTC. The commission had lacked pertinent technological expertise to investigate the issue; Sweeney’s presence persuaded the chairwoman to hire additional technologists.

While at the commission, Sweeney studied the practices of advertisers targeting the sites of sororities, fraternities, and other student groups, including Omega Psi Phi, a black fraternity celebrating its centennial. Ads routed to its website included options for graduate education and for travel—and one that implied the need for a criminal lawyer. Credit-card ads included only the lowest-ranked cards, whereas Sweeney found that the sites of similar fraternal student organizations turned up ads for American Express Blue. How, she wondered, did that decisionmaking occur in a supposedly neutral algorithm? “If, through their practices, technology companies are dominating the online experience” and shaping people’s experiences of the Internet, she says, “then it’s those practices that have to be addressed, or at least connected to…societal norms. Just because Google or Facebook implement business practices and technology together in a package in a certain way doesn’t mean that’s the only way. The technology…and the business practices didn’t have to be that way. And that has to be unpacked.”

Commerce: “Surveillance Capitalism”

SHOSHANA ZUBOFF, the Wilson professor of business administration emerita, would agree. She thinks about the information landscape in economic terms and says that there is even more at stake than privacy. Zuboff says that corporate use of personal data has set society on a path to a new form of capitalism that departs from earlier norms of market democracy.

She draws an analogy from the perfection of the assembly line: Ford engineers’ discovery a century ago, after years of trial and error, that they had created “a logic of high-volume, low-unit cost, which really had never existed before with all the pieces aligned.” Today, many corporations follow a similar trajectory by packaging personal data and behavioral information and selling it to advertisers: what she calls “surveillance capitalism.”

“Google was ground zero,” Zuboff begins. At first, information was used to benefit end users, to improve searches, just as Apple and Amazon use their customers’ data largely to customize those individuals’ online experiences. Google’s founders once said they weren’t interested in advertising. But Google “didn’t have a product to sell,” she explains, and as the dot-com bubble fell into crisis in 2001, the company was under pressure to transform investment into earnings. “They didn’t start by saying, ‘Well, we can make a lot of money assaulting privacy,’” she continues. Instead, “trial and error and experimentation and adapting their capabilities in new directions” led them to sell ads based on personal information about users. Like the tinkerers at Ford, Google engineers discovered “a way of using their capabilities in the context of search to do something utterly different from anything they had imagined when they started out.” Instead of using the personal data to benefit the sources of that information, they commodified it, using what they knew about people to match them with paying advertisers. As the advertising money flowed into Google, it became a “powerful feedback loop of almost instantaneous success in these new markets.”

“Those feedback loops become drivers themselves,” Zuboff explains. “This is how the logic of accumulation develops…and ultimately flourishes and becomes institutionalized. That it has costs, and that the costs fall on society, on individuals, on the values and principles of the liberal order for which human beings have struggled and sacrificed much over millennia—that,” she says pointedly, “is off the balance sheet.”

Privacy values in this context become externalities, like pollution or climate change, “for which surveillance capitalists are not accountable.” In fact, Zuboff believes, “Principles of individual self-determination are impediments to this economic juggernaut; they have to be vanquished. They are friction.” The resulting battles will be political. They will be fought in legislatures and in the courts, she says. (See EPIC’s cases, above.) Meanwhile, surveillance capitalists have learned to use all necessary means to defend their claims, she says: “through rhetoric, persuasion, threat, seduction, deceit, fraud, and outright theft. They will fight in whatever way they must for this economic machine to keep growing.” Consumer-citizens feel the assault, but for the surveillance capitalists, their creation is like “a living organism now, that has to grow.”

“Privacy,” according to Zuboff, “is having the right to decide how you want to live, what you want to share, and what you choose to expose to the risks of transparency. In surveillance capitalism, those rights are taken from us without our knowledge, understanding, or consent, and used to create products designed to predict our behavior.” These products are then sold into new markets that she calls “behavioral futures markets.” At each stage, “our lives are further exposed to others without our consent.” In losing decision rights, we lose privacy, as well as autonomy and self-determination. Such rights don’t vanish, she points out. “We lose them to someone else. Google is an example of a company that amasses ‘decision rights’ that once belonged to us. Decision rights are fundamentally political. So these are concentrations of political power, in institutions that we have not authorized. We didn’t elect them, we didn’t vote for them, we didn’t sanction this transfer of rights and power.”

Targeted ads—about which consumers already express concern—are the beginning of a much more ambitious program of modifying and influencing behavior toward profitable ends, Zuboff argues. “No one ever said mass production was only for automobiles, and surveillance capitalism isn’t only for advertisers.” There are many other companies and industries, she says, that want to participate in the new behavioral futures markets. Early examples include sectors such as insurance, retail, and health.

Behavioral futures markets develop in stages, she explains. Predictive analytics is a familiar use of data in which patterns are identified in order to predict whether somebody might be pregnant, or getting married, or has just lost a loved one. (The technique is already being used to place police officers in locations where crime is more likely to occur.) Zuboff notes that Google Maps, to take another example, recently introduced a feature that suggests a destination based on what it knows about users before they’ve even indicated where they’re going. “Maybe it picked up from an email that you’ve recently moved and need to get tools for the workshop,” Zuboff explains, “so it suggests a hardware store that you can go to. Would you think that hardware store is an innocent recipient of Google’s largess?”

The stakes are getting higher. She points to the wildly popular game Pokémon Go, which rewards players with virtual experiences. “I can send you to the dry cleaner, I can send you to the car mechanic, I can send you to the restaurant—anywhere I want to with this reward system. All these entities pay to play in the new marketplace for behavior.” Even before the game launched in Japan, McDonald’s had paid to designate its 3,000 restaurants as destinations (called “gyms”) within the game. The game’s developer is Niantic, formerly a lab within Google run by John Hanke, who also led Google’s geolocation services. (The core mapping technology was funded by the CIA’s venture-capital arm.) Having mapped a virtual world onto the physical one with Google Maps and Google Earth, use of smartphone location services closes the loop, populating that cyber domain with people in the physical world.

At the moment, the project is “allowing the public to get exposed to this kind of interaction, and become habituated to it,” says Zuboff. Pokémon players have fun, without realizing that it is also another form of social and economic control.

“I think it’s very important to connect the dots,” she explains, “and see that all of this makes sense when we frame it as a new form of capitalism that has particular requirements in order to be successful. Technology is never a thing in itself. It is always designed and deployed to reflect the aims and needs of a particular economic order. Suddenly, we can see that these ventures are part of a cohesive, internally consistent, and coherent economic logic. And when we can do that, then I think as a society we are far better positioned to increase and expand our advocacy and reform efforts, [to figure out how] to successfully tether information-based capitalism to pro-social and pro-democratic values and principles,” rather than solely serving third-party economic interests. “The challenge of surveillance capitalism becomes part of the larger historical project of harnessing capitalism to society.”

Surveillance capitalism, driven by the profit motive, “has been able to gather to itself concentrations of knowledge and power that exceed anything imaginable even a few years ago,” she says. “One of its consequences is the deletion of privacy. But if we fight this only on the grounds of privacy, we’re bound to meet with constant frustration and limited success. This is an economic logic that must delete privacy in order to be successful.” This is why, despite the “brilliant and heroic scholarship” that has come out of Berkman, and despite the “brilliant and heroic advocacy that has come from many quarters in the United States, including Marc Rotenberg and his amazing team at EPIC,…this thing keeps growing.”

History may suggest better ways to respond, she says. “We have experience in taming capitalism, and binding it to pro-social and pro-democratic principles. In the late nineteenth century, the Gilded Age, there was no protection for labor, and capital had complete freedom to do whatever it wanted to do with resources, with people, with communities, producing extreme economic and social inequality along the way.” The twentieth century “was a long journey to correct that imbalance.” The social challenge now, she says, is to insist on a new social contract, with its own twenty-first century legislative and regulatory innovations, that harnesses information capitalism to democratic values and norms. This begins, she believes, with deepening social understanding and awareness. “We have to create the political context in which privacy can be successfully defended, protected, and affirmed as a human right. Then we’d have a context in which the privacy battles can be won.”

