What do you call it when employers use Facebook’s advertising platform to show certain job ads only to men or just to people between the ages of 25 and 36?
How about when Google collects the whereabouts of its users — even after they deliberately turn off location history?
Or when AT&T shares its mobile customers’ locations with data brokers?
American policymakers often refer to such issues using a default umbrella term: privacy. That at least is the framework for a Senate Commerce Committee hearing scheduled for this Wednesday titled “Examining Safeguards for Consumer Data Privacy.”
After a spate of recent data-mining scandals — including Russian-sponsored ads on Facebook aimed at influencing African-Americans not to vote — some members of Congress are now rallying behind the idea of a new federal consumer privacy law.
At this week’s hearing, legislators plan to ask executives from Amazon, AT&T, Google, Twitter and other companies about their privacy policies. Senators also want the companies to explain “what Congress can do to promote clear privacy expectations without hurting innovation,” according to the hearing notice.
There’s just one flaw with this setup.
In a surveillance economy where companies track, analyze and capitalize on our clicks, the issue at hand isn’t privacy. The problem is unfettered data exploitation and its potential deleterious consequences — among them, unequal consumer treatment, financial fraud, identity theft, manipulative marketing and discrimination.
In other words, asking companies whose business models revolve around exploiting data-based consumer-influence techniques to explain their privacy policies seems about as useful as asking sharks to hold forth on veganism.
“Congress should not be examining privacy policies,” Marc Rotenberg, the executive director of the Electronic Privacy Information Center, a prominent digital rights nonprofit, told me last week. “They should be examining business practices. They should be examining how these firms collect and use the personal data of customers, of internet users.”
The Senate Commerce hearing, however, doesn’t seem designed to investigate commercial surveillance and influence practices that might merit government oversight.
For one thing, only industry executives are currently set to testify. And most of them are lawyers and policy experts, not engineers versed in the mechanics of data-mining algorithms.
Companies are sending their “policy and law folks to Washington to make the government go away — not the engineering folks who actually understand these systems in depth and can talk through alternatives,” Jonathan Mayer, an assistant professor of computer science and public affairs at Princeton University, told me.
That may be because Congress is under industry pressure.
California recently passed a new privacy law that would give Californians some power over the data companies hold on them. Industry groups hope to defang that statute by pushing Congress to pass federal privacy legislation that would overrule state laws. The industry-stacked Senate hearing lineup seems designed to pave the way for that, said Danielle Citron, a law professor at the University of Maryland.
Frederick Hill, a spokesman for the Senate Commerce Committee, said the group planned future hearings that would include other voices, such as consumer groups. But “for the first hearing,” Mr. Hill said, “the committee is bringing in companies most consumers recognize to make the discussion about privacy more relatable.”
What is at stake here isn’t privacy, the right not to be observed. It’s how companies can use our data to invisibly shunt us in directions that may benefit them more than us.
Many consumers know that digital services and ad tech companies track and analyze their activities. And they accept, or are at least resigned to, data-mining in exchange for conveniences like customized newsfeeds and ads.
But revelations about Russian election interference and Cambridge Analytica, the voter-profiling company that obtained information on millions of Facebook users, have made it clear that data-driven influence campaigns can scale quickly and cause societal harm.
And that leads to a larger question: Do we want a future in which companies can freely parse the photos we posted last year, or the location data from the fitness apps we used last week, to infer whether we are stressed or depressed or financially strapped or emotionally vulnerable — and take advantage of that?
“Say I sound sick when I am talking to Alexa, maybe they would show me medicine as a suggestion on Amazon,” said Franziska Roesner, an assistant professor of computer science at the University of Washington, using a hypothetical example of Amazon’s voice assistant. “What happens when the inferences are wrong?”
(Amazon said it does not use Alexa data for product recommendations or marketing.)
It’s tough to answer those questions right now when there are often gulfs between the innocuous ways companies explain their data practices to consumers and the details they divulge about their targeting techniques to advertisers.
Here’s what it means in practice: AT&T can find out which subscribers have indigestion — or at least which ones bought over-the-counter drugs to treat it.
In a case study for advertisers, AT&T describes segmenting DirecTV subscribers who bought antacids and then targeting them with ads for the medication. The firm was also able to track those subscribers' spending. Households that saw the antacid ads spent 725 percent more on the drugs than a national audience.
But consumer advocates hope senators will press AT&T, Amazon and other companies this week to provide more details on their consumer-profiling practices. “We want an inside look on the analytics and how they’re categorizing, ranking, rating and scoring us,” Professor Citron said.
Given the increased public scrutiny, some companies are tweaking their tactics.
AT&T recently said it would stop sharing users’ location details with data brokers. Facebook said it had stopped allowing advertisers to use sensitive categories, like race or religion, to exclude people from seeing ads. Google created a feature for users to download masses of their data, including a list of all the sites Google has tracked them on.
Government officials in Europe are not waiting for companies to police themselves. In May, a tough new European Union data protection law, the General Data Protection Regulation, took effect, curbing some data-mining.
It requires companies to obtain explicit permission from European users before collecting personal details on sensitive subjects like their religion, health or sex life. It gives European users the right to see all of the information companies hold about them — including any algorithmic scores or inferences.
European users also have the right not to be subject to completely automated decisions that could significantly affect them, such as credit algorithms that use a person’s data to decide whether a bank should grant him or her a loan.
Of course, privacy still matters. But Congress now has an opportunity to press companies like Amazon on broader public issues. It could require them to disclose exactly how they use data extracted from consumers. And it could force companies to give consumers some rights over that data.