
SAN FRANCISCO - OCTOBER 24: Dustin Moskovitz, co-founder of Facebook, delivers his keynote address at the CTIA WIRELESS I.T. & Entertainment 2007 conference October 24, 2007 in San Francisco, California. The conference is showcasing the latest in mobile technology and will run through October 25. (Photo by Kimberly White/Getty Images)

Get Ready for the Next Big Privacy Backlash Against Facebook

May 22, 2017

DATA MINING IS such a prosaic part of our online lives that it’s hard to sustain consumer interest in it, much less outrage. The modern condition means constantly clicking against our better judgement. We go to bed anxious about the surveillance apparatus lurking just beneath our social media feeds, then wake up to mindlessly scroll, Like, Heart, Wow, and Fave another day.

But earlier this month, The Australian uncovered something that felt like a breach in the social contract: a leaked confidential document prepared by Facebook that revealed the company had offered advertisers the opportunity to target 6.4 million younger users, some only 14 years old, during moments of psychological vulnerability, such as when they felt “worthless,” “insecure,” “stressed,” “defeated,” “anxious,” and like a “failure.”

The 23-page document had been prepared for a potential advertiser and highlighted Facebook’s ability to micro-target ads down to “moments when young people need a confidence boost.” According to The Australian’s report, Facebook had been monitoring posts, photos, interactions, and internet activity in real time to track these emotional lows. (Facebook confirmed the existence of the report, but declined to respond to questions from WIRED about which types of posts were used to discern emotion.)

The day the story broke, Facebook quickly issued a public statement arguing that the premise of the article was “misleading” because “Facebook does not offer tools to target people based on their emotional state.” The social network also promised that the research on younger users “was never used to target ads.” The analysis on minors did not follow Facebook’s research review protocols, the company wrote, so Facebook would be “reviewing the details to correct the oversight,” implying that the analysis had not been sanctioned by headquarters in Menlo Park.

A spokesperson for Facebook tells WIRED that the research had been commissioned by an advertiser. But Facebook’s public statement did not make that clear or explain how the research on minors ended up in a presentation to potential advertisers.

The statement said only that the analysis had been conducted by “an Australian researcher.” But the leaked presentation obtained by The Australian was prepared by two Australian Facebook employees, both managers who connect Facebook to ad agencies.

Privacy advocates and social media researchers, some of whom signed a public letter to Mark Zuckerberg about the ethical implications of tracking minors, tell WIRED the leak arrived at a crucial time in their campaign for stricter guidelines around consumer surveillance. Between the political fallout of psychographic profiling on Facebook and recent fines against the social network for breaking European laws about data collection, they hope this controversy could have lasting implications on the way the $400 billion behemoth tracks sensitive data.

Welcome to the next phase of Facebook privacy backlash, where the big fear isn’t just what Facebook knows about its users but whether that knowledge can be weaponized in ways those users cannot see, and would never knowingly allow.

Dear Mark Zuckerberg
Five years ago, Facebook conducted a mass experiment in manipulating emotions on nearly 700,000 unsuspecting users. The company tweaked News Feeds to show random users more positive or negative content, to see if it made those users happy or sad. In that case, there was no leaked document, no smoking gun: The results were published openly in an academic journal in 2014. In response, there was an outcry over what seemed like social engineering; the company said it had been “unprepared for the reaction” and strengthened its research review process accordingly.

A spokesperson for Facebook tells WIRED that the research referenced in the newly surfaced document complied with Facebook’s privacy and data policies, such as anonymizing the data by removing any personally identifiable information, but it did not meet those enhanced research protocols, which are supposed to require additional review for studies of “sensitive groups,” like minors.

A week after the document was leaked, more than two dozen nonprofits from the US, Europe, Brazil, and Mexico wrote a blistering public letter to Zuckerberg arguing that Facebook should release the document because the health and ethical implications were “far too concerning to keep concealed.” Facebook has become a “powerful cultural and social force in the lives of young people,” they wrote, and the mega-corporation could not just chalk up the mistake to a deviation from its research protocols. Marketers “and others” could use this research to “take advantage of young people by tapping into unique developmental vulnerabilities for profit,” the letter warned. (WIRED reached out to The Australian’s media editor, Darren Davidson, who obtained the leaked document, to see if the paper has plans to publish it in full, but did not receive an immediate response.)

“We take the concerns raised by these organizations seriously,” a Facebook spokesperson said in response to questions from WIRED. “Last week we reached out to several of these groups to discuss the research, and together agreed to set a meeting. We look forward to working with them.”

Jeff Chester, executive director of the Center for Digital Democracy, one of the nonprofits that signed the letter, will be present at the Facebook meeting. “I’ll be interested to see how honest they are,” he tells WIRED. “Are they going to acknowledge all of the similar research that they already do? Or what it means for Facebook and Instagram users worldwide? Are they going to talk about the fact that they are continually expanding the ability of their platform to identify and track consumers on behalf of powerful advertisers?”

Chester keeps close tabs on Facebook’s increasingly sophisticated marketing capabilities, a toolkit that includes neuro-marketing and biometric research techniques that can be used to test bodily reactions to ads, like responses in the brain, heart, eye movement, or memory recall. Chester pointed to a recent report from Facebook IQ—a research division within the social network designed to help marketers—that used an EEG headset to measure social connections and feelings in virtual reality.

“When Facebook said this was an aberration, we knew that was not true, because it squarely fits into what Facebook does all the time in terms of analyzing the emotional reactions of individuals,” including vulnerable groups like young people, black people, and Latinos, Chester says. “Facebook is one big sentiment-mining apparatus.”

If the users in question weren’t teenagers—or if the emotion wasn’t insecurity—Facebook’s public statement might have been sufficient; the uproar from privacy advocates may have been duly noted, then promptly forgotten.

Instead, as Kathryn Montgomery, a professor at American University and the director of the school’s communications studies division—who is married to Chester—tells WIRED, The Australian’s report served as “a flashpoint that enables you to glimpse Facebook’s inner workings, which in many ways is about monetization of moods.”

A New Advertising Age
This may sound like a lot of sturm und drang for making money off of teenage insecurity, a mass-market practice that has been around since at least World War II. The entire advertising industry is, after all, premised on the ability to leverage a consumer’s emotional state. But it’s one thing to show makeup ads to people who follow Kylie Jenner on Instagram; it’s another to use computational advertising techniques to sell flat-tummy tea to 14-year-olds at the exact moment they’re feeling their worst.

In fact, Montgomery and Chester have been fighting to protect young people’s digital privacy for decades. The couple helped pass the Children’s Online Privacy Protection Act (COPPA) in 1998, which restricts data collection and online marketing from targeting children under 13 years old. The legislation was created to prevent the first wave of dotcom companies from engaging in deceptive practices, such as using games and contests to collect information about children without parental permission. The same year COPPA passed, the FTC filed its first internet privacy complaint against GeoCities, for misleading both child and adult consumers about how it was using their personal information. Since then, companies big, small, and fictional have racked up fines.

For its part, Facebook has been open and cooperative in responding to concerns about minors in the past. After The Wall Street Journal reported in 2012 that Facebook was considering allowing children younger than 13 to open accounts, the company met with privacy advocates, who helped convince it to continue barring children under 13 from the platform.

Facebook also understands that minors require additional protections. By default, it turns off location sharing for minors, and offers warnings before young people share a post publicly. Indeed, Facebook sometimes uses its tracking capabilities to safeguard users, such as newly released artificially intelligent suicide prevention tools that “help people in real time.”

“We do, of course, want to try to help people in our community who are at risk, including if their friends report to us that they may be considering self-harm, but that’s not related to the incorrect allegations that were made in The Australian’s piece,” a Facebook spokesperson tells WIRED.

Regardless, advances in ad targeting may require more default protections. Marketers want to pinpoint people in an “intimate, ongoing, interactive way,” says Chester. As people use more and more devices across different networks, companies that collect this information have amassed bank vaults of data on users’ locations, recent life events, affinity groups, or, theoretically, emotional states.

“This is the holy grail of advertising,” says Saleem Alhabash, an assistant professor at Michigan State University. A consumer has “a particular need or motivation at this particular moment in time, and you are giving them messages that feed exactly to what they’re feeling. The return on investment is huge.”

To that end, Alhabash believes companies should, for the most part, have the freedom to conduct business. “I do not think that advertising in general is manipulative,” he says. “Where it becomes manipulative is when certain parts of our personal information get used against us to make us crave and want things that we do not want.” (Alhabash worked on a study about how Facebook ads for alcohol can increase the desire to drink.)

Amid a swirl of recent concerns over how Facebook can influence our actions in the real world and the ways that micro-targeting can be weaponized—such as voter-suppression campaigns targeting African Americans—the leaked document seems like another sign that fears about the company have taken on a different shape.

“We’ve entered a new phase because of the controversy in promoting fake news, in disseminating hate speech, in Facebook’s outsized influence in campaigns that resulted in Brexit, the election of Trump, and other political developments,” Chester explains.

Europe Plays Hardball
Unfortunately for Facebook, the Australian ad targeting controversy cropped up just as European regulators have been cracking down on social networks, charging that they “aren’t taking complaints from their users seriously enough.” That’s the reason Germany’s justice minister cited in March when he proposed a law that would fine social media companies up to €50 million if they don’t respond quickly enough to reports of illegal content or hate speech.

This week, the focus has shifted to Facebook’s privacy violations. On Tuesday, data protection authorities (DPAs) from France, the Netherlands, Spain, Germany, and Belgium issued a joint statement detailing the results of national investigations into Facebook for privacy issues, including processing personal data for advertising purposes.

France and the Netherlands handed down what amounted to a slap on the wrist and a small fine, but this is just the preview. Europe’s strict privacy laws are about to get even stricter. It’s all part of a growing sense in the EU that it’s time to throw a bridle on Silicon Valley.

In 368 days (regulators have posted a handy countdown clock) the General Data Protection Regulation will go into effect for the European Union. Once the new rules are in place, companies will be forced to take privacy more seriously, if only because of the fines, David Martin, senior legal officer at the European Consumer Organization, tells WIRED by email. France fined Facebook €150,000 for unlawfully tracking internet users to display targeted advertising, the maximum it can currently impose. But once the new rules are in place, the fines could be as high as €20 million, or 4 percent of the company’s global revenue, whichever is higher, Martin says.
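
The penalty cap Martin describes is simply the larger of two figures: a flat €20 million, or 4 percent of worldwide annual revenue. A minimal sketch of that arithmetic (the function name and revenue figures are ours, purely for illustration):

```python
def gdpr_max_fine(global_annual_revenue_eur: float) -> float:
    """Upper bound on a GDPR fine for the most serious infringements:
    EUR 20 million or 4% of worldwide annual revenue, whichever is higher."""
    return max(20_000_000, 0.04 * global_annual_revenue_eur)

# For a large company with EUR 40 billion in global revenue, the 4% prong wins:
print(gdpr_max_fine(40e9))   # 1600000000.0 (EUR 1.6 billion)

# For a small firm with EUR 10 million in revenue, the EUR 20M floor applies:
print(gdpr_max_fine(10e6))   # 20000000
```

Either way, the ceiling dwarfs the €150,000 maximum France could impose under its pre-GDPR rules.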

For companies like Google and Facebook, with market capitalizations in the hundreds of billions, compliance might be a bigger issue than fines. But American advocates hope that some of that momentum will be contagious, pressuring Silicon Valley’s oligarchy into creating stronger safeguards for sensitive data. Says Chester, “The feedback I got from my colleagues in Europe was, ‘Look, you guys have that letter. We have laws and rules that need to be enforced.’”

In the joint statement on Tuesday, the Dutch authorities reported that Facebook violated data protection laws for its 9.6 million users in the Netherlands by using sensitive personal data without the users’ explicit consent, including serving targeted ads based on users’ sexual preferences. Facebook changed its practices to comply, and the Dutch DPA said it will issue a sanction if it finds out the violations have not stopped.

In response to questions from WIRED about the sanctions, a different Facebook spokesman says that the company respectfully disagrees with the findings by the French and Dutch authorities. Facebook maintains that its practices have been compliant, but the spokesperson says that Facebook welcomes the dialogue.

“At Facebook, putting people in control of their privacy is at the heart of everything we do,” the spokesperson tells WIRED. “Over recent years, we’ve simplified our policies further to help people understand how we use information to make Facebook better. We’ve built teams of people who focus on the protection of privacy—from engineers to designers—and tools that give people choice and control.”

And yet the findings from the investigations don’t sound that far off from the leaked Australian document, which is partly what made the specter of preying on teen insecurity so unsettling.

It’s not a dystopian nightmare. It’s just a few clicks away from the status quo.



WhatsApp WARNING – Why you should NOT use this ‘new’ feature

May 17, 2017

WHATSAPP users are being warned about a new scam which suggests the popular service can now be customised with a range of colours. Here’s why you shouldn’t click on this link.

WhatsApp users across the world are being urged not to click on a new spam message which teases an option to customise the app.

The message, which was first spotted by Reddit user yuexist, states: “Now you can change your WhatsApp and leave it with your favourite colour.”

As soon as the message is received on a phone, fans of the popular app are then asked to share the message with 12 friends to activate and verify the new custom colour feature.

Once users have ‘verified’ themselves, they are then told that WhatsApp’s colours can only be used via a desktop and, to make the new feature work, they are asked to install an extension called BlackWhats from the Chrome Web Store.

However, the message and extension actually appear to be fake, and clicking through could leave users with nasty adware installed while spreading the scam to 12 of their friends at the same time.

This isn’t the first time WhatsApp users have been victims of scam messages.

Earlier this year a message was sent millions of times stating that, from today, “WhatsApp will become chargeable.”

It then went on to warn users that the only way to stop the daily fee is to forward the message to “at least 10 contacts.”

With WhatsApp revealing it will always remain free this message was clearly a hoax and users were warned not to forward it on to their friends.

In its advice on how to avoid such hoaxes, WhatsApp suggests users don’t interact with messages that instruct them to forward the message on, those which promise rewards, or any which claim to be from WhatsApp itself.

“We always advise you to block the sender, disregard the message and delete it,” the company said.

Although this latest colour-changing feature might be fake, there is a new WhatsApp extension that is 100 per cent real.

The Opera browser has just rolled out a major update which now includes Facebook Messenger, WhatsApp and Telegram icons within the browser.

This allows users to access these apps with one simple click.

After you log in to the social sites, there are two ways of using this feature: you can open it in overlay or pin it side-by-side with your current tab. Pinning a communicator allows you to combine online chatting with a full browsing experience.

You can also decide which of the messengers to enable in your sidebar by selecting them under ‘Customise start page.’



Kids, parents alike worried about privacy with internet-connected toys

May 11, 2017

Hello Barbie, CogniToys Dino and other toys connected to the internet can joke around with children and respond in surprising detail to questions posed by their young users. The toys record the voices of children who interact with them and store those recordings in the cloud, helping the toys become “smarter.”

As Wi-Fi-enabled toys like these compete for attention in the home, a new analysis finds that kids are unaware of their toys’ capabilities, and parents have numerous privacy concerns.

University of Washington researchers have conducted a new study that explores the attitudes and concerns of both parents and children who play with internet-connected toys. Through a series of in-depth interviews and observations, the researchers found that kids didn’t know their toys were recording their conversations, and parents generally worried about their children’s privacy when they played with the toys.

“These toys that can record and transmit are coming into a place that’s historically legally very well-protected ― the home,” said co-lead author Emily McReynolds, associate director of the UW’s Tech Policy Lab. “People have different perspectives about their own privacy, but it’s crystalized when you give a toy to a child.”

The researchers presented their paper May 10 at the CHI 2017 Conference on Human Factors in Computing Systems.

Though internet-connected toys have taken off commercially, their growth in the market has not been without security breaches and public scrutiny. VTech, a company that produces tablets for children, was storing personal data of more than 200,000 children when its database was hacked in 2015. Earlier this year, Germany banned the Cayla toy over fears that personal data could be stolen.

It’s within this landscape that the UW team sought to understand the privacy concerns and expectations kids and parents have for these types of toys.

The researchers conducted interviews with nine parent-child pairs, asking each of them questions ― ranging from whether a child liked the toy and would tell it a secret to whether a parent would buy the toy or share what their child said to it on social media.

They also observed the children, all aged 6 to 10, playing with Hello Barbie and CogniToys Dino. These toys were chosen for the study because they are among the industry leaders for their stated privacy measures. Hello Barbie, for example, has an extensive permissions process for parents when setting up the toy, and it has been complimented for its strong encryption practices.

The resulting paper highlights a wide selection of comments from kids and parents, then makes recommendations for toy designers and policymakers.

Most of the children participating in the study did not know the toys were recording their conversations. Additionally, the toys’ lifelike exteriors probably fueled the perception that they are trustworthy, the researchers said, whereas kids might not have the tendency to share secrets and personal information when communicating with similar tools not intended as toys, such as Siri and Alexa.

“The toys are a social agent where you might feel compelled to disclose things that you wouldn’t otherwise to a computer or cell phone. A toy has that social exterior which might fool you into being less secure on what you tell it,” said co-lead author Maya Cakmak, an assistant professor at the Allen School. “We have this concern for adults, and with children, they’re even more vulnerable.”

Some kids were troubled by the idea of their conversations being recorded. When one parent explained how the child’s conversation with the doll could end up being shared widely on the computer, the child responded: “That’s pretty scary.”

At minimum, toy designers should create a way for the devices to notify children when they are recording, the researchers said. Designers could consider recording notifications that are more humanlike, such as having Hello Barbie say, “I’ll remember everything you say to me” instead of a red recording light that might not make sense to a child in that context.

The study found that most parents were concerned about their child’s privacy when playing with the toys. They universally wanted parental controls such as the ability to disconnect Barbie from the internet or control the types of questions to which the toys will respond. The researchers recommend toy designers delete recordings after a week’s time, or give parents the ability to delete conversations permanently.

A recent UW study demonstrated that video recordings that are filtered to preserve privacy can still allow a tele-operated robot to perform useful tasks, such as organize objects on a table. This study also revealed that people are much less concerned about privacy ― even for sensitive items that could reveal financial or medical information ― when such filters are in place. Speech recordings on connected toys could similarly be filtered to remove identity information and encode the content of speech in less human-interpretable formats to preserve privacy, while still allowing the toy to respond intelligibly.

The researchers hope this initial look into the privacy concerns of parents and kids will continue to inform both privacy laws and toy designers, given that such devices will only continue to fill the market and home.

“It’s inevitable that kids’ toys, as with everything else in society, will have computers in them, so it’s important to design them with security measures in mind,” said co-lead author Franziska Roesner, a UW assistant professor at the Allen School. “I hope the security research community continues to study these specific user groups, like children, that we don’t necessarily study in-depth.”

Other co-authors are Sarah Hubbard and Timothy Lau of the Information School and Aditya Saraf of the Allen School of Computer Science & Engineering.

The study was funded by the Consumer Privacy Rights Fund at the Rose Foundation for Communities and the Environment and by the UW’s Tech Policy Lab.



NSA spied on millions of US communications in 2016

May 3, 2017

The US National Security Agency (NSA) collected more than 151 million records of Americans’ phone calls last year, even after Congress limited its ability to collect bulk call records.

A report from the office of Director of National Intelligence Dan Coats presented the first measure of the effects of the 2015 USA Freedom Act, which limited the NSA to collecting the phone records and contacts of people that the US and allied intelligence agencies suspect may have ties to “terrorism”.

The NSA collected the 151 million records even though it had warrants from the secret Foreign Intelligence Surveillance Court to spy on only 42 suspects in 2016, in addition to a handful identified the previous year, the report said.


Because the 151 million would include multiple calls made to or from the same phone numbers, the number of people whose records were collected would be much smaller, US officials said. They said they had no breakdown of how many individuals’ phone records were among those collected.
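
The officials’ point is that a record count overstates the number of people surveilled, because the same phone numbers recur across many records. A minimal sketch of that deduplication, using invented numbers:

```python
# Hypothetical call records as (caller, recipient) pairs; the same
# numbers appear in several records.
records = [
    ("+1-202-555-0143", "+1-310-555-0127"),
    ("+1-202-555-0143", "+1-310-555-0127"),  # repeat call, same pair
    ("+1-202-555-0143", "+1-415-555-0189"),
    ("+1-310-555-0127", "+1-202-555-0143"),  # return call
]

# Collapse the pairs into the set of distinct phone numbers involved.
distinct_numbers = set()
for caller, recipient in records:
    distinct_numbers.update((caller, recipient))

print(len(records))           # 4 records collected...
print(len(distinct_numbers))  # ...covering only 3 distinct numbers
```

At the scale of 151 million records, the gap between records and distinct individuals could be far larger, which is exactly the breakdown officials said they did not have.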

Politicians have repeatedly asked US intelligence agencies to tell them how many Americans’ emails and calls are vacuumed up by warrantless government surveillance programmes.

“This report provides a small window into the government’s surveillance activities, but it leaves vital questions unanswered,” Senator Ron Wyden said in a statement. “At the top of the list is how many Americans’ communications are being swept up.”

The NSA has been gathering a vast quantity of telephone “metadata” – records of callers’ and recipients’ phone numbers and the times and durations of the calls – since the September 11, 2001, attacks.

The spy agency says it doesn’t collect the content of the communications.
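
The distinction the agency draws is between content and metadata. As an illustration only (the field names are ours, not the NSA’s actual schema), a call-detail record captures who, when, and how long, with no field for what was said:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallRecord:
    """Illustrative telephone metadata record: no audio, no transcript."""
    caller: str        # originating phone number
    recipient: str     # terminating phone number
    start: datetime    # when the call began
    duration_s: int    # how long it lasted, in seconds

# A single hypothetical record: reveals the fact and shape of the call,
# but carries nothing about its content.
rec = CallRecord("+12025550143", "+13105550127",
                 datetime(2016, 3, 4, 14, 30), 420)
```

Even without content, researchers have long noted that such records, aggregated over time, can reveal a great deal about a person’s associations and routines.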

US officials on Tuesday argued the 151 million records collected last year were tiny compared with the number gathered under procedures that were stopped after former NSA contractor Edward Snowden revealed the surveillance programme in 2013.

A report in 2014 suggested potentially “billions of records per day” were being collected.

“This year’s report continues our trajectory toward greater transparency, providing additional statistics beyond what is required by law,” said Timothy Barrett, spokesman for the Office of the Director of National Intelligence.

The new report came amid allegations – recently repeated by US President Donald Trump – that former president Barack Obama ordered warrantless surveillance of his communications, and former national security adviser Susan Rice asked the NSA to “unmask” the names of US people caught in the surveillance.

Both Republican and Democratic members of the congressional intelligence committees have said so far they have found no evidence to support either allegation.

The report said the names of 1,934 “US persons” were “unmasked” last year in response to specific requests, compared with 2,232 in 2015. But it did not identify who requested the names or on what grounds.



OUTRAGE OVER ONLINE PRIVACY RULE CHANGE

April 18, 2017

COEUR d’ALENE — Not in the world of our internet service.

That’s a common response from local providers on a new federal law that killed an online privacy regulation and could allow internet provider giants to sell the web browsing history of their customers to advertisers.

Mike Kennedy, president of Intermax Networks, a Coeur d’Alene-based provider, vowed his company will never sell client data.

“Never, not once and we won’t,” he said. “We are stunned at the change in this law.

“We take our customers’ privacy seriously as all internet service providers should. Unfortunately, the big national companies have carried the day in Washington on this. So, in response, we are making a pledge to our customers that their browsing data is not for sale with Intermax.”

The Federal Communications Commission rule issued in October was designed to give consumers greater control over how internet service providers share information. But critics said the rule would have stifled innovation.

Within the first day of taking its firm stance to not sell client data, Kennedy said Intermax received more than 150 messages of appreciation from its customers.

“I’ve never had such a big response to any client communication before,” he said. “But most people had no real idea what was happening until we communicated our position to them. So this was both educational and a clear statement.”

Other local providers have a similar response to the change.

“With this change, the internet browsing public will lose the power to stop big internet service and telecommunications providers from invading their privacy and profiting from personal information about their lives, families and children without their express consent,” Jake Montgomery, a manager at Intechtel, wrote to The Press in a statement.

“Winning the provisions that protected consumers from this exact thing was a hard-fought battle against big corporate interests. It saddens us to see these protections evaporated with nothing more than the stroke of a pen.”

Javier Mendoza, Frontier Communications spokesman, said internet users are encouraged to review their service providers’ privacy policy to ensure it’s something people are comfortable with.

“Although there have been reports that personal data may be available for sale, that is not true at Frontier,” Mendoza told The Press. “Frontier does not track or sell individual customer browsing history. Frontier remains committed to safeguarding our customers’ privacy and to transparency …”

A spokesperson for Spectrum, formerly Time Warner Cable, in Coeur d’Alene, couldn’t be reached for comment on Thursday.

Undoing the FCC regulation leaves people’s online information in a murky area. Some experts say federal law still requires broadband providers to protect customer information, but it doesn’t spell out how or what companies must do. That’s what the FCC rule aimed to do.

Kennedy said ISPs should not be preying on people’s browsing history, infringing on their privacy and selling it for financial gain.

“This bill was passed because big companies spent big money on lobbyists,” he said. “But it’s a drop in the bucket for them compared to what they think they will get by selling user data and their customers’ information.

“Some have argued that there were procedural matters to clear up between the FCC and FTC (Federal Trade Commission) as the reason for doing it. I contend there are plenty of other ways to have accomplished that without repealing the privacy rule.”

Tags: , , ,
