A newly discovered patent application shows Facebook has come up with plans to potentially spy on its users through their phone or laptop cameras—even when they’re not turned on. This could allow it to send tailored advertisements to its nearly two billion members. The application, filed in 2014, says Facebook has thought of using “imaging components,” like a camera, to read the emotions of its users and send them catered content, like videos, photos, and ads.
Facebook has presented a function for generating »heatmaps« of users during e.g. natural disasters. TechCrunch explains:
A new initiative from Facebook will provide aid organizations with location data for users in affected areas, such as where people are marking themselves safe and from where they are fleeing. It shows the immense potential of this kind of fine-grained tracking, but inescapably resurfaces questions of just what else the company could do with the data.
Naturally, it is a good thing if Facebook's collected data can be used to save lives.
But you should remember that this sort of technology can also be used for surveillance, and that similar data can be sold for commercial purposes without your explicit consent.
From Facebook's content moderation guidelines:
We aim to allow as much speech as possible but draw the line at content that could credibly cause real world harm. People commonly express disdain or disagreement by threatening or calling for violence in generally facetious and unserious ways.
We aim to disrupt potential real world harm caused from people inciting or coordinating harm to other people or property by requiring certain details to be present in order to consider the threat credible. In our experience, it’s this detail that helps establish that a threat is more likely to occur.
Ars Technica: Facebook content moderation guidelines leaked »
Facebook is having a hard time lately amid claims of fake news, political bias and sexism. The European Union considered legislation to encourage a more unified response to such postings and Germany supports fines for social networks that ignore hate speech. Similarly, today an Austrian appeals court ruled that Facebook must delete hate postings written about the leader of the country’s Green party — and not just in Austria.
The original case was filed by the Austrian political party last December around posts written by a fake profile that called MP Eva Glawischnig a “rotten traitor” and a “corrupt tramp.” The Green party alleges that Facebook had not removed the posts after several requests to do so.
Rotten traitor and corrupt tramp… Do such statements really cross the red line nowadays?
Our study of search and politics in seven nations – which surveyed the United States, Britain, France, Germany, Italy, Poland and Spain in January 2017 – found these concerns to be overstated, if not wrong. In fact, many internet users trust search to help them find the best information, check other sources and discover new information in ways that can burst filter bubbles and open echo chambers. (…)
We found that the fears surrounding search algorithms and social media are not irrelevant – there are problems for some users some of the time. However, they are exaggerated, creating unwarranted fears that could lead to inappropriate responses by users, regulators and policymakers.
The Conversation » Fake news, echo chambers and filter bubbles: Underresearched and overhyped »
In Switzerland, the first trial based on likes on Facebook is currently underway. According to The Local, a 45-year-old from Zurich has been charged with defamation for liking Facebook posts that accused the plaintiff of being anti-Semitic. So basically, a man is being prosecuted for liking something somebody else posted.
The Local.ch: Man faces court for ‘liking’ Facebook posts »
Pakistan has asked Facebook and Twitter to help identify Pakistanis suspected of blasphemy so it can prosecute them or pursue their extradition.
Under the country’s strict blasphemy laws, anyone found to have insulted Islam or the prophet Muhammad can be sentenced to death.
Ten days ago, Facebook CEO Mark Zuckerberg wrote a very long policy letter that has been nagging at my mind ever since. (Link»)
The ambition is – of course – to make Facebook even bigger and more important in our lives. This also means making the totally dominant social media player even bigger and more important in our lives. I’m not sure that I’m comfortable with that.
Facebook is a very special sort of social engineering, an invisible force guiding us through social relations, news, politics, community activities, business, and culture. And here I get the impression that Facebook would like to become the curator of our lives.
Going forward, we will measure Facebook’s progress with groups based on meaningful groups, not groups overall. This will require not only helping people connect with existing meaningful groups, but also enabling community leaders to create more meaningful groups for people to connect with.
So, some Facebook groups are to be more important than others? One factor defining a »meaningful group« seems to be »real« off-line events. As a small player in an international network promoting a free and open internet along with civil rights and liberty, I find this a disheartening approach. Almost all our work is done online, with the occasional international conference. Nevertheless, together we make a difference – and our work is often the only way to make a real impact when it comes to politics and law making in these fields. Should we matter less?
And what about this:
The guiding principles are that the Community Standards should reflect the cultural norms of our community, that each person should see as little objectionable content as possible, and each person should be able to share what they want while being told they cannot share something as little as possible. The approach is to combine creating a large-scale democratic process to determine standards with AI to help enforce them.
For those who don’t make a decision, the default will be whatever the majority of people in your region selected, like a referendum. Of course you will always be free to update your personal settings anytime.
I see the point. But wouldn’t this create new »filter bubbles« based on geography and cultural traditions? Will this not hamper human intellectual evolution? Will this not contribute to conformity? Isn’t the beauty of the Internet that it is truly global? Ars Technica dubs the approach outlined by Zuckerberg »gerrymandering the Internet«.
I would also say that this would be a way to subordinate the individual to majority rule by default settings, reducing freedom and moving power away from the private person to a faceless collective. And – are we really comfortable with AI handling such delicate matters?
Wouldn’t this be a dream for totalitarian regimes – to be able to single out the ones who have changed their settings in ways that are no longer in line with most other people?
Ars Technica makes another valid point:
Zuckerberg adds that he’s thinking of creating a “worldwide voting system” for Facebook users which could then be used as a template for how “collective decision-making may work in other aspects of the global community.” That’s a vague formulation. But coming on the heels of his comments about politicians with Facebook engagement, he sounds like he’s floating the idea of turning Facebook into the infrastructure for managing elections.
Putting our democratic system in the hands of Facebook? Really? I don’t think so.
And don’t forget to put the Zuckerberg manifesto in context. This is a company that has the creator of a very powerful tool for mass surveillance analysis (used by e.g. the FBI, CIA, NSA and GCHQ) – who also happens to be an advisor to the illustrious US President – on its board of directors.
I fully understand that running an operation like Facebook is a highly complicated and delicate task. Maybe even impossible.
But the real answer must be competition. Not that many years ago Facebook didn’t even exist. And in an unknown number of years ahead there will be something else – or, I hope, a multitude of alternatives. That is hopeful. But it doesn’t exclude Facebook from scrutiny right now, right here.
We simply do not want the Skynet experience.
• The Zuckerberg manifesto »
• Ars Technica (Op-ed): Mark Zuckerberg’s manifesto is a political trainwreck »
In the demo, Palantir engineers showed how their software could be used to identify Wikipedia users who belonged to a fictional radical religious sect and graph their social relationships. In Palantir’s pitch, its approach to the VAST Challenge involved using software to enable “many analysts working together [to] truly leverage their collective mind.” The fake scenario’s target, a cartoonishly sinister religious sect called “the Paraiso Movement,” was suspected of a terrorist bombing, but the unmentioned and obvious subtext of the experiment was the fact that such techniques could be applied to de-anonymize and track members of any political or ideological group.
The Intercept describes the (partly CIA-financed) Palantir mass surveillance analysis software.
As if the above is not chilling enough, consider that Palantir owner Peter Thiel has become an advisor to President Trump and is on the board of directors at Facebook.