According to the Code, the four companies do not need to check if the content they delete is illegal or not.
Italy opens the door to arbitrary net censorship
This specific measure will have two consequences: unilateral and arbitrary censorship of content, and enormous power for – and pressure on – platforms to act without any form of neutral examination. The consequences are unpredictable.
EDRi: Censorship in Italy: Child protection is the excuse again »
Thoughtless and dangerous EU approach to free speech online
There is a lot of ambiguity when it comes to the EU cooperation with Facebook, Twitter, Youtube/Google and Microsoft to censor the Internet – the Joint Referral Platform.
On the one hand, it has been marketed as a tool to stop »radicalization« that could lead young people to religiously motivated violence, e.g. terrorism or joining the Islamic State in the Middle East.
On the other hand, in documents and speeches the EU is totally focused on this project to stem »illegal online hate speech«, e.g. when it comes to racism and Islamophobia.
Adding to the confusion is the question of what is deemed »illegal«. Incitement to violence clearly and reasonably falls within this definition. But when it comes to the broader notion of hate speech, laws vary between EU member states.
Recent “hate speech” investigations in European countries have been spawned by homily remarks by a Spanish Cardinal who opposed “radical feminism,” a hyperbolic hashtag tweeted by a U.K. diversity coordinator, a chant for fewer Moroccan immigrants to enter the Netherlands, comments from a reality TV star implying Scottish people have Ebola, a man who put a sign in his home window saying “Islam out of Britain,” French activists calling for boycotts of Israeli products, an anti-Semitic tweet sent to a British politician, a Facebook post referring to refugees to Germany as “scum,” and various other sorts of so-called “verbal radicalism” on social media.
A practical dilemma is that »hate« is something very subjective. Who is to define what is legitimate criticism and what is hate?
For instance, religion often has very real implications for how people are supposed to live their lives, how society should be organised and what kind of laws we should have. Clearly, you should be allowed to debate this as freely as you debate politics. Yet the tendency is that the scope of what may be said about religion is becoming ever narrower than what may be said about politics.
Then you have the problem that some laws against hate speech award some groups of people different rights than others.
When something becomes illegal to say about one person but not about another, you are treating people differently. That is a huge democratic problem and not how things are done under the rule of law. History teaches us where that can lead.
Finally, there is the general problem that this is all about censorship, about limiting free speech. You either have free speech or you don’t. If you stop people from expressing their opinions, by definition you do not have free speech. It’s as simple as that.
/ HAX
Links:
• Euro Logic: We Must Kill Free Speech to Promote Free Speech »
• United Against Hate Speech on the Web: Where do we stand? – Speech by Commissioner Jourová at Conference with German Justice Minister Maas »
• Facebook, Twitter, Google, and Microsoft Agree to Hate-Speech Code of Conduct »
• European Commission and IT Companies announce Code of Conduct on illegal online hate speech »
• EU code of conduct on countering illegal hate speech online (PDF) »
Don’t stay silent when the EU takes our civil rights away
The EU has formed an alliance with Facebook, Twitter, Youtube and Microsoft to block Internet content that aims to radicalize people – and hate speech. It is called the Joint Referral Platform.
This is, by definition, about limiting free speech. As such, it taps into core democratic issues.
The plan is to have the social networks and platforms carry out this censorship, referring to their user terms and conditions – which more or less allow them to censor or ban anyone. They don’t have to explain their actions. There is no possibility of appeal or redress.
Naturally, this is something that civil rights organisations and Internet activists must look into, analyse and keep a close eye on. Here is a clear opportunity for the political system to restrict free speech without getting its own hands dirty – without having to deal with legislation or the judicial system.
But when European Digital Rights, EDRi, asked for information – the European Commission first stalled their request and then refused to share information.
The reason presented by the Commission is notable: openness, it says, could undermine a highly sensitive on-going process. No shit, Sherlock.
The entire point is that this is highly sensitive. It’s about a public-private partnership to limit free speech. That is why transparency is of immense importance.
To make things even worse, the Commission seems to be unwilling to provide information about the legal basis for the Joint Referral Platform.
This is not how to conduct things in a democratic society.
Sadly, this is typical for how the EU apparatus works. Democratic principles and core values are brushed aside. Rule of law is disregarded. Human and civil rights are ignored.
And they usually get away with it.
This time, it’s about free speech online. Regardless of what people think of limiting what can be said on the Internet – everyone ought to agree that limitations of fundamental rights must be handled with extreme care and in an open, democratic process.
We must try to get the European Parliament to look into this. MEPs are democratically elected – and are, as such, at least somewhat uncomfortable with ignoring strong and loud public opinion.
This might also be a case for the European Court of Justice as well as the European Court of Human Rights.
You simply cannot stay silent when they take our civil rights away.
/ HAX
EDRi: Joint Referral Platform: no proof of diligent approach to terrorism »
EDRi vs. the EU on internet censorship
First, since the Joint Referral Platform has not been “launched”, the Commission argued it did not possess some of the information we asked for. It recognised, however, that it holds relevant documents. Yet, the Commission did not publish any documents because this can undermine public security, commercial interests of the internet industry involved, “jeopardise the protection of integrity of their managers” and undermine a “highly sensitive on-going process”. We would tend to agree that privatising criminal enforcement and putting it in the hands of – generally foreign – internet companies would undermine public security, although this may not be what the Commission meant.
Second, when we asked the Commission about the goals and (legal) principles under which this Joint Referral Platform will be launched, the Commission omitted any information about the legal basis for this. It solely restated the wording of the Communication of 20 April 2016.
EDRi: Joint Referral Platform: no proof of diligent approach to terrorism »
EDRi: Three steps to end freedom of expression
It is quite clear that removal of material online is a restriction on fundamental rights. It is quite clear that the safeguards in the Charter of Fundamental Rights of the EU are being willfully ignored:
EU Charter: Article 52.1:
Any limitation on the exercise of the rights and freedoms recognised by this Charter must be provided for by law and respect the essence of those rights and freedoms. Subject to the principle of proportionality, limitations may be made only if they are necessary and genuinely meet objectives of general interest recognised by the Union or the need to protect the rights and freedoms of others.
Government using private sector censorship for political objectives
Censorship is censorship. If you block someone from speaking freely or delete people’s content from the Internet you do censor them.
But there are different sorts of censorship.
One is when the government silences opposition, controversial voices or whatever. That is, in general terms, a violation of freedom of speech and our civil rights. That should not be accepted in a democratic society.
Another form of censorship is when Twitter censors Milo Yiannopoulos, when Google censors artist Dennis Cooper or when Facebook is accused of downgrading news depending on political affiliation.
These are private companies, and they choose to whom they provide their services. This is clearly stated in these companies’ voluminous terms and conditions.
So, OK – social media giants can censor people (and ideas). But should they?
The fact that Google, Youtube, Facebook and Twitter can censor people in a legally »correct« way should in no way protect them from being criticized for doing so.
And they should be criticized! Especially as their dominance on the social media scene is almost total. Their actions have political consequences. And they might very well have a political agenda.
(As a libertarian I run into this issue a lot. Just because I dislike something, I do not have the desire or right to outlaw it. But still, as a consumer, user or concerned citizen I am free to criticize e.g. censorship – and to loudly point out its risks and problems.)
But recently the lines are getting blurred. As I have pointed out in previous blog posts, governments (most recently the EU) are teaming up with major social media players to use the latter’s legal framework to silence voices that politicians dislike. Thus circumventing the legal system and the rule of law – and moving government censorship out of democratic control.
This is a serious, mounting problem.
/ HAX
Will the banning of @nero mark the »Peak Twitter« moment?
Twitter’s banning of Milo Yiannopoulos is a story with interesting dimensions.
Yiannopoulos is very entertaining. He’s got some points. And he often provokes some interesting reactions.
Yiannopoulos is also a loudmouth and a troll. He doesn’t really give a shit. And sometimes his opinions are rather disturbing.
The banning might very well have marked a »Peak Twitter« moment.
The party is over. I think this might cause immense damage to Twitter’s image and trademark. Twitter just isn’t as exciting anymore.
One interesting point of view is that this is not about free speech. Twitter is a private company. We have all agreed to their terms & conditions. Twitter can do whatever they want.
But this cannot, and should not, shield Twitter from criticism. As a Twitter user, I am very disgruntled over the banning of @nero.
And this might actually be about free speech after all. Didn’t the EU just agree with Facebook, Twitter and Youtube to remove »radicalising« and »hateful« content? And isn’t that just a way to circumvent the rule of law when it comes to freedom of speech?
It’s just like when US authorities couldn’t find any legal way to stop Wikileaks. So they got PayPal, MasterCard, and the banks to cut off the funding. Extrajudicial proceedings, indeed.
Then, again, this affair might stimulate and accelerate the development of new social media platforms that are distributed, decentralised and impossible to censor.
Or the opposite – people moving to closed forums for the like-minded.
But Twitter as a »safe space«? That sounds boring.
/ HAX
• Twitter’s Stalinist Unpersoning of Gay Provocateur Milo Yiannopoulos »
• I’m With The Banned »
ECJ Advocate General on data retention: Strict conditions must apply
Data retention (collection of data about everybody’s phone calls, text messages, e-mails, internet connections and mobile positions) may only be used to combat serious crimes – and only if there are no other options (such as using surveillance only against people who are actually suspected of criminal activities).
This is the essence of the recommendation of the European Court of Justice’s Advocate General in ongoing cases about data retention.
From the press release (PDF):
The Advocate General is of the opinion that a general obligation to retain data may be compatible with EU law. The exercise by Member States of the possibility of imposing such an obligation is, however, subject to satisfying strict requirements. It is for the national courts to determine, in the light of all the relevant characteristics of the national regimes, whether those requirements are satisfied.
First, the general obligation to retain data and the accompanying guarantees must be laid down by legislative or regulatory measures possessing the characteristics of accessibility, foreseeability and adequate protection against arbitrary interference.
Secondly, the obligation must respect the essence of the right to respect for private life and the right to the protection of personal data laid down by the Charter.
Thirdly, the Advocate General notes that EU law requires that any interference with the fundamental rights should be in the pursuit of an objective in the general interest. He considers that solely the fight against serious crime is an objective in the general interest that is capable of justifying a general obligation to retain data, whereas combating ordinary offences and the smooth conduct of proceedings other than criminal proceedings are not.
Fourthly, the general obligation to retain data must be strictly necessary to the fight against serious crime, which means that no other measure or combination of measures could be as effective while at the same time interfering to a lesser extent with fundamental rights.
Furthermore, the Advocate General points out that that obligation must respect the conditions set out in the judgment in Digital Rights Ireland (5) as regards access to the data, the period of retention and the protection and security of the data, in order to limit the interference with the fundamental rights to what is strictly necessary.
Finally, the general obligation to retain data must be proportionate, within a democratic society, to the objective of the fight against serious crime, which means that the serious risks engendered by that obligation within a democratic society must not be disproportionate to the advantages it offers in the fight against serious crime.
Here it is important to remember that the ECJ invalidated the EU Data Retention Directive – the document on which all member states’ data retention laws were built – in the spring of 2014, because it violated fundamental human rights, such as the right to privacy. So it is hardly possible to stick to any direct adaptation of the fallen directive.
One thing that seems to be clear is that data retention cannot be used to investigate minor crimes (e.g. illegal file sharing). And it cannot be used for non-criminal proceedings (e.g. by local councils and tax authorities). The infringement of privacy is massive with data retention. It must be in proportion to the seriousness of the suspected crime.
Point four (“which means that no other measure or combination of measures could be as effective while at the same time interfering to a lesser extent with fundamental rights”) is also interesting. Of course, there are other measures – like only using surveillance against people suspected of criminal activities, instead of the entire population.
Later this fall the ECJ will give its final verdict. It usually follows the Advocate General’s recommendations.
Links:
• ECJ press release (PDF) »
• The Advocate General’s recommendation, full text »
• EDRi – European Court confirms: Strict safeguards essential for data retention »
• Falkvinge – European Supreme Court says “Maybe” to mass surveillance of innocents »
And now… automated web censorship
Automated systems that identify child abuse material on the Internet (and flag it for removal) are now going to be used to combat “extremist” and “hateful” content on social media.
“However, the definition of “extremist content” is everything but clear; CEP’s algorithm does not (and logically cannot) contain this definition either. Even if it were to use a database of previously identified material, that still would create problems for legitimate quotation, research and illustration purposes, as well as problems regarding varying laws from one jurisdiction to another.”
“The Joint Referral Platform has the potential to automate Europol’s not-formal-censorship activities by an automatic detection of re-upload. However, it remains unclear whether any investigative measures will be taken apart from the referral – particularly as Europol’s activities, bizarrely, do not deal with illegal material. There is obviously no redress available for incorrectly identified and deleted content, as it is not the law but broad and unpredictable terms of service that are being used.”
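The »automatic detection of re-upload« mentioned above is, at its core, a lookup against a database of hashes of previously flagged files. The sketch below is purely illustrative (the blocklist and content are invented; the actual CEP/Europol tooling is not public), but it shows both the mechanism and its inherent weakness: an exact hash matches only identical copies, which is why deployed systems such as Microsoft’s PhotoDNA use fuzzier perceptual hashes – and why those can also sweep up legitimate quotation or research material.

```python
import hashlib

# Hypothetical blocklist of previously flagged uploads, keyed by SHA-256.
# Real systems use perceptual hashes so re-encoded or cropped copies match too.
BLOCKLIST = {
    hashlib.sha256(b"previously flagged content").hexdigest(),
}

def is_reupload(data: bytes) -> bool:
    """Flag an upload if its hash appears in the blocklist."""
    return hashlib.sha256(data).hexdigest() in BLOCKLIST

# An exact copy is caught...
assert is_reupload(b"previously flagged content")
# ...but changing a single byte evades an exact-hash filter entirely,
# which is what pushes such systems toward broader, error-prone matching.
assert not is_reupload(b"previously flagged content!")
```

Note that nothing in this pipeline asks whether the content is actually illegal; it only asks whether it has been flagged before – which is exactly the redress problem EDRi describes.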
What could possibly go wrong..?