How well does regulation work in the case of Facebook?


Facebook CEO Mark Zuckerberg has spoken out in favor of state regulation of the internet before the US Senate. But what could such measures look like? We take a look at the laws already in force and at possible new rules.

While some politicians and opinion leaders are still arguing about whether stricter rules should be imposed on companies like Facebook, for CEO Mark Zuckerberg the question has long been settled. "The internet is becoming more and more important in people's lives, and I think we have to have a conversation about which regulatory measures are the right ones, not whether there should be any at all," Zuckerberg said at a hearing before the US Senate, where he was publicly questioned about the Cambridge Analytica data scandal.

However, Zuckerberg did not make any specific proposals for regulating Facebook. He and his company do at least support the "Honest Ads Act", a legislative initiative by three US senators. The proposal, which is also backed by Twitter, provides that it must be verified and clearly disclosed who is behind a political advertisement. Similar rules already exist in the USA for television and radio. Ultimately, it merely closes one obvious loophole in the law.

Facebook boss Mark Zuckerberg during the hearing before the US Senate. (Photo: dpa)

But Facebook and other internet platforms also have social effects that are unlike anything the old media produced. Newspapers have typically not printed letters to the editor calling for the murder of politicians, and commercial television has never deliberately spread false reports in order to generate higher advertising revenue. Both, however, are common on Facebook, where disseminated content is, as a matter of principle, not checked in advance. This in turn has serious consequences for democracy: how is political opinion to be formed when facts are drowned out by fake news? How can people debate when hatred and hostility take the place of arguments? And the accumulation of personal data, with all its potential for abuse, is not a phenomenon for which regulations from the pre-internet era offer much guidance.

Data protection, hate speech and fake news: the current state of affairs

With the General Data Protection Regulation (GDPR), strict data protection rules already exist within the EU. Under the law, companies such as Facebook must not only disclose which user data they collect and how they use it, but also explain this to users as simply and comprehensibly as possible. The regulation has also significantly raised the maximum fines for unlawfully passing on personal data. Whether the GDPR is an adequate tool against data misuse will only become clear in the coming years.

In the fight against the spread of hate speech on Facebook and other platforms, the former Federal Minister of Justice and current Foreign Minister Heiko Maas (SPD) played a particularly prominent role. On his initiative, the Network Enforcement Act (NetzDG) was enacted on September 1, 2017. In essence, the law requires platform operators themselves to delete hate speech. If they fail to do so, they face severe fines.

Against hatred on the net: Heiko Maas was criticized from many sides for the NetzDG. (Photo: dpa)

Even before the NetzDG came into force, it drew criticism, among others from the EU Commission, the UN Special Rapporteur on freedom of expression and Reporters Without Borders. The main objection was that the NetzDG defines far too vaguely what actually constitutes hate speech. In addition, the law provides no way to object to a deletion.

The possible penalties have not amounted to much so far either: six months after the NetzDG came into force, the federal government has still not agreed on a catalog of fines. As Netzpolitik.org reports, this is mainly due to successful lobbying by Facebook and Google. In short, the German state has placed the deletion of content in the hands of private companies, given citizens no opportunity to object and, for the time being, has no real means of enforcing compliance with the law.

Despite the NetzDG's problems, it has now found imitators in other countries: the French Prime Minister Édouard Philippe, for example, has announced that France will also introduce tougher rules against hate speech on the internet. The French government does not want to stop at a national solution, however, but is also pushing for a corresponding law at the European level. It is to be hoped that this effort irons out the flaws of the NetzDG and produces an EU-wide measure that adequately addresses the many facets of the problem.

Disclosing algorithms and other more radical proposals for regulating Facebook

The problem with the existing laws, according to some experts, is that they only work reactively. In other words, countermeasures are only formulated once a technology has demonstrably harmed society. Business ethics professor Thomas Beschorner and economics professor Martin Kolmar have outlined a possible solution to this problem in a guest article for Die Zeit.

The two professors envision a model based on the pharmaceutical industry. Instead of reacting to negative effects after the fact, tech companies would have to prove before introducing a technology that it is both useful and harmless to the common good. The proposal is certainly worth debating, but it should not be forgotten that such approval mechanisms also cost a great deal of money when new drugs are brought to market.

Applied to internet companies, this reversal of the burden of proof could ultimately entrench the position of corporations such as Facebook or Google even further. While the large platforms can easily bear the costs, startups would find it harder to raise capital because of the higher financial risk. Beyond that, this approach would in principle also mean that technical innovations take significantly longer to reach society. The question is therefore whether we would be prepared to accept such a trade-off at all.

Should companies like Facebook or Google disclose their algorithms? (Photo: Nick Fox / Shutterstock)

Former Austrian Chancellor Christian Kern (SPÖ) advocated a different approach in 2017. In an interview, he argued that companies like Facebook or Google should have to disclose their algorithms; only in this way, he said, could democratic control be ensured. Critics and the potentially affected companies reject the demand on the grounds that precise knowledge of the algorithms would also make manipulation easier.

The same argument could be leveled against open source software. Despite its openness, however, it is no less secure than proprietary software: when more people can inspect the code, security holes are not only discovered faster but also fixed faster. Ultimately, disclosure does not even have to go that far. Instead, a government agency could check on behalf of the public whether the algorithms actually work as the corporations claim. Heiko Maas, for example, proposed setting up such a body in 2017. The idea is not completely far-fetched: Microsoft, for instance, already grants governments closely supervised access to the Windows source code.

There are still no clear approaches to regulating digital platforms such as Facebook

One thing is clear: there will be no getting around regulation in the future. When the failures of multibillion-dollar corporations affect the lives of millions of people, the tech industry cannot avoid the introduction of control mechanisms. What remains unclear is what that regulation should look like, because truly convincing approaches are still lacking.

It is therefore to be hoped that the current scandal surrounding Cambridge Analytica and the data of Facebook users ignites a broad public debate in which the proposals on the table, the existing legal framework and new ideas are discussed. Otherwise, what Sascha Lobo observed a year ago may still hold true in two or three years: "We have to regulate digital platforms, but so far we have no idea how exactly that works."

Background: The most important answers to the Facebook data scandal.
