On June 16, 2022, 34 stakeholders operating in very different sectors presented a new European Code of Practice on Disinformation. They include large social platforms and advertising companies, but also fact-checkers and civil-rights defenders in the digital sector (see the list of signatories). This is in some way a first, even anticipated, effect of the forthcoming entry into force of the Digital Services Act ("DSA"), the new European Internet legislation that, inter alia, makes online businesses more accountable for disinformation. Indeed, the new Code represents a real change of pace compared to its 2018 predecessor, moving from a logic of self-regulation to genuine co-regulation.
This paradigm shift appears fundamental to effectively counteracting online disinformation, a phenomenon that began to look alarming in Europe with the Cambridge Analytica case and its impact on both the US presidential election and the Brexit referendum, and whose distorting effects have since been amplified by the pandemic crisis and then by the Russian invasion of Ukraine.
The predecessor: the 2018 Disinformation Code
To better understand this evolution from self-regulation to co-regulation, it is worth recalling the process that preceded the newly approved Code. The first Code of Practice on Disinformation, of 2018, was an important innovation: a model of voluntary commitment under which platforms agreed to adopt a whole series of measures to contain the phenomenon. Nevertheless, it proved disappointing, given the vagueness of the obligations the platforms assumed and the almost complete absence of criteria for verifying and measuring their commitments. It was, in short, an exercise in mere self-regulation that did not produce the desired effects.
Then came the DSA (agreed in April 2022 and soon to come into force) which, in modernizing the rules of the Internet in Europe (partly by rewriting the e-Commerce Directive 2000/31), makes codes of conduct, including the one against disinformation, a privileged instrument of co-regulation: in practice, companies' voluntary measures combine, under the oversight of the competent authorities, to form the rules applicable in the sector.
It should not be forgotten that the fight against disinformation is one of the main pillars of the DSA, which imposes obligations on very large online platforms and search engines to prevent abuse of their systems by taking risk-based action. Platforms must mitigate risks such as disinformation, election manipulation, cyber violence against women, and harm to minors online. These measures must be carefully balanced against restrictions on freedom of expression, and their risk management is subject to independent audits.
The new Code of practice on disinformation
As regards disinformation, it was therefore decided to adopt a new code that could fill the gaps of its predecessor and serve as a far more effective tool against a phenomenon that had, in the meantime, become extremely serious.
Thus, the new Code contains 44 commitments and 128 measures (a major increase over the 21 foreseen in the 2018 version). This set of remedies is meant to complement the DSA in the area of content moderation: very large online platforms subject to the DSA can sign and comply with the Code in order to meet the systemic-risk mitigation obligations imposed by the DSA's content moderation rules. However, while the DSA and its penalties for non-compliance apply only to the largest online platforms, the Code is open to a wider range of organisations, including smaller tech companies, NGOs and research organisations.
The most important aspects of the new Code are the following:
1. Raising the level of safety of online spaces (the new "digital agora", in the words of Oreste Pollicino) hosted by the giants of the web, against disinformation techniques, procedures and strategies;
2. Strengthening the position of users through new tools that make it easier both to identify false information and to mitigate the risk of polluting the debate;
3. Guaranteeing researchers access, in accordance with the GDPR, to the data necessary to conduct research on disinformation processes;
4. Addressing the monetisation of disinformation, political ads and manipulative techniques, by providing commitments, measures and measurement criteria for an effective "demonetisation" of disinformation propagators, that is, doing everything possible to prevent professional purveyors of fake news from drawing an economic advantage, especially from advertising revenues;
5. Guaranteeing a constant dialogue between platforms and factcheckers (the latter being entitled to fair remuneration);
6. Identifying the type of online advertising that has not only commercial purposes but also, in a broad sense, political ones;
7. Providing safe-design and algorithmic-accountability measures, and reporting on them broken down by language or Member State, a fundamental point since large online platforms have often been criticised for focusing their content moderation efforts only on the main countries and languages.
In other words, the new Code marks a strong step forward compared to previous experiences, also in light of the co-regulatory mechanism described above, with an innovative potential that could finally provide a European response adequate to the phenomenon of disinformation, provided that the listed commitments are made effective; otherwise the DSA could give rise to new regulatory interventions or even sanctions. In fact, under the DSA the Commission has the same supervisory powers it holds under current European antitrust rules, including investigatory powers and the ability to impose fines of up to 6% of global revenue.