UPDATE 5 February 2019: the German and French delegations have reached a tentative agreement, reported in a new compromise proposal by the Romanian presidency, whereby the exemption in favour of SMEs is eliminated while a softer regime is granted to micro-enterprises (with a turnover of less than 10 million euro). The latest draft of art. 13 is quite messy and will make life very difficult for video-sharing platforms, unless they install overblocking filters. Censorship seems to be the most convenient and practical solution for operators wishing to avoid legal issues with rightsholders.
The European copyright reform is currently delayed because of disagreements among Member States about some crucial aspects of the new legislative text. The Romanian presidency failed to propose an acceptable compromise, and its mandate request, which would have allowed the Council to rapidly close the negotiations with Parliament and Commission, was rejected on January 18, 2019.
The apparent stalemate between France and Germany
Apparently, the main stalemate concerns a potential exemption clause in art. 13 (the upload filters provision) for small enterprises: Germany defends the exemption, while France opposes it. This is why Germany and France are reported to be talking bilaterally, in order to find an agreement and speed up the end of the legislative process. The timing is crucial: the stakeholders supporting the reform (commercial broadcasters as well as traditional media and publishers) fear that the new European Parliament (to be elected in May 2019) may be less favourable to their interests, and therefore hope that everything will be finalised within the current legislature. To be certain of completing the legislative process without interference, the plenary session of the European Parliament should approve the copyright reform by March 14, 2019.
The disagreement about the “small enterprise exemption for upload filters” seems to be a minimal question with respect to the scope of application of the entire copyright reform. It is therefore more reasonable to believe that France and Germany are negotiating, with the support of the Commission, in order to settle all pending details and reach a final agreement. The rumours about the matter escalating to Macron and Merkel are likely instrumental to this effect: to present the others with a plan which is final and no longer negotiable. This byzantine behaviour may be disliked by other Member States; however, this is how things often happen in Brussels. The weight of a joint French-German position may be great enough to overcome any blocking minority in the Council.
What is the impact for publishers?
If the above is true, and provided that legislators act rapidly, the copyright reform may be approved before the “Copyright Barbarians” conquer the next Parliament. What could be the possible impact for the publishing industry?
Despite official declarations, most stakeholders (including publishers) are skeptical about the concrete outcome of this reform, namely whether or not it can provide an (at least monetary) solution to the crisis of traditional media. As things stand, it seems more a matter of principle and a “contentino”, as Italians say: a token concession.
In this respect, one should consider whether the ancillary right provided by art. 11 of the new Directive could be waived by publishers (as happened in Germany with the 2013 local legislation) or not (as happened, instead, in 2014 in Spain). The European proposal has always been ambiguous on this point, although the current Council text seems to favour the German solution: “Nothing in this Directive should be interpreted as preventing holders of exclusive rights under Union copyright law from authorising the use of their works or other subject-matter for free, including through free licences, when they consider it appropriate” (recital 43b). To undermine this result, the Parliament proposed a different wording: “the listing in a search engine should not be considered as fair and proportionate remuneration”.
The German scenario (i.e.: publishers may waive the ancillary copyright)
In a “German scenario”, publishers may waive the compensation from Google as well as from other news aggregators (and, in general, from whoever should pay the fee for displaying an article or a simple snippet, entirely or via a hyperlink). It is well known that small and innovative online publishers are against this reform and would therefore be more than happy to waive the right. But what would happen with the big publishers which have been supporting this reform? When the German copyright law was enacted, most publishers accepted the waiver with Google, but then some of the big ones filed a complaint with the antitrust authority for abuse of dominant position. Would this happen also with the European legislation, and with what chances? Google is super-dominant in many markets, but Google News is not in the market for news aggregation, since this market is evolving and other platforms and communication tools, including Facebook and WhatsApp, play an important role. To better attack Google and find it dominant (and potentially abusive), one should address the entire online advertising market (where Facebook also operates, however). This move, however, would weaken the competition case for the ancillary copyright, which is much more specific than online advertising. A competition assessment is always based on valid economics, not on lobbying declarations, and therefore there is a risk that a competition authority finds that a non-dominant Google News has a legitimate interest in offering the news aggregation service for free, because it gains from it much less than the publishers do. This is confirmed by a consistent perception that publishers need news aggregation and digital sharing irrespective of any potential remuneration via an ancillary copyright.
The Spanish scenario (i.e.: publishers cannot waive)
In the Spanish scenario, publishers would not be entitled to waive their rights vis-à-vis Google or anyone else. This is a complex scenario, because a mandatory licensing system would be complicated to set up and, in any case, the big fish, Google in primis, would still have the choice between the nuclear option (closing the news aggregation business) and granting the minimum. In fact, it is doubtful whether the economics of the online news market can compensate publishers for what they have lost over the years. As mentioned by Quintarelli, Facebook's margin per user in Europe is 1.3 euro/month (with an ARPU of 2.4 euro), while Twitter's is only 10 eurocents/month. Margins may be higher for Google but, as mentioned above, the case is about Google News, not Google's entire business. In other words, there is no decent pie to share among the hundreds of traditional European publishers.
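To put these figures in perspective, here is a rough back-of-envelope sketch of the “pie” a platform's margins could yield for publishers. Only the per-user margins come from the figures cited above; the user count, the share of margin attributable to news and the number of publishers are purely illustrative assumptions, not data from the text.

```python
# Back-of-envelope sketch of the ancillary-copyright "pie".
# Per-user margins are the figures cited from Quintarelli; everything
# else (user base, news share, publisher count) is a made-up assumption.

EU_USERS = 300e6          # assumed European user base of a platform
FACEBOOK_MARGIN = 1.30    # euro/user/month (cited figure)
TWITTER_MARGIN = 0.10     # euro/user/month (cited figure)
NEWS_SHARE = 0.05         # assumed fraction of margin attributable to news
PUBLISHERS = 500          # assumed number of European publishers sharing it

def yearly_news_pie(margin_per_user: float) -> float:
    """Yearly margin attributable to news, in euro."""
    return EU_USERS * margin_per_user * 12 * NEWS_SHARE

fb_pie = yearly_news_pie(FACEBOOK_MARGIN)   # 234 million euro/year
tw_pie = yearly_news_pie(TWITTER_MARGIN)    # 18 million euro/year

print(f"Facebook news pie: {fb_pie / 1e6:.0f} M euro/year, "
      f"{fb_pie / PUBLISHERS:,.0f} euro per publisher")
print(f"Twitter news pie:  {tw_pie / 1e6:.0f} M euro/year, "
      f"{tw_pie / PUBLISHERS:,.0f} euro per publisher")
```

Even under these generous assumptions, the amounts per publisher are modest compared with the revenues traditional publishers have lost, which is the point made above.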
The Spanish scenario may be further complicated by third parties setting the fee on the basis of various criteria, with subsequent appeals by opposing operators. The practical enforcement of the compensation will be delegated to the national authorities, and the directive does not say much about it. The resulting legal uncertainty may well undermine any potential gains.
The impact for small publishers and users
These uncertain scenarios will be most problematic for small and innovative online publishers providing quality and local content. For them the Spanish scenario would be chaotic, since they do not have sufficient resources to set up a decent licensing system covering the entire market. Such operators will certainly be in favour of the German scenario (i.e.: the ancillary copyright may be waived). By contrast, poorly informative or low-quality publishers may be interested in the Spanish system, because their business is based on sharing any content rapidly, from fake news to kittens.
The uncertainty will be detrimental to the Internet in general, in particular because of the complex regulation foreseen for hyperlinks and snippets, with various conditions, carve-outs and exemptions attached. To give an example, the text agreed so far by the Council (recital 34) states that: “The rights granted to the publishers of press publications should not ….. extend to the mere facts reported in the press publications”. The need to codify this clear, self-evident principle, which is fundamental for freedom of speech, is telling evidence of the potentially disturbing consequences deriving from the regulation of hyperlinks and the statements attached to them (the so-called snippets).
The current drafts of art. 11 (the so-called link tax) leave the status of hyperlinks uncertain. The text of the Romanian Presidency insists on a quantitative criterion to exclude snippets, while other delegations would prefer a qualitative criterion. A quantitative criterion may look simpler in theory, but it will end up producing arbitrary results: a hyperlink will be subject to a fee depending on the number of words attached to it. In addition, there is no clear carve-out for individual users (bloggers and so on) and micro-enterprises. An appropriate exemption is foreseen by the Parliament:
“the rights ……. shall not prevent legitimate private and non-commercial use of press publications by individual users”
however, its practical implementation is doubtful due to the uncertain border between personal and commercial use on the Internet, because of advertising, the terms and conditions of blog services, and professional interests mixed with private uses.
What next? The Germans and French should soon let us know what they have decided with the support of the Commission. We understand that the next COREPER I meeting is scheduled for February 8, while the Trilogue could take place on February 11. The last plenary session of the Parliament is scheduled for April 2019 but, for technical reasons, it would be advisable for the copyright legislator to close the file in March 2019.
Mobile consolidation within domestic markets has always been on the European telcos’ wish list. However, in the last 5 years this ambition clashed hard with the approach of the European Commission, in the person of Margrethe Vestager, head of the Competition directorate (“DG COMP”), who treated the subject with severity during her mandate. Vestager opened the doors to pan-European mobile mergers, but proved very strict towards domestic mergers (unlike her predecessor Almunia, who cleared three domestic mergers, in Germany, Austria and Ireland). To this end, Vestager made national mobile mergers (unlike transnational ones) subject to a remedy of divestment of spectrum and other mobile resources in favour of a new entrant mobile operator, so as to maintain at least 4 network operators in the national market and avoid a reduction to only 3. This “rule” was consistently applied in important merger operations in Denmark, the United Kingdom and Italy, despite mobile operators claiming (in vain) that consolidation was necessary to support investments, especially for 5G. DG COMP offices never bought this argument, and they may have a point, considering the high sums paid by mobile operators in 5G auctions irrespective of consolidation. However, this approach of the Commission may also be criticized because it supports competition merely on pricing arguments, while the market needs to move towards more innovation and diversification.
But a few weeks ago, on 27 November 2018, the European Commission authorized, somewhat surprisingly, a domestic mobile merger in the Netherlands: the purchase of the small Tele2 (5% market share) by the third Dutch operator T-Mobile (20% market share), resulting in the reduction of mobile network operators in the Netherlands from 4 to 3. One wonders whether this decision constitutes a precedent marking a new approach in the merger policy of the European Commission, or whether it is an accident. At the moment this is not known, because European Commission officials are reported to exclude a reversal of the merger practice developed so far, regarding the Dutch case as the result of exceptional circumstances. This approach, however, is misleading, because the Commission did not provide any convincing argument that Dutch market conditions were so different from other cases dealt with, much more severely, by Vestager’s offices. Therefore, this new case, rather than providing guidelines for the future, only provokes uncertainty.
The merger between Tele2 and T-Mobile in the Netherlands led to the creation of a JV with a consolidated market share of 25%, and the loss of the smallest and most competitive operator in the market (Tele2). One could believe that this scenario would strengthen competition, as the incumbents KPN and Vodafone/Ziggo would now have to face a stronger, consolidated third competitor (and no one else). However, the Commission’s competition practice developed so far (at least before this case) suggests otherwise: the presence of a small group of operators (only 3) with similar market shares (between 25 and 35%) does not clearly help competition, and may provoke the opposite effect, in particular the alignment of commercial practices – a normal behavior in small oligopolies. The traditional mantra of the Commission has been that competition is best guaranteed by a Maverick operator, that is to say a small, hungry challenger who needs to be aggressive (with low prices, but not only) to attract new customers and use its network efficiently (otherwise it would stay empty). In the Dutch case the classic Maverick operator was Tele2, which with only 5% of the market had no choice but to be very aggressive. However, thanks to the authorization of the Commission, Tele2 will now disappear, and it is not clear what incentive the new JV would have to remain so aggressive, since with a consolidated market share of 25% it may find it more convenient to align its pricing policy with the rest of the Dutch market. In previous cases, such as Italy and the UK, the Commission’s strict approach had led to the imposed entry of a Maverick operator, in both cases the French Iliad, which then actually entered Italy (but not the UK). The entry of Iliad in Italy has caused a dramatic fall in domestic prices. One may rightly ask why the entry of Iliad (or whatever low-cost operator) was imposed for Italy, and not for the Netherlands.
Why the Maverick operator rule was disregarded by DG COMP in the Netherlands is unknown. The Dutch Tele2 was financially weak and poorly developed: it only had a 4G network and for the rest depended on national roaming agreements. It was a kind of super-MVNO, like Iliad in Italy. However, being small and weak has never been an argument for allowing competition to decrease. By contrast, as previously said, Tele2 was the only operator in the Netherlands in real need to be aggressive in order to acquire new customers and market share. In fact, it was the only one to have offered an unlimited data plan. This type of competition will now disappear: the new JV has offered to maintain this aggressive offer, but this is a unilateral commitment, not a remedy, and could be withdrawn at any time.
No surprise that, at the news of the T-Mobile/Tele2 merger authorization, KPN’s shares jumped upwards, demonstrating that the disappearance of the Maverick operator in the Dutch market will weaken competition and allow prices to rise.
If the European Commission had argued and explained that the time had come for a change of direction, that is to say: “welcome domestic mobile mergers”, then everyone would have understood. There are many reasons to support the idea that domestic mobile mergers make more sense than cross-border ones. In fact, the latest European regulatory developments suggest that the mobile market is and will remain marked by national borders: spectrum policy remains firmly in the hands of national authorities; the new European code strengthens the powers of national authorities to decide national cases on the basis of local specificities; wholesale roaming caps are well above retail domestic prices, thus preventing permanent roaming. In other words, European regulation has done very little to support a business case for pan-European mobile mergers, as opposed to domestic ones, so one may wonder why this goal should be pushed by a single arm of the European Commission (DG COMP) while the other arms are rowing against it.
Instead, DG COMP has made a striking exception to the merger practice implemented so far, while at the same time denying that anything has changed. In this way, the doubt remains that the Dutch case is a political* rather than a market decision, and that there are no clear rules for the future. The Belgian authorities are planning to open the market to a fourth mobile operator, while in other countries (France, Spain) there are reflections about consolidating down to 3. What lesson should be learned from this Dutch case? Who knows.
*I would not be surprised to learn, one day, that this decision was taken at a purely political level, with some senior officials dissenting or even opposing it. A similar situation likely happened during Almunia’s term (2009-2014), when other 4-to-3 mobile mergers were authorized despite the likely contrary opinion of the offices. However, there is no official evidence of that, since any dissenting opinion of the offices cannot be reported in official minutes or drafts. We have to stay with doubts and suspicions, made more inconvenient by the fact that the beneficiary of this strange decision is the German incumbent Deutsche Telekom.
Watch out! …. for those who perform acts of piracy from their home Internet connection. Residing in a home with other roommates, all potentially capable of using the Internet, will not be enough to escape liability by claiming, as a defense, that the offender could be someone else. The Court of Justice of the European Union has ruled that there must be a way to allow a copyright holder to defend his interests in the case of violations perpetrated through shared Internet connections.
It may seem like a common-sense solution but, in this case, which originated in Bavaria, the German rules on the protection of family life did not allow further investigations within the family unit that used the offending Internet connection. A German publisher, Bastei Lübbe AG, had sued a Bavarian citizen, Mr. Strotzer, because a file containing an audiobook of the publisher had been downloaded through the latter’s Internet connection and subsequently shared on a peer-to-peer platform. Mr. Strotzer defended himself by denying having infringed the copyright of the publisher and stating, moreover, that his parents, who lived with him, had equal access to the connection, without however providing further clarification on the possible use that the parents themselves might have made of the Internet connection. The Bavarian court of first instance rejected Bastei Lübbe’s application, considering that the fundamental right to the protection of family life prevailed in this case. However, the appeal judge felt differently and asked the European court whether such a defense may be sufficient to exclude the liability of the holder of the Internet connection.
The European court simply stated that right holders must have an effective form of redress, or tools enabling the competent courts to order the disclosure of the necessary information. It is therefore not a question of weakening the fundamental right to privacy, but rather a signal sent to the German authorities to provide all the instruments necessary to ensure a balance between the various interests at stake, including the protection of intellectual property. In the present case, Mr Strotzer will therefore have to argue better about the use of his Internet connection by third parties, including his mother and father.
The case of shared Internet connections goes beyond the home and involves broader situations, such as communities of students, workers and friends, as well as public WiFi connections. The European ruling does not oblige the holder of the Internet connection to ensure the identification of each user, but rather to be more cooperative with the courts in the search for offenders. The European court evokes a possible remedy, namely strict liability of the Internet subscriber (as happens with cars), but this seems to be an extreme solution, to be used only when the national legislator does not allow operations to identify the offenders among those who have access to a shared Internet connection.
The present case may seem a little excessive given the family context from which it originates, but it must be borne in mind that the target of the judicial action was not the illicit use of a protected content (a condemnable action, but with a modest impact on the publisher), but rather the uploading of a file onto a peer-to-peer platform accessible to anyone.
NB: on September 12 the copyright provisions regarding publishers were approved, and the content of the post is more valid than ever!
Is the current European copyright reform really something good for the press and journalists? While mainstream newspapers publish appeals in support of the reform, the reality seems to be more complex and fragmented. Besides traditional publishers vehemently pushing for the approval of the new rules, the innovative and online press is against it. An important group of journalists sent a letter to the Parliament supporting the new bill, while others have started to publish opposite positions (see for instance Luca Sofri and Federico Ferrazza of Wired Italia). I have personally talked with various journalists, and some of them do not understand or support the reform while remaining silent (maybe to avoid potential retaliation by their publishers). Such discrepancies within the press sector are an evident signal that this reform is problematic, and that it may not be as good as conceived at the beginning.
It is all about art. 11 of the copyright reform proposal, which provides for a remuneration (technically: an ancillary copyright) that online platforms (mainly news aggregators) should pay to publishers for the news they report, entirely or via excerpts (the so-called snippets). No one has figured out how this payment should be collected and how much it could amount to in the end. However, it is unlikely that the most important target of publishers, namely Google, will pay a single penny to publishers. Instead, Google may adopt different ways to avoid such payment:
1. it could stop its aggregation service (Google News) in case of approval of the new rules, as already happened in Spain, thus causing substantial damage to publishers which were profiting from its traffic;
2. it could limit the aggregation of news to the mere title of the article with the hyperlink, as already happened in Germany;
3. it could negotiate the ancillary copyright down to an amount equal to zero, by bargaining with its indexation and traffic service (which until now was free). These conditions may be acceptable to some publishers but not to others. However, the latter would end up excluded from the aggregation services and may suffer a competitive disadvantage against the publishers who accepted. Probably, in the end, everyone will have to accept the conditions offered by Google, because being the sole publisher excluded from Google News would be detrimental.
Without Google paying, it is doubtful whether the reform will provide a single penny to publishers, since the remaining news aggregators, small players or start-ups, would probably close down that business or would be unable, in any case, to provide the cash flow expected from Google.
It is also doubtful whether the reform could be applied to Google as such, that is to say to the search engine. In that case, Google could simply de-index the European press, with enormous damage for the publishers.
It has been argued that Facebook could be an alternative target for publishers, but the current reform will not help with that. News on Facebook is uploaded by publishers themselves (which may also have agreements with Zuckerberg’s company) or shared by users. Thus, the ancillary copyright could not apply, although there might be some uncertainty with regard to previews of articles (are such previews part of the hyperlinks, or do they constitute a distinct act of communication to the public?).
To sum up, whatever happens in the Parliament (on September 12, when the plenary session will vote) or later (in the Trilogue procedure), publishers risk ending up with nothing, even if the reform is approved in its ideal form. Much ado about nothing.
How did we end up in this paradoxical situation? Everybody agrees that the press and journalists should be remunerated adequately, and that a solution should be found for the impact of the digital transition, which has drastically affected the traditional press business. However, the European copyright reform started in the wrong way, pushed by German publishers convinced that the solution simply consisted in a mechanism to force Google to share some of its profits. The German commissioner Oettinger endorsed the proposal.
However, this initiative has not worked out, firstly because the Google News business has been overestimated. Users access news mainly via the Google search engine and Flipboard, and then via e-mail, apps and Facebook. Google News ranks well below these, and in fact Google would prefer to close it rather than pay publishers. The rest of the news aggregation market is highly fragmented and poorly financed, so there is nobody else who could provide publishers with a substantial financial stream based on the news aggregation business.
But, more importantly, the press market is changing. The high-quality publishing industry is progressively migrating toward paying models, while the free-to-view press, still remunerated with advertising, is left with generic or less qualified news services. This means that the current copyright reform is based on a model – Internet traffic and advertising – which is disappearing, at least in economic terms. In other words, this copyright reform is old-fashioned even before it exists in legal terms.
The new copyright rules would, however, produce some negative effects once in force: the news aggregation business model risks being killed since, in the end, only Google was able to extract some money from it, thanks to its user-profiling activity. Such a source of revenue would be out of reach for other operators and start-ups, which do not own the same critical mass of data as Google. Therefore, they will close or will never start.
The good news is that other negative effects should be avoided (at least we hope): the latest legislative drafts seem to exclude the application of the ancillary right to hyperlinks and to users and individuals (other than digital companies). The bad news is that small companies and SMEs are within the scope of the link tax (while an exemption is foreseen for video-sharing and filters under art. 13).
In the end, the most dramatic consequence of this copyright reform is that politics and the media believe that the legislative initiative will solve the economic problems of the press sector, while it will not. Unfortunately, it will take some years to understand the mistake, given the time required by the legislative process, the implementation period and the assessment of the effects. Some 5 to 6 years will be lost, which could have been devoted, instead, to more effective and reasoned interventions.
Today’s decision of the European Commission strikes at the heart of Google’s dominance, namely its dominance in Internet search, but not its business model and its ability to provide popular and innovative services. The Commission has in fact sanctioned the (alleged) behaviors through which Google – starting from the free installation of Android – has consolidated its dominant position as a general Internet search engine over the years. From this dominant position (90% in most European markets) derive Google’s wealth and power: thanks to the ability to analyze the traffic of (almost) all users who search the Internet, Google has accumulated over the years a huge quantity of information. These big data can then be monetized elsewhere, particularly in online advertising. A perfectly licit activity, no doubt: but the problem, according to Commissioner Vestager, lies in the conduct through which Google is suspected of having eliminated potential competition from other search engines, imposing in various ways on the manufacturers of Android smartphones the pre-installation of Google Search.
The case in question is therefore crucial for Google’s global commercial strategy, much more than the Google Shopping case, which now appears of secondary importance because – unlike Google Search – the online comparison market is ancillary, not central, to the Californian search engine. For this reason, what really matters in the Brussels decision is not the fine of €4.3 billion, a monster sum that could nevertheless be booked as a one-off without too much pain by a company of this size (31 billion turnover in the first quarter of 2018). What really worries Google is Vestager’s order to end, within 90 days, the allegedly abusive behaviors: in other words, the producers of Android smartphones should be entitled to pre-install any app, including search engines other than Google Search. The impact of these new rules on Google’s business is substantial but will materialize only in the long run, since it will take some time for competitors to (re)emerge. The European Commission will also monitor this phase: it is probable that, unlike in the past, Google (as well as other dominant OTTs) will not be allowed to buy and incorporate potential competitors.
If the European Commission’s analysis is correct – but we will only know at the end of Google’s inevitable appeal – users can only be happy: not only will Google continue its business, but there will be room for potential competitors, a novelty for many Internet users, many of whom – for age reasons – have never imagined the possibility of a search engine other than Google.
Some final considerations: someone will say that the sanction against Google is an act of war by Europe against the US, and that Trump will now impose duties on German and French cars, and maybe even on Ferrari and Parmesan. In truth, the biggest complainants against Google are US companies, which today have obtained in Brussels what Washington has never delivered. The same happened in 2004 to Microsoft, which had been attacked by Sun Microsystems before the Commission. In other words, the great US antitrust battles are now being fought in Europe, not in the United States.
From this derives another consequence: if there were no European Union, with supranational institutions holding binding powers, such as the European Commission with its antitrust sanctioning powers, the European states would be defenseless in the face of global multinationals, be they US, European or Chinese. Those who think they can govern the great global issues from an exclusively regional perspective should be aware of this.
(NB: the original version of this article was published in Italian on La Stampa)
The European Union and Japan have agreed to create the world’s largest area of safe data flows, allowing their companies to securely process the personal data of both European and Japanese citizens anywhere within this vast territory. This creates one of the largest and, above all, most important digital markets in the world, made up of almost 640 million consumers with an average spending capacity higher than that of the world population.
For European and Japanese companies it is a breath of fresh air in times of trade wars. Thanks to the agreement, exchanges will be facilitated, especially in services and sophisticated high-tech goods, which are largely based on the processing of users’ personal data: from the most common Internet services (e-commerce and billing) to more sophisticated ones (e.g. cloud), but also many innovative products such as connected TVs, game consoles and all kinds of IT goods. Even seemingly less sophisticated but valuable sectors, such as luxury or gastronomy, will benefit from the data agreement, to the extent that sellers need marketing policies focused on certain customer segments.
The European Union and Japan have agreed that their respective data protection systems (in Europe, the famous GDPR, which entered into force on 25 May 2018) are equivalent, and therefore their companies will be able to process and transfer consumer data anywhere from Lisbon to Tokyo within a homogeneous framework, without having to question whether something is allowed from one side of this huge market to the other. Conversely, in the absence of such an agreement, a European or Japanese company would have to make an ad hoc analysis whenever its business involved processing the data of the other party’s citizens, to check whether this was lawful under the other country’s legislation.
It is interesting to note that the agreement between the European Union and Japan does not constitute a compromise between the two parties: on the contrary, it is basically Japan that has agreed to adjust its data protection legislation to the European GDPR, which sets more rigorous levels of protection.
From a geo-economic point of view, the agreement is extremely significant, because it comes during the explosion of research on the Internet of Things, connected cars, robotics and artificial intelligence, extremely sophisticated sectors which need access to large amounts of user data. The creation of this single big-data market helps European and Japanese industries, which normally suffer from competition from US and Chinese players enjoying greater economies of scale as well as an accumulation of data (including data of European citizens). This advantage, however, will tend to decrease, at least as regards the processing of data of European and Japanese citizens, since US and Chinese companies can no longer freely process such information after May 25, 2018 (the date the GDPR entered into force).
In fact, although US and Chinese companies may do what they want at home (in compliance with their respective local legislation), the same is not true for their global businesses, which will have to comply with the GDPR. This is a problem that Google, Facebook and other non-European OTTs are considering seriously: in other words, even if a company is headquartered in California, it is unthinkable to split its global data business according to whether the data of European citizens are involved or not. From a technological point of view such a separation would be complicated and expensive, and there would still be the risk of European fines for every mistake. It may therefore be better, in the end, to adapt the whole business to the European GDPR, even if US legislation is less stringent. At that point, however, it would be even better for US OTTs to have US legislation adapted to the European one, that is, to the GDPR, in order to close the circle.
This is precisely what Japan has done, anticipating and resolving the problem for its multinationals, whereas the Trump administration, busy instead worsening international trade relations, is not seizing this opportunity in time. But it will have to, sooner or later.
There is much debate about the copyright reform, and I see the need to make some clarifications based on the actual draft of the legal provisions to be voted on by the European Parliament next week (July 5th), and not on chats and tweets.
Is art. 13 of the new Copyright directive imposing filters?
Yes, but without using the word “filters”, calling them instead “appropriate and proportionate measures leading to the non-availability on those services of works or other subject matter infringing copyright or related-rights”. In technical terms, this can only consist of software that detects and blocks content. It cannot be done manually (otherwise it would take a week to upload content) and it does not work ex post (otherwise the current notice & take down system would be sufficient). Thus, we are talking about automatic preventive filters. Even MEP Cavada, one of the rapporteurs, admitted it enthusiastically in one of his outstanding tweets.
Well, filters are forbidden by EU jurisprudence (the Sabam and Netlog cases), aren’t they?
Yes, and this is the reason why the European Parliament plays with words and avoids calling them “filters”. In addition, art. 13 tries to make such filters appear to be the result of a private negotiation, not an imposition by law. In fact, according to the provision drafted by the European Parliament, the “appropriate measures” (aka filters) should be “taken” by the video-sharing platforms “in cooperation with stakeholders” in case they do not agree on a licensing agreement (aka: paying money) with regard to content uploaded by end users.
Why is it so important to make such filters appear to be a private matter, rather than an imposition by law?
Because when a filter is voluntarily applied by a platform, it is not prohibited by the jurisprudence of the European court, which instead concerns filters imposed by law. Youtube has developed a kind of voluntary filtering mechanism called Content ID.
Ok, but in the end, does art. 13 concern mandatory or privately negotiated filters?
It is about mandatory filters imposed by law, because video-sharing platforms are obliged to adopt them to avoid liability for uploaded content in the absence of a license agreement with rights-holders. However, the tortuous and byzantine wording used by the European Parliament creates intentional confusion and misunderstanding, so that one may believe such filters are a private decision.
Well, could video-sharing platforms simply conclude licensing agreements to avoid the obligation to adopt filters?
Yes, they could. However, it would be an unbalanced negotiation, because when just one party (i.e. the platform) risks legal consequences (i.e. the obligation to adopt appropriate measures, aka filters) in case of failure to reach an agreement, the negotiation cannot be balanced. The other party (the rights-holders) will be in a position to charge more than a fair price.
Ok, then the choice for sharing platforms would be between unfair prices, on one side, and adopting filters, on the other. What will happen in the end?
With the exception of Google/Youtube (which can withstand any legal dispute and has already developed its own filtering technology, namely Content ID), the other (smaller) players will probably find it more convenient to adopt a simple filtering technology provided by the rights-holders themselves, blocking everything the latter wish. Small platforms do not have the money to develop a proprietary filtering technology like Youtube’s Content ID, nor to resist legal disputes with stakeholders.
Ok, this is how the censorship machine works. But I have read that the new copyright directive provides for an exception for freedom of speech, parodies, memes etc…
Yes, but it will not work, because it is mentioned, in a restrictive way, only in a recital (no. 21a) of the directive, not in an article, and it is envisaged only in favor of users, not platforms (so reads recital 21d). This means that platforms’ filters will still be obliged to automatically block and remove everything that rights-holders consider their property, even if it is just an extract, fragment, elaboration or meme of a proprietary work. It will then be up to the user to make a claim, and a kind of redress proceeding will follow. But it is unlikely that a normal individual will start this procedure against the rights-holder, while the platform will simply be pissed off, because managing judicial proceedings is not its job and it is not equipped for that.
Ok, let’s sum up: the first step is that everything is blocked automatically. Then the user can appeal, wait for a justification from the rights-holder, and somebody decides whether or not to reinstate the content. If it is still not reinstated, the user can appeal further.
Yes, and this is the reason why so many Internet fathers, scientists, and the Italian data protection supervisor Antonello Soro are against it. The UN special rapporteur puts it this way: “I am concerned that the restriction of user-generated content before its publication subjects users to restrictions on freedom of expression without prior judicial review of the legality, necessity and proportionality of such restrictions. Exacerbating these concerns is the reality that content filtering technologies are not equipped to perform context-sensitive interpretations of the valid scope of limitations and exceptions to copyright, such as fair comment or reporting, teaching, criticism, satire and parody”.
One last question: do you work for Google?
No, I never got a penny from them.
Who benefits, in the end, from this European copyright reform?
There are two rules on which everyone’s attention is polarized: art. 11 on ancillary rights, which would enable publishers to demand payment whenever an article, or even a short excerpt of it (the so-called “snippet”), is published on the web, irrespective of whether this is done simply to quote the article or to make it easier to find in the Internet chaos through a search engine; and then art. 13, which aims to facilitate licensing agreements between sharing platforms and content owners in relation to content shared by users. In the absence of such licensing agreements, there would be an obligation to adopt “appropriate measures” to ensure the “non-availability” of non-licensed content on the platform, even if it is posted by third parties and not by the platform. The rule, in a tortuous and byzantine text, avoids talking about preventive filters, since such technological tools would risk, in principle, being illicit under European jurisprudence (the Sabam and Netlog cases). The European Parliament thus develops a kind of newspeak, trying to name with different words what cannot be said. But there is no doubt that this article involves preventive filters, since mechanisms for ex-post removal already exist under European legislation, while the pressing demands of the content industry have always been directed at the active and preventive involvement of online platforms. If the law passes, therefore, legal battles will start to make preventive filtering mechanisms possible, while not naming them as such to avoid conflict with the European court. So, we are in full newspeak.
From the above rules we can extract two very important common elements: first, that the main target of the new legislation is Google (more than American platforms in general), which operates as Google News in the case of art. 11 and as Youtube in the case of art. 13; second, that the proponents claim first from Google (and then from others, if any) a sort of economic compensation for the so-called “value gap”, which would be, according to some of them, the gap between the value generated by creative content on the web and how much is returned to those who hold the rights.
The first subject, Google as a target, reflects an understandable battle which is, however, fought with the wrong weapons and, in addition, with the risk of creating collateral damage far worse than the potential benefits. Where some Member States (Germany and Spain) anticipated the measure of art. 11, Google responded by de-indexing news in the countries concerned, so as not to pay the unexpected tax. As a result, publishers asked to give up the tax, but in general the impact on online operators has been heavy, because not everyone can afford to do what Google did, that is, to shut down a business. And here the real problem emerges, namely the dominant position of Google, whose services are essential to the Internet ecosystem, including for newcomers such as publishers moving from the traditional press to the online dimension.
Google appears dominant even when it operates as Youtube and, should the provision on filters and licenses under art. 13 be approved, it would not have to do much: Google has already developed an ad hoc technology, the aforementioned Content ID, costing millions of dollars, and is therefore well equipped to withstand the worst scenario. But could all the others do the same? Google already has few competitors, and with the new legislation it will have none, because nobody could afford to develop a technology like Content ID on infinitely smaller scales. The few remaining players could only pay content providers, and in the end the tax would fall on those with the smallest wallets, certainly not on Google.
Then there is the argument of the value gap. Here too the battle is understandable, but one should still consider whether the problem is being approached in the right way. The digital revolution had an overwhelming effect on consolidated business models: a clear example is the music market, which came out completely transformed by the Internet revolution, from small shops selling CDs for 30 euros to online music stores offering everything in streaming at flat rates. In telecommunications there was a similar process: from phone calls and SMS at high prices we moved to Skype, WhatsApp and flat rates. In both cases, the incumbents (i.e. recording companies and large telcos) did not like the change, but they know that going back to the past is now unthinkable: technology does not allow it and consumers do not want it.
For publishers and content providers, a similar dilemma arose: resist the digital revolution or embrace it? In the end, they opted for a conservative and anti-historical position, thanks above all to the help offered by the European Commission, led in recent years by the German Commissioner Oettinger (who appeared sympathetic to the problems of the great German publishers): so, despite the disruptive power of technology, publishers have invented a sort of ex-lege reimbursement that, moreover, damages the online industry in general (especially the European one, made up of dwarfs and start-ups) more than the large American platforms.
In truth, the change created by the digital revolution should be controlled and exploited, rather than suffered passively, even when submission may seem less bitter because legislation grants some small repayments. This is a solution that could benefit some quarterly profit & loss statements, but only for a short time, since it cannot be a structural solution.
The real problem of the value gap is why the wealth created by the digital revolution remains in the hands of a few operators, as with the accumulation of capital in the 1800s. The problem is not the value gap itself, but its concentration. On this theme, the European copyright reform completely misses the target and, on the contrary, lays the foundations for a consolidation of online monopolies, paradoxically also giving them a sort of mediation power over the circulation of information (as warned by the chairman of the Italian Data Protection Authority, Antonello Soro).
So, who benefits from this copyright reform?
I have the pleasure of reporting here the English translation of the article that Stefano Quintarelli, a pioneer of the Italian Internet, wrote for Il Foglio some days ago. I have been astonished by this article’s ability to explain and summarise, in a few chapters, how the digital revolution is changing the world and what should be done to keep a fair balance between innovation, the market economy and human welfare.
Intermediated of the world, unite!
Intermediati di tutto il mondo, unitevi!
Intermediados del mundo, unìos!
Intermediados do mundo, uni-vos!
Intermédiés de tous les pays, unissez-vous !
The industrial revolution led to a profound social reorganization with respect to the previously predominantly agricultural economy. Economic power, highly concentrated, conditioned political power. In the USA the so-called robber barons, thanks to their control over steel and oil, strengthened their economic power, controlling the economy and society to a great extent. The working class of salaried workers was born and, with it, the conflict with the capitalists who owned the means of production. Market pressure was discharged onto workers, who often lived at the limits of subsistence, and social conflicts, which sometimes erupted into violent movements, intensified. The rich oligarchs conditioned information, political power and the judiciary.
Thanks to the power they held, unmitigated by institutions and protective regulation, added value was accumulated by capital, to the detriment of workers.
From the mid-nineteenth century and for most of the twentieth century, the world divided over alternative solutions to the conflict about the distribution of value between capital and labor.
The paradigm of this conflict was summarized in the final words of the Communist Manifesto of Marx and Engels which ended with the famous phrase “Workers of the world, unite!”.
The socialist states’ answer was state-owned companies, disconnected from the market in order to insulate wages from its pressure, together with a strict regulation of labor relations mediated by the Party. In the West a more articulated model of regulation prevailed, which saw the emergence of institutions such as unions with their right to strike; legislative interventions that defined minimum, incompressible rights for workers in matters of labor, retirement and health; the progressive possibility of worker participation in the widespread ownership of companies; and the birth of antitrust authorities to mitigate economic power and, with it, the influence of economic powers on politics. The Western model that emerged victorious after the end of the Soviet utopia is, however, put on the ropes by the digital revolution and needs a rethinking or, at least, some significant interventions.
Basic research leads to developments in physics which in turn are incorporated into the electronic devices we use every day. The famous Moore’s Law foresees an exponential growth of processing, storage and communication capacity, thanks to a periodic doubling of the performance/price ratio of electronic devices, driven by the ability to create increasingly miniaturized base components. The marginal cost of processing, storage and communication is therefore (or will rapidly become) substantially nil, and the possibilities enormously greater. Artificial intelligence is the term coined to identify the product of the exponential growth of processing possibilities; Big Data, the possibilities of large-scale storage; the Internet of Things, the possibility of interconnection. All this in a synergistic game whereby, at ever increasing speed, increasingly cheap devices spread and interconnect more and more; the related data are recorded and archived, analyzed and processed. Some visionaries believe a time will come when machines will have capabilities superior to those of a human, and human beings will widely incorporate electronic parts to restore or augment their abilities. This moment of human-electronic convergence is called the singularity. That this exponential growth may continue long enough to reach the singularity is, nevertheless, an act of faith. The ITRS roadmap (International Technology Roadmap for Semiconductors) is the development plan defined by electronics manufacturers, and it sets 2021 as the year in which the physical limit of miniaturization will be reached. The miniaturization of electronic components cannot go further, due to quantum interference at atomic dimensions. Singularitarians answer that this wall will be overcome and the exponential development will continue thanks to the invention of something still unimagined. This is the act of faith.
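The exponential dynamic of a periodically doubling performance/price ratio can be made concrete with a small sketch. This is my own illustration, assuming the commonly cited doubling period of about two years (the article does not commit to a specific period), not a claim from the ITRS roadmap:

```python
# Illustrative sketch (not from the article): compound growth under Moore's Law,
# assuming the performance/price ratio doubles every ~2 years.

def moore_factor(years: float, doubling_period: float = 2.0) -> float:
    """Multiplicative growth of the performance/price ratio after `years`."""
    return 2 ** (years / doubling_period)

# After 20 years the ratio has grown 2**10 = 1024 times, so the cost of a
# fixed unit of processing, storage or communication has shrunk by the same
# factor - the "marginal cost tending to nil" described above.
growth = moore_factor(20)
cost_per_unit = 1 / growth  # relative to the starting cost
```

The point of the sketch is only that compounding a modest doubling over a few decades yields a thousand-fold change, which is why the marginal cost of computation becomes negligible within a single generation.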
Nevertheless, even if singularity is not achieved, the effects on society will be very significant. Once the physical limit of development has been reached, competition that can no longer be expressed in performance increases will be expressed in price reductions and electronic devices will permeate the world at a scale hard to imagine. Our ability to access our computing systems, the storage of our data and their communication will no longer be physically confined to our devices but widespread. Our “computer” will be defined by our ability to access such widespread processing and data, by recognizing our identity (the ultimate competitive asset), wherever we are.
From the computer on our desk, to the computer in our pocket, we will arrive – literally – at living inside a computer. Thanks to zero marginal cost, everything that can be computed will be; everything that can be sensed and stored will be; everything that can be interconnected will be.
All these phenomena have accelerated over the last twelve years, with the development of cellular wireless networks, in a virtuous circle of growing possibilities fueled by the synergy between the processing capacity of servers, the transmission capacity of networks, and the processing capacity of pocket computers (smartphones). All this accompanied by an unprecedented speed of diffusion of technical means, by a democratization of access to technologies. In every system into which information is introduced, entropy decreases and the system is optimized. Our ability to solve problems and to optimize the use of resources has increased enormously in recent years. Just think of the availability of information and the possibility of collaboration among researchers in the medical, energy or food fields; of the optimization of transport and logistics thanks to navigation systems with full coordination and knowledge; of fine-grained production control and inventory reduction; of the dematerialization of many activities, reducing the material impact on the planet.
For over ten thousand years the world has experienced drastic changes, but much slower ones, which took generations to unfold, allowing society to understand and adapt (even if such adaptations were sometimes violent).
In this case, this development of the intangible economy was sudden. It would seem that Divine Providence intervened on a world that consumes material resources at a level well above the possibilities of sustainability, offering an incomparable optimization tool.
Every human sector is impacted and so many complexities we face today are rooted in these reasons.
I refer to a material dimension and an immaterial dimension, not to real and virtual worlds. They are not worlds but dimensions, because every human activity previously based on material instruments and relations is to some extent touched by immateriality. Except for some cases of full replacement of a previous material activity with a new intangible modality, in general the immaterial does not exclude the material but integrates it, supplementing it in the same way that length is not an alternative to width but complements it. And it is all very real, not virtual. The term “virtual”, from the medieval Latin virtualis, carries a connotation of unexpressed potentiality. But this immaterial dimension, in which social, economic and political relations take place, is very real, neither potential nor unexpressed.
The basic rules of the immaterial dimension are very different from those of the material dimension. In the traditional material dimension, producing, reproducing, storing, transferring and manipulating have significant costs (economic and environmental). In this recent immaterial dimension, these costs are marginal or zero. Materiality is intrinsically disconnected, being composed of objects that do not communicate with each other; its frictions take time to overcome, cause wear, and returns tend to decrease. The immaterial, which is intrinsically connected, is characterized by real-time feedback (and therefore the possibility of data collection, analysis, customization and adaptation), an absence of wear, and the possibility of increasing returns.
Except for cases of great standardization and repetitiveness, assisted by specific machines, work in the material dimension is carried out by people who need production tools, input to work on, cycles of rest and leisure. With the industrial revolution, this led to the definition of work shifts and commuting to carry out the activity, with consequent impacts on the structure of cities, trade, etc.
Work in the immaterial dimension, if repetitive, can be done by machines that know no shifts; if it involves creativity and relationality, it can be done by people from any place in the world, even benefiting from time zones to cover the whole day.
The digital umbilical cord that binds the parties in an immaterial relationship is exploited to update the product or service with frequent releases, and to customize it thanks to the acquisition and knowledge of data. This personalization goes as far as the individual, raising new questions about the availability of data as a competitive asset.
Up to now, the information available in common to a community has always been an important factor in maintaining harmony and cohesion, and has even favored the definition of social rites. With the individual personalization of the information flow, the role of the media as a social metronome erodes. The personalization of the information received by users, given the current incentives for those who manage the algorithms, leads to the exclusion of unwelcome information and increases the frequency of messages confirming one’s convictions and biases, favoring, through the so-called “filter bubbles”, the acquisition of information one likes, regardless of its degree of truth and correctness. The nil marginal costs of producing and distributing information have eliminated the cost barriers that constituted a friction to its creation and circulation; this reduction of the barriers that limited the dissemination of information has multiplied by orders of magnitude the spread of the fake news that feed filter bubbles. Access to information on every subject, even specialized topics previously limited to insiders, is now ubiquitous and free, fueling the perception of an extreme reduction of distance between experts, enthusiasts and casual readers. This leads to a perceived flattening of hierarchies that pushes the trivialization of expertise, an effect multiplied by the algorithms of the information intermediaries, whose objective function is not the correctness of information but the maximization of the time users spend on their online services. That this produces effects on politics is well known: witness the resurgence of interactions driven by emphasis (also determined by the impulsivity favored by real time and by a mistaken perception of anonymity favored by isolation and the instrumental mediation of communication).
The effects on electoral outcomes are less known, even though Facebook has conducted social experiments showing that it can influence voter turnout, and Zuckerberg recently, in a written letter to the European Parliament, said he cannot rule out that the social network may be used in such a way as to produce manipulative effects on votes.
Private property, the foundation of the Western model of response to the challenges of industrialization, is rooted in the intrinsic properties of materiality, in which goods are rival and excludable. Consequently, goods carry rights, immunities, faculties and privileges defined and codified in laws that are based on rivalry and excludability. The whole legal system rests on these two characteristics.
Control of assets in the immaterial dimension does not take place on the basis of rivalry and excludability. Information, once communicated to a third party, does not diminish the communicator’s ability to enjoy it. President Thomas Jefferson’s aphorism is famous: “He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me”. In order to maintain control and replicate rivalry and excludability, an intangible good or service is not placed at the recipient’s full disposal, as happens with a tangible good, but is often, if the business model and market allow for it, provided connected to a centralized control and invariably accompanied by a contract that regulates in detail the rights, immunities, faculties and privileges, in a largely asymmetrical arm-wrestling match that invariably favors those who provide the good or service over those who use it. In the immaterial dimension, private property, for users, does not exist.
Starting in the 1990s, while the exponential trajectory of digital technologies (computing, storage, communication) was becoming perceptible, policy makers decided to favor their development. There was talk of an information society, with the – correct – idea that it would have a lower impact on the planet’s resources than a development model based on a material economy. Asymmetric rules were made to promote competition and, with it, the birth and growth of alternative telecommunications operators and service providers. The modalities of monetization were not clear, nor were the business models, nor the time when a critical mass capable of sustaining an immaterial economy would be reached. Little by little these clouds have thinned out. The critical mass was reached years ago, and with it the business models and the possibilities of monetization have become very clear.
Entrepreneurs have learned to exploit this regulation to their advantage, using intellectual property laws to impose restrictive contractual conditions on their users, exploiting network effects to benefit from increasing returns (winning over the first user, who needs to be convinced, costs far more than winning the billionth, who begs to be admitted to the interaction with others and hopes never to be expelled), and introducing lock-in factors (de facto constraints in services) to limit the mobility of users.
While in other industries we impose portability – of phone numbers, credit, bank loans, electricity meters or gas contracts – to promote competition, this is not the case online.
Consequently, those who conquer world dominance in a sector can hardly be undermined. Try telling your children to leave Whatsapp and start using Indoona. They will never do it. On Whatsapp they can interact with all their friends; sending them to Indoona would be like condemning them to a nearly deserted island. The same applies to sellers with respect to Amazon, hoteliers with respect to Booking, restaurateurs with respect to Thefork, renters with respect to AirBnb, drivers with respect to Uber, and so on. When an operator is about to win in an industry, investors pour in huge amounts of capital so as to make it the de facto choice for that sector. Competition ceases to be IN the market and becomes competition FOR the market. You do not compete in the brokerage market for holiday homes; you compete to conquer an absolute, unshakeable leadership position in a market niche.
The marketing costs for driving adoption of a service are today the most important investment for an immaterial operator, orders of magnitude larger than the technological ones. These are not technological operators; they are market intermediaries that intercept a share of the added value flowing between producers and consumers. This creates monopolistic or oligopolistic two-sided markets, with intermediaries who dictate their own rules and, on one side, consumers who have little or no other choice and, on the other, producers who must comply with those rules in order to gain access to the market. How many people know that if someone downloads a piece of software and installs it on a Macintosh, the payment goes to the software maker, while if the same is done on an iPad or an iPhone, 30% goes to Apple? The same applies to a newspaper, a song, a book on Apple, Android, Amazon. Or that 25% of the room price (VAT included) goes to Booking – nearly 100% of the hotelier’s margin, out of which the hotelier must still pay running costs, maintenance and – not a tiny detail – the staff? Who is aware of the working conditions of an Uber driver or a Foodora rider? I do not mean these are not opportunities for occasional jobs that can provide supplementary income at a certain moment of a person’s life. But if they cease to be occasional and become continuous, subjected to an algorithmic control much stronger than was previously possible in a traditional employment relationship, an issue arises of regulatory asymmetries that favor one type of activity over another, tilting the competitive plane towards the immaterial monopolistic / oligopolistic intermediaries.
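The arithmetic behind these commissions is simple but telling. As an illustrative sketch with hypothetical prices (only the 30% and 25% rates come from the examples above):

```python
# Illustrative arithmetic with hypothetical prices; only the commission
# rates (30% app store, 25% booking platform) come from the text above.

app_price = 10.00            # price the user pays for an app (EUR, hypothetical)
app_store_cut = 0.30         # share retained by the platform
developer_gets = app_price * (1 - app_store_cut)

room_price = 100.00          # gross room price, VAT included (EUR, hypothetical)
platform_commission = 0.25   # share retained by the booking intermediary
hotelier_keeps = room_price * (1 - platform_commission)

print(f"Developer receives EUR {developer_gets:.2f} out of EUR {app_price:.2f}")
print(f"Hotelier keeps EUR {hotelier_keeps:.2f} out of EUR {room_price:.2f}, "
      "before VAT, running costs, maintenance and staff")
```

If the hotelier’s all-in costs approach the remaining EUR 75 per room-night, the 25% commission does indeed absorb nearly the entire margin, which is exactly the point made above.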
Let’s now look at things from the side of the immaterial monopolists / oligopolists.
They were good. They had an idea, a vision, determination, and a delivery ability far superior to their competitors’. They conquered a dominant position in a niche of the new immaterial intermediation thanks to hard work, great skills and big capital (tightening their belts at the beginning, until it became clear to the venture capitalists who the winners would be).
Now they are monopolists (or maybe oligopolists); they
and I surely forget some other aspects …
We are entering the merits of a question that is quintessentially political – if we define politics as the tool to achieve future, socially desirable goals.
We can no longer limit the analysis to capital and labor; we must also include information in the equation, and the digital revolution that expresses it.
Can we accept a future in which, for every economic activity carried out by producers – capital and labor – those who control the third variable – information – are a few monopolistic / oligopolistic intermediaries (monopsonists / oligopsonists) who extract value from the control of intermediation, squeezing it out of capital and, in cascade, out of labor?
Capitalism has found ways of balancing the conflict between labor and capital that have surpassed the socialist / communist model of collectivization of the means of production.
In just a few years, the traditional capital-labor conflict has been enveloped and dominated by another conflict: a conflict with information which, through the control of intermediation, presses on both.
In just a few years, the five largest companies in the world have become operators that rely on dominance over the intermediation of some vertical market. Three entrepreneurs control economic empires larger than those of many OECD states.
We are observing a monopolistic drift: the growing relevance of the immaterial dimension over the material one in the creation and distribution of wealth, a rising conflict between intermediators and intermediated, the compression of rights and guarantees for large social groups, and a significant political influence.
A dominance that we could define as “info-plutocracy”.
The info-plutocracy of the intermediators is based on a centralized control of information, both in terms of data (of which privacy implications are an epiphenomenon) and of the processes with which such data are collected, processed, communicated and used.
It is a model opposite to the one with which the Internet was born and developed.
For many decades, the Internet was built on protocols: public rules that everyone could incorporate into their software, which established the ways in which computers (servers and clients) communicate, so that anyone could build clients and servers and compete. Telephony has also been based on similar mechanisms, from the devices (telephones, switchboards, exchanges, etc.) to the network equipment used by the operators and the services developed on top of them. Two widely known examples are text messages and e-mail: a decentralization based on a wide variety of interoperating servers and clients, so that anyone can send an SMS or an e-mail to anyone without worrying about which operator or service the receiver uses. Opposite examples are Whatsapp, Facebook, Instagram and Snapchat: centralized services which can be used only by joining the same, single service, managed by a single operator.
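To make the contrast concrete: e-mail rests on open, published standards (SMTP for transport, RFC 5322 for the message format), so any program built with a standard library can produce a message that any operator’s server will understand, with no single company’s permission required. A minimal sketch (the addresses are hypothetical):

```python
from email.message import EmailMessage

# Build a standards-compliant (RFC 5322) e-mail message using only the
# standard library: the format is public, so any independently written
# client or server can produce and parse it.
msg = EmailMessage()
msg["From"] = "alice@example.org"   # hypothetical sender
msg["To"] = "bob@example.net"       # hypothetical recipient on a different operator
msg["Subject"] = "Interoperability"
msg.set_content("Open protocols let any client reach any server.")

# The wire format is plain, inspectable text; handing it to an SMTP
# server (e.g. via smtplib) would deliver it to any provider.
wire = msg.as_string()
print(wire)
```

A WhatsApp or Snapchat message, by contrast, exists only inside one operator’s proprietary service: there is no public wire format that a third-party client could implement, which is exactly the centralization described above.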
This closed approach, once planetary domination has been established, reduces competition and reduces the biodiversity of the infosphere, with the effects I described above – the opposite of the spirit of openness and maximum user contestability that gave birth to the Internet and made it grow so quickly.
The effects of the digital revolution extend to all markets intermediated by monopolistic / oligopolistic (and monopsonistic / oligopsonistic) operators.
Summing up: the conflict between capitalists and workers induced by the industrial revolution of the eighteenth and nineteenth centuries developed within the relationship between capital and labor, with opposing ideologies that, after many decades, saw the prevalence of a model of mass capitalism, tempered with rules of protection and guarantee for consumers, over the socialist / communist model. The debate between the political right and left has developed around the point of equilibrium between them.
The conflict between intermediators and intermediated induced by the digital revolution of the twenty-first century develops within the relationship between information and production (understood as the product of capital and labor). It is starting a social confrontation between the model of centralized information management that has developed in recent years (supported by the large technological multinationals) and a decentralized model promoted by some avant-gardes (philosophical, technological, political, etc.): a debate with profound differences between those who advocate closed systems and environments and those who fight for decentralization, to foster greater competition and user contestability.
Relationship Capital – Labor:
Mass capitalism | Socialism / Communism
(categories of the 18th and 19th centuries)

Relationship Information vs. Production (Capital & Labor):
Intermediators | Intermediated (Capital & Labor)
(categories of the 21st century)
or, representing the conflicts in another way:
Former conflict: Capital vs. Labor
New conflict: Information vs. (Capital & Labor)
For how long will it be possible to ignore this “info-plutocracy” and this new conflict between intermediators and intermediated? Can we allow it to keep expanding, vertical after vertical, to other sectors of the economy, hoping that some new invisible hand will solve the problems? Does anyone think it is possible to un-invent digital technologies and the Internet that is their expression? Or can we think of socially desirable goals that require political intervention? And what kind of interventions?
The reduction of tax revenues, the conditioning of political opinion, the pressure on traditional operators are, as a matter of fact, just different facets of the same phenomenon: the prevalence of monopolistic / monopsonistic information over capital and labor.
I do not think there is a simple answer to these problems, such as raising taxes, as some would like to do: save for some cases, these extra costs would simply be passed on to consumers or producers.
In some cases it has been proposed to build “state champions” (such as a public search engine, a social network or a public platform for professional bidding). In other cases it has been proposed to treat social networking as a non-duplicable social infrastructure, and someone has even proposed nationalization. These are hypotheses that bring to my mind the Soviet response to the pressures of industrialization: state-owned companies.
I do not believe that such measures, with their totalitarian flavour, can work; I believe they would generate bigger problems in adjacent areas (from social control to vulnerabilities for privacy and other fundamental rights) than those they try to solve.
I believe we need to respond as Western society responded to the industrial revolution, that is, with market-oriented interventions, favouring less concentration of information and regulating negative externalities. We should not give in to the logic of the inevitability of closed systems, and we must stand firmly on the side of openness.
To tackle the digital revolution we need a comprehensive package of measures based on the principles of what we already did in the period of the industrial revolution: new forms of taxation, innovations in welfare and workers’ rights, public controls to guarantee consumers and, fundamentally, increased competition – pro-competitive rules, user contestability, interoperability of services, and so on.
But this can hardly happen without an awareness of this new conflict of intermediation – information on the one hand, production (the combination of capital and labor) on the other – and without this awareness being translated into political action.
In order for this political action to take place, the intermediated must demand it, coalescing around this awareness:
Intermediated of the world, unite!
It may become impossible to share memes, parodies, artistic or political videos, because the filtering obligation voted today would require online platforms to carry out a prior check on all the creations shared by users that may contain protected content. Likewise, it could become impossible to find news on the major search engines, because the latter, in order not to pay a tax on snippets – the phrases showing the descriptions of the pages the user is looking for – could suspend the service (as has already happened in Spain and Germany because of similar national laws). Today’s decision of the European Parliament’s Legal Affairs Committee is for now only a legislative step, but this scenario could actually materialize if the European Commission’s proposal on copyright reform were to be definitively approved in the coming months by the European legislators, namely the European Parliament in plenary session and the Council of Ministers. In any case, the narrow majority that today approved the text submitted by the rapporteur Voss suggests that the struggle is still long.
NB: Please note that the JURI Committee asked to apply art. 69c of the Rules of Procedure, whereby the committee can directly enter the Trilogue negotiations without a plenary mandate. The plenary of the EP can however oppose this request with 10% of the votes against.
But how did we get to this point? There was no doubt that a reform of copyright law was needed to adapt it to the evolution of the Internet: one should consider the new models of distribution and usage, the existence of new operators and intermediaries (unknown until a few years ago), as well as the need to protect and promote content effectively in a new technological and market environment. Alongside these issues, however, a new idea emerged: the so-called “value gap”. The traditional content industry (especially commercial TV, audiovisual producers and major publishers) claims that, with the advent of the Internet and above all of the social online platforms (primarily Youtube, but also GitHub, Instagram and eBay) that host content uploaded by users and monetize it in various ways, notably through online advertising, a good part of its value is now captured by others.
The theme is important, because safeguarding a quality industry for content and journalism is fundamental to European society. However, the proposal being discussed in Brussels seems to cause so much collateral damage that it should be heavily rethought. It is no coincidence that the great fathers of the Internet (including Vint Cerf, Tim Berners-Lee and Tim Wu) and the UN expert on freedom of expression, David Kaye, have recently intervened asking to stop and restart from scratch.
In fact, although the reform defended by the large television companies has been presented to European politicians as a crusade against Google & Co, in the end it is mostly users, SMEs and start-ups that will be affected: for them the Internet would no longer be a free space to exchange ideas and to experiment with creativity and new business models. The obligation of preventive filtering could have very serious restrictive effects, including consolidating the market around the large existing players (mostly American) that have the resources to cope, while leaving out the smallest (especially European) ones. Nor is it certain that the European content industry would really gain: indeed, many European publishers, especially the smaller ones operating mainly online, have firmly opposed the reform.
The ongoing debate is increasingly showing the strengths and weaknesses, but above all the contradictions, of the European content industry. On the one hand, this sector appears very strong in dictating its agenda to national and European legislators, because few MEPs, ministers or commissioners feel able to oppose organizations that manage to present themselves as the only representatives of culture and creativity (while reality is a bit more complex). On the other hand, the same sector appears subordinate, divided and fragmented with respect to the American giants, be they Internet companies or content producers, and seems condemned to an ineluctable economic and cultural dwarfism with respect to the great global trends.
These contradictions emerged in the other fiery debate dedicated to copyright, the one on geo-blocking. Here the European content industry has won, succeeding in defending a legal framework that basically allows geo-blocking of online content, so that an Italian user, for instance, can be prevented from enjoying online content delivered in another country – a situation that facilitates the proliferation of piracy and of alternative technological workarounds. This system is defended by the European industry with the argument that such territorial restrictions are necessary to finance the production of films, since producers and distributors normally agree on exclusive territorial licenses. The market actually works like this, and European lawmakers have confirmed the status quo, fearing that otherwise they would endanger European content producers. But no one seems to have realized that the right to geo-block is in fact exploited by the great American majors, who produce in the US and export their cultural model, while in Europe they can obtain very high profits thanks to the power to geo-block the continent and divide it artificially into 28 (soon 27) markets – an absurdity that at home (where they have 50 states) would not be allowed. This system strengthens the cultural and creative leadership of the Americans in Europe, while here we discuss why it is right to prevent an Estonian user from downloading an unknown film from a Hungarian site. It would be time for Europe to think again in European terms rather than in defense of national interests.
NB: this is not the place to criticize the EU, since this disconcerting picture is created above all by national vetoes and diktats, rather than by the European bureaucracy. On the contrary, where the Brussels offices can move with greater freedom, more results can be seen: indeed, many eyes are directed towards Commissioner Vestager and her Directorate-General for Competition, which has been investigating the pay-TV market for some time. Thanks to their enforcement powers, Brussels officials could soon declare invalid the territorial agreements that gave rise to the practice of geo-blocking. If this were to happen, any new restrictive copyright reform would have to deal with a completely new scenario.
Whether the new European Electronic Communications Code will encourage or frustrate network investments (you will soon read different opinions about it), there is something fundamentally new in the telecom reform politically agreed today by the European Trilogue: for the first time in the history of European telecom regulation, investment in very high capacity networks becomes a binding objective for national regulators (together with competition, the single market and consumer benefits):
“promote access to, and take-up of, very high capacity data connectivity, both fixed and mobile, by all Union citizens and businesses”.
The new Code, proposed by the European Commission in September 2016, marks a radical change compared to the previous regulatory framework, in that it squarely addresses the urgent need for enhanced infrastructure investment. This may appear obvious, considering the public ambition for more sophisticated connectivity, but in reality one should remember that only a few years ago the European Commission took a different approach. In July 2012 the then Commissioner Neelie Kroes, in order to protect the financial viability of traditional European telcos, intervened publicly to protect the access price of copper networks (i.e. the traditional telephony networks used for ADSL), causing the sector to continue relying on these obsolete networks rather than massively investing in fibre. Kroes’ aim was to keep the incumbents’ cash flow constant in a time of financial crisis and eventually encourage altnets to roll out their own new networks, but the consequence of this choice was dramatic in terms of industrial policy: European incumbents felt encouraged to continue exploiting old-fashioned copper networks as cash machines, while being less incentivised to roll out new full-fibre networks. As a result, various FTTH industrial plans were abandoned or downsized to FTTC (where fibres extend only to the street cabinets, not up to the users’ premises as in the case of FTTH). This trend was particularly evident in countries where the local incumbent wanted to keep the old copper networks alive as long as possible (see Germany and Italy, for instance). Kroes’ move was perhaps understandable in terms of financial stability, also considering the fear that non-EU telcos could take over European incumbents, but it caused a delay in the roll-out of European fibre which is still reflected in the gap in fibre deployment between the EU and other regions (US, Japan and Korea).
To make this policy change more remarkable, the Code’s parameters defining very high capacity networks are set by reference to the characteristics of optical fibre:
“‘very high capacity network’ means either an electronic communications network which consists wholly of optical fibre elements at least up to the distribution point at the serving location, or any type of electronic communications network which is capable of delivering under usual peak-time conditions similar network performance in terms of available down- and uplink bandwidth, resilience, error-related parameters, and latency and its variation. Network performance can be considered similar regardless of whether the end-user experience varies due to the inherently different characteristics of the medium by which the network ultimately connects with the network termination point”.
The aim is to favor investment in networks made entirely of optical fibre up to the buildings (FTTH and FTTB), thereby accelerating the replacement of old and obsolete copper networks. Needless to say, the drafting of this definition was the target of furious counter-lobbying by the part of the telco industry that preferred vaguer terms, so as to bring upgraded copper networks (FTTC and vectoring) back within its scope.
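Read as a decision rule, the quoted definition has two alternative prongs: all-fibre at least up to the distribution point, or equivalent peak-time performance. A minimal sketch of that structure (the boolean inputs stand in for assessments that, in practice, are left to regulators’ measurements and guidelines):

```python
def is_very_high_capacity(fibre_to_distribution_point: bool,
                          similar_peak_time_performance: bool) -> bool:
    """Sketch of the Code's two-pronged 'very high capacity network' test.

    Prong 1: the network consists wholly of optical fibre elements at
             least up to the distribution point at the serving location.
    Prong 2: the network delivers, under usual peak-time conditions,
             similar performance (down-/uplink bandwidth, resilience,
             error-related parameters, latency and its variation).
    Either prong suffices.
    """
    return fibre_to_distribution_point or similar_peak_time_performance

# FTTH/FTTB qualifies on the first prong:
print(is_very_high_capacity(True, False))    # True
# Upgraded copper (FTTC + vectoring) qualifies only if it actually
# matches fibre-like peak-time performance:
print(is_very_high_capacity(False, False))   # False
```

This is why the drafting battle mattered: vaguer performance language in the second prong would have let upgraded copper networks qualify without actually matching fibre.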
When will this fibre revolution take place? Despite today’s agreement in the Trilogue, further formal steps are still required: some parts of the Code are agreed only in principle, and details and recitals still need to be finalised. The formal approval by Council and Parliament may take place only after the summer, and Member States will then have 24 months to transpose the directive’s provisions into national law. This means that the new rules will become effective only at the end of 2020. However, since we are talking about long-term investments, it is clear that investors have already received the right signal and will therefore favour full-fibre investments right now.