Digital Platforms as Coordinators of the System

It is not always simple to identify the role of digital platforms in the media ecosystem and the legal nature of the services they provide. Traditional media were subject to a mature and detailed regulatory framework, based on the liability of the editor for the content that was published or broadcast. It was not evident whether such a framework should be automatically extended to digital platforms, particularly as, in their early days, they held a weak position in the media ecosystem or were not even recognized as actors in these markets. Over time, however, platforms have reached a position of power that surpasses that of any of the traditional media corporations.

Specific regulation was adopted over the decades to govern traditional media. Media were often subject to licensing (particularly terrestrial TV), restrictions on concentration, obligations regarding intellectual property, liability when content breached defamation laws, protection of minors, restrictions on advertising, and protection of independent as well as local content producers, among other rules. In some jurisdictions, media were closely controlled by the government, including through content censorship. Such restrictions were defined at a national or even at a local level.

Digital platforms evolved from the hosting and transmitting of basic content such as emails, websites, and blogs. These activities were originally considered relatively harmless, as they had limited impact on society. Platform regulation was defined at an early stage, when network effects around platforms were not mature and the potential role of platforms in the media and advertising markets was not fully understood.

In the US, Section 230 of the Communications Decency Act, adopted in 1996, provides immunity from liability to providers and users of an “interactive computer service” who publish information provided by others. When NBC identified that YouTube was hosting the video “Lazy Sunday,” it asked YouTube to take down that video, as well as 500 other videos covered by its intellectual property rights. YouTube took down the videos, but NBC did not get the $1 billion it had requested in damages. YouTube was not liable for the content uploaded by a user; it just had to take it down at the request of the rights holders. A similar logic was invoked by eBay in the landmark case Tiffany v. eBay, adjudicated by the Second Circuit in 2010: “For contributory trademark infringement liability..., a service provider must have more than a general knowledge or reason to know that its service is being used to sell counterfeit goods.” Therefore, eBay was not liable for trademark infringement.

In the EU, providers of the most popular “information society services” were exempted from liability by the Directive on Electronic Commerce adopted in 2000.20 The directive first allowed providers of digital services to follow the regulatory regime of the country of establishment rather than the regulation of the country where the services were being used by consumers. Furthermore, service providers acting as intermediaries (those acting as a mere conduit, caching, or hosting information) would be exempted from liability. They would have no general obligation to monitor the information they transmit or store. It was only expected that the platform would expeditiously remove illegal content upon obtaining actual knowledge of its illegality. This regulation was designed for ISPs and for platforms managing websites, blogs, and email services. It was subsequently extended to marketplaces such as eBay.

Video-sharing platforms were in this way subject to a privileged regulatory framework. This regulatory protection proved to be a very effective shield against copyright infringement claims, YouTube being a good example. However, the exemption extended to the sector-specific obligations defined in audiovisual legislation. In the EU, platforms were considered mere intermediaries, empowering content producers to interact with the audience. Digital platforms would not be subject to the obligations imposed on networks with “editorial responsibility” by the Audiovisual Media Services Directive.21

What seems clear at this stage, as platforms have matured, is the central role that digital platforms play in the ecosystem created around them. Platforms may not create content, or play the role of an editor in a newspaper or TV network, but they have an increasingly active role and the ability to shape the ecosystem around them, including traditional media. They have the power to determine the success of a content producer. They even have the power to exclude content providers from the platform. As the leading platforms concentrate and reach positions of market power, such powers are subject to increasing scrutiny.

Platforms shape their ecosystems as they decide how to match the different sides of the platform. Such matching is carried out by algorithms that automate it according to principles that are not always transparent and that determine the success of some content and content providers over others. Let us take a closer look at the example of YouTube, following the analysis by Burgess and Green.22

YouTube originally ranked content according to objective criteria: the most viewed, the most favorited, the most responded to, the most discussed, and so on. Rankings were displayed by category and across the whole platform. The process was transparent. Viewers could identify which videos were more popular and decide whether or not they wanted to see them.
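As an illustration only, a minimal sketch of that early, transparent ranking logic might look as follows; the Python structures, field names, and figures are hypothetical, not YouTube data or code:

    # Illustrative sketch of transparent, platform-wide popularity rankings.
    # The Video structure and the sample figures are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Video:
        title: str
        views: int
        favorites: int
        comments: int

    videos = [
        Video("Clip A", views=1_200_000, favorites=45_000, comments=8_000),
        Video("Clip B", views=300_000, favorites=60_000, comments=15_000),
    ]

    # "Most viewed" and "most discussed": single public counters,
    # producing the same ordering for every visitor.
    most_viewed = sorted(videos, key=lambda v: v.views, reverse=True)
    most_discussed = sorted(videos, key=lambda v: v.comments, reverse=True)

    for v in most_viewed:
        print(v.title, v.views)

The point is that the ordering depends only on counters that any viewer can see for themselves.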

YouTube has evolved from passively providing transparent ranking for users to decide, to actively guiding viewers to specific content. In this way, YouTube has started to shape content in the platform.23 The platform has developed automatized algorithms that personalize the videos proposed to each individual viewer.24 The parameters introduced in the algorithms are not transparent. Given that YouTube is a commercial enterprise, and based on the understanding of its business model, it can be inferred that the platform has an incentive to attract the attention of the viewers for the longest possible time. In this way, the platform can sell more ads to advertisers. Based on the knowledge of each user, YouTube can guide them to content that they will like, reducing the element of chance and creating a bubble of safe and comfortable content for the viewer.
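The shift to personalization can be sketched in the same spirit; the scoring function, affinity values, and field names below are assumptions chosen to illustrate the inferred incentive (maximizing expected watch time per viewer), not YouTube's actual recommendation system:

    # Hypothetical sketch: rank candidate videos per user by expected watch time,
    # so the ordering differs for each viewer and the criteria are not visible to them.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        title: str
        avg_watch_seconds: float  # how long viewers typically stay on this video
        topic: str

    # Assumed per-user topic affinities, as if learned from viewing history (illustrative).
    user_topic_affinity = {"gaming": 0.9, "news": 0.2, "music": 0.6}

    def expected_watch_time(c: Candidate) -> float:
        # Affinity acts as a rough probability of engagement; multiplied by the
        # typical duration it proxies "time this viewer's attention is retained".
        return user_topic_affinity.get(c.topic, 0.1) * c.avg_watch_seconds

    candidates = [
        Candidate("Speedrun highlights", 420.0, "gaming"),
        Candidate("Evening news recap", 600.0, "news"),
        Candidate("Live concert", 300.0, "music"),
    ]

    for c in sorted(candidates, key=expected_watch_time, reverse=True):
        print(c.title, round(expected_watch_time(c), 1))

Under this assumed objective, two users with different histories receive different orderings, and neither can reconstruct why.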

Monetizing eyeball time on the platform is the ultimate driver of the algorithm. Increasing time spent on the platform is a means, not an objective in itself. The real objective is to maximize advertising revenue; it is this objective, together with the interests of the large advertisers, that shapes the content proposed to viewers. Of course, large advertisers want illegal content such as terrorism and pedophilia excluded from the videos where their ads are displayed. However, large advertisers are also sensitive to potential boycotts when their ads are shown alongside videos that may be perfectly legal but not in line with the sensitivities of pressure groups.

YouTube has developed specific tools to limit the visibility of perfectly legal but potentially risky content.25 In 2017, following a bitter campaign by newspapers in the UK and the US, YouTube implemented a new policy: only channels that had reached 10,000 lifetime views would be eligible to receive a share of the advertising revenue. “This new threshold provides us enough information to determine the validity of the channel,” explained YouTube. In the same way, some content is simply permanently “demonetized,” in the sense that it is allowed on the platform, but no ads are shown and no payments are made to the content producer, regardless of the number of viewers. Human reviewers have been introduced, as well as an appeals procedure against demonetization decisions. The most popular YouTuber, PewDiePie, was expelled from the Google Preferred program and his YouTube Premium series was terminated after anti-Semitic references were identified in some of his older videos. Restrictive policies have covered all kinds of hate speech, but also conspiracy theory content and fake news, following so-called “targeted flagging” by users. The ultimate tool is exclusion from the platform, as happened in 2018 to Alex Jones, one of the most popular conspiracy theory promoters.
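The eligibility rule described above can be pictured as a simple gate; the function below is a hypothetical illustration of a threshold-plus-override monetization check, not YouTube's implementation:

    # Hypothetical sketch of a threshold-based monetization gate (illustrative only).
    LIFETIME_VIEWS_THRESHOLD = 10_000  # the 2017 eligibility threshold described above

    def eligible_for_ads(channel_lifetime_views: int,
                         permanently_demonetized: bool) -> bool:
        # Content may stay on the platform while earning nothing:
        # permanent demonetization overrides any view count.
        if permanently_demonetized:
            return False
        return channel_lifetime_views >= LIFETIME_VIEWS_THRESHOLD

    print(eligible_for_ads(25_000, permanently_demonetized=False))  # True
    print(eligible_for_ads(25_000, permanently_demonetized=True))   # False
    print(eligible_for_ads(4_000, permanently_demonetized=False))   # False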

YouTube’s curating policies demonstrate the central role of the platform in shaping the ecosystem and in determining the nature and the identity of the service providers who will succeed on the platform. Most of the curating policies implemented by the leading platforms appear to be necessary and beneficial for the community. However, as platforms concentrate and gain market power, transparency in the platforms’ curating policies, particularly in their algorithms, and protection of the rights of content providers seem to be necessary.

European authorities have decided to increase the regulatory obligations on the new category of “video-sharing platform providers.” Amendments to the Audiovisual Media Services Directive adopted in 2018 have imposed new obligations on platforms such as YouTube and Facebook. Such platforms are not subject to the same regulatory obligations as TV networks and platforms that develop their own content or offer content on demand (Netflix). While platforms like YouTube and Facebook have no editorial responsibility, they are obliged to take appropriate measures to protect minors, and to protect the general public from content containing incitement to violence or hatred against minorities and from content that constitutes a criminal offense, such as terrorism and pedophilia.

They have to establish age verification systems for users and parental control systems, mechanisms for users to report or flag content, systems for explaining to users what effect has been given to their reporting and flagging, and procedures to handle user complaints, among other measures.

 