Existing approaches to combating false information

Curbing the spread of false information in India has been a daunting process. One reason is that the false information ecosystem involves a number of actors, each operating strategically to further their own goals. By comparison, the entities working to limit the spread of misinformation and disinformation are often under-resourced and lack a coherent strategy. What follows is a breakdown of the key actors in this ecosystem, the approaches they have adopted to combat false information and recommendations on how these efforts can be improved.

The government of India and Indian political parties

As previously outlined, the government and political parties in India both routinely play a role in creating and disseminating false information. This is often done in order to galvanise supporters along partisan lines. However, these entities have also begun exploring potential solutions to this false information crisis.

At a local level, district officials in the city of Kannur in the state of Kerala introduced disinformation workshops in schools in 2018. These 40-minute classes have been launched in approximately 150 of the district’s 600 government schools, and they focus on educating students on how to identify and combat misinformation. This programme is the first of its kind in India, and was created after Kannur suffered a string of viral misinformation campaigns related to vaccinations and an alleged spread of the Nipah virus through poultry (Biswas 2018). These efforts are notable but are currently taking place only at a small scale in certain parts of the country. If the government wants to have a stronger impact, such education programmes should be launched nationwide.

At a national level, the Indian government has considered passing legislation to criminalise and curb the spread of false information. As internet platforms have struggled to keep the spread of false information in check, the government has warned that they may face legal action (Bengali 2019). In the United States, where many of these platforms are headquartered, the government is restricted by the First Amendment from dictating how platforms moderate and manage content online. The Indian government faces no equivalent constraint and is able to impose such legal rules. However, there is a danger that such rules will incentivise censorship and threaten freedom of expression online. In December 2018, the government proposed amendments to rules under Section 79 of the Information Technology (IT) Act, 2000. These amendments would require internet platforms to proactively identify and remove “unlawful information or content”. The definition of “unlawful information or content” is broad, and it was written to include false information (Bajoria 2019). However, it could also cover content that the government finds unfavourable, such as content posted by dissenters or activists. As a result, many critics have raised concerns that these amendments could be used to infringe on freedom of expression.

The proposed rules also seek to require platforms to be able to “trace” the origin of content shared on their platforms. For platforms that enable the public distribution of content, such as Facebook and Twitter, this may be straightforward. However, for platforms that offer end-to-end encrypted messaging services, such as WhatsApp, Signal and Telegram, this is not currently possible (ibid). In order to comply with these requirements, these services would have to undermine their encryption, which is one of the key features and selling points of their products. Encryption enables users to communicate in a private and secure manner and is integral to the safety of individuals such as journalists, political activists and human rights defenders. If these platforms were to undermine their encryption, they would be putting thousands at risk (see note in references for International Coalition of Organizations). The alternative is for these platforms to refuse to comply with the requirements. However, this would likely result in them having to leave the Indian market altogether (Bajoria 2019).
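To illustrate why traceability conflicts with end-to-end encryption, the following is a minimal sketch using the PyNaCl library (chosen here purely for illustration; WhatsApp actually uses the Signal protocol, which is far more complex). Because every copy of a message is encrypted separately with a fresh nonce and sender-specific keys, the server relays only ciphertexts that look unrelated to one another, so it cannot match copies of a rumour and trace its origin without weakening the encryption itself.

```python
# A minimal sketch, assuming PyNaCl; not WhatsApp's actual protocol.
from nacl.public import PrivateKey, Box

# Each user holds a private key; only public keys are shared.
alice_sk, bob_sk, carol_sk = (PrivateKey.generate() for _ in range(3))

message = b"Forwarded rumour: ..."

# Alice and Carol both forward the *same* plaintext to Bob.
ct_from_alice = Box(alice_sk, bob_sk.public_key).encrypt(message)
ct_from_carol = Box(carol_sk, bob_sk.public_key).encrypt(message)

# The server relays only these ciphertexts. Each encryption uses a fresh
# random nonce and a different sender key, so identical plaintexts look
# completely unrelated in transit, and the platform cannot link copies of
# a message to trace its origin.
assert ct_from_alice != ct_from_carol

# Only the intended recipient can decrypt.
assert Box(bob_sk, alice_sk.public_key).decrypt(ct_from_alice) == message
```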

Further, the proposed rules would enable the government to obtain user data from internet platforms, often without a court order. Over the past few years, the Indian government has requested increasing amounts of user data from internet platforms (The Economic Times 2019), and as a result the proposed amendments have sparked concerns that the government is trying to incorporate social media platforms into its mass surveillance efforts (Patil 2019). The amendment was initially tabled in March 2020 and is currently awaiting final approval by the Ministry of Electronics and IT (Q, 2020).

The government has also instituted numerous internet shutdowns in order to prevent and quell violence that has erupted as a result of false information online. In 2019, India had the highest number of government-ordered network disruptions in the world, with 106 reported internet shutdowns. Thus far in 2020, there have been 58 reported internet shutdowns in the country (Internet Shutdowns n.d.). However, there is no evidence that internet shutdowns actually help curb violence or the spread of false information. Rather, these shutdowns have cost the economy billions of dollars in productivity losses over the years (Sushma 2018) and have prevented the free flow of information in certain regions of the country. This has curtailed the freedom of expression of numerous individuals and has prevented platforms and authorities from combating false information in real time (Bajoria 2019).

Fact-checking organisations

As misinformation and disinformation campaigns have grown in scope and scale in India, a number of fact-checking organisations have been established to help review and verify content. These include Alt News, a non-profit organisation that uses online video verification and social media tracking tools to debunk false content, and BOOM, a Mumbai-based fact-checking agency. News outlets such as the BBC have also developed their own in-house fact-checking departments. The BBC’s department operates a WhatsApp tip line through which users can send potentially false content for review and verification (Phartiyal and Kalra 2019).

However, these fact-checking services face an array of challenges. First, most are small, under-funded operations that employ only a few people. For example, as of 2019, Alt News employed only ten people and was able to debunk approximately four posts a day (ibid). As a result, the impact they can have on the overall false information environment is limited. In order for these organisations to be effective, they must be able to review and verify content with speed and at scale. It often takes these fact-checking entities days to respond to a user who has submitted a tip about potentially misleading content, and this slow response rate can deter users from engaging with these organisations in the future. It is also challenging to evaluate and quantify the success of such fact-checking organisations, which often makes it difficult for them to acquire more financial support. Additionally, in order for these organisations to be effective, users need to be aware that they exist. Currently, most of these organisations advertise their services on tech blogs and through word of mouth. Going forward, they need to devote more resources towards promoting and raising awareness about their services (ibid). Finally, most fact-checking is conducted only in major languages, such as English, Hindi, Tamil, Punjabi and Urdu (Chaturvedi 2019). However, there are over 120 languages spoken in the country, and fact-checking efforts need to include more local languages if they are going to succeed in monitoring and curbing the spread of false information nationwide (Chan 2019).

Technology companies

As misinformation and disinformation have continued to spread throughout the country, internet platforms have come under increased pressure to respond and take action.

Given that WhatsApp is a major channel for the dissemination of false information in India, the platform has come under particular scrutiny. Following numerous instances of mob violence that resulted from misinformation spread on WhatsApp, the platform introduced a number of new features aimed at curbing the spread of false information. In July 2018, WhatsApp limited the number of members a WhatsApp group chat can have to 256 (Poonam and Bansal 2019). It also introduced a limit on the number of times a user can forward a message; in India, this limit is five times (Ponniah 2019). According to WhatsApp, these changes have decreased the forwarding of messages by 25% (Bengali 2019). In addition, in April 2020, the platform further updated its rules so that “highly forwarded” messages, or messages that have been sent to five or more people, can only be forwarded to one person at a time (Newton 2020). The platform has also removed the “quick forward” button next to messages for users in India (Storyful 2018).

In order to flag the spread of potentially misleading content to users, WhatsApp introduced in-app labels on forwarded messages (Ponniah 2019). When a user receives a message that has been forwarded, it is labelled as a forwarded message in the chat screen. Additionally, when a user tries to share a forwarded message, they see a label that reads “we encourage you to think before sharing messages that were forwarded”.
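A hypothetical sketch of how a messaging client could implement the forwarding limits and labels described in the two preceding paragraphs is shown below. The class names, thresholds and logic are invented for illustration and do not reflect WhatsApp’s actual implementation.

```python
# Hedged sketch of client-side forwarding rules; names and values are assumed.
from dataclasses import dataclass

FORWARD_LIMIT = 5             # ordinary messages: at most five chats per send
HIGHLY_FORWARDED_LIMIT = 1    # "highly forwarded" messages: one chat at a time
HIGHLY_FORWARDED_THRESHOLD = 5

@dataclass
class Message:
    text: str
    forward_count: int = 0    # how many times this message has been forwarded

    @property
    def label(self) -> str:
        """Label shown in the chat screen, mirroring the behaviour described above."""
        if self.forward_count >= HIGHLY_FORWARDED_THRESHOLD:
            return "Forwarded many times"
        if self.forward_count > 0:
            return "Forwarded"
        return ""

def forward(message: Message, target_chats: list[str]) -> Message:
    """Forward a message, enforcing the per-send chat limit."""
    limit = (HIGHLY_FORWARDED_LIMIT
             if message.forward_count >= HIGHLY_FORWARDED_THRESHOLD
             else FORWARD_LIMIT)
    if len(target_chats) > limit:
        raise ValueError(f"Can only forward to {limit} chat(s) at a time")
    return Message(text=message.text, forward_count=message.forward_count + 1)
```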

Further, in 2019 the platform introduced a new set of privacy settings. Previously, WhatsApp users could be added to groups by anyone. This enabled political parties and other groups to easily create group chats and disseminate information through them. Under the new privacy rules, users have the ability to opt out of being automatically added to such group chats, both by their contacts and by users in general (ibid). However, these privacy settings are not easily accessible in the app, making it difficult for users to take advantage of this feature. In addition, users often lack a concrete understanding of what impact such features have on their overall experience on the platform. Going forward, platforms such as WhatsApp should invest more in rolling out in-app user education features, such as labels and pop-ups, that inform users what controls are available to them and how they can access these settings.

Because WhatsApp offers encrypted messaging services, which as discussed are integral for privacy and security, the company cannot review the content of individual messages. Instead, the platform relies on users to flag potentially suspicious or misleading content for review. However, in order for users to do this effectively, they need adequate social media hygiene and digital literacy practices. To help develop these skills, WhatsApp began hosting digital literacy workshops across the country. However, these workshops have reached only a few thousand users, and as a result they have had a minimal impact on the overall false information landscape (Bengali 2019). In 2018, WhatsApp also launched a nationwide advertising campaign in ten languages. This included three one-minute video advertisements aimed at educating users about the false information environment (Shekhar 2018), full-page advertisements in numerous Indian newspapers highlighting how to “fight false information” (Storyful 2018) and radio advertisements (Phartiyal and Kalra 2019). These advertisements have reached hundreds of millions of Indians (Ponniah 2019).

In addition, WhatsApp deploys artificial intelligence tools to help identify false information and fabricated news. According to the company, as of 2019 these efforts had resulted in the suspension of over 6 million user accounts (Devlin and Johnson 2019), particularly accounts that engage in “bulk or automated messaging” (Poonam and Bansal 2019). Aside from occasionally reported statistics in blog posts or press materials, however, there is little transparency from WhatsApp around the scope and impact of these takedown efforts. In addition, Facebook, WhatsApp’s parent company, does not issue a transparency report outlining the scope of any removals by WhatsApp (Facebook n.d.).
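WhatsApp does not disclose how these detection systems work, so the following is purely an assumed illustration of the general idea: a simple rate-based heuristic that flags accounts sending an unusually high volume of messages in a short window, one plausible signal of “bulk or automated messaging”. All names and thresholds are hypothetical.

```python
# Hypothetical rate-based heuristic; not WhatsApp's actual detection system.
from collections import deque
import time

class BulkMessagingDetector:
    def __init__(self, max_messages: int = 100, window_seconds: int = 60):
        self.max_messages = max_messages
        self.window_seconds = window_seconds
        self.sent_times: dict[str, deque] = {}

    def record_send(self, account_id: str, now: float | None = None) -> bool:
        """Record one outgoing message; return True if the account looks automated."""
        now = time.time() if now is None else now
        times = self.sent_times.setdefault(account_id, deque())
        times.append(now)
        # Drop sends that fall outside the sliding window.
        while times and now - times[0] > self.window_seconds:
            times.popleft()
        return len(times) > self.max_messages
```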

Finally, in April 2019, WhatsApp launched a new project called Checkpoint (Ponniah 2019), which established a tip line in collaboration with the New Delhi-based startup Proto, Meedan and Dig Deeper Media (Ghoshal 2019). Users can send forwarded messages, rumours and suspicious messages to this tip line, and in return they receive a response explaining whether the information in the tip is true, false, misleading, disputed or unverifiable, along with any other relevant information. User tips can include text, pictures, links and videos in English, Hindi, Telugu, Bengali and Malayalam (ibid). However, Proto’s primary goal with this project is to study the false information ecosystem in India. As a result, providing timely responses is not a priority (Ponniah 2019). In addition, WhatsApp has also launched a tip line with BOOM. As of 2019, that line received only 20-30 tips a day (Bengali 2019).

WhatsApp’s parent company, Facebook, has also taken steps to curb the spread of false information on its platform. Like WhatsApp, Facebook has formed numerous partnerships with fact-checking organisations and news agencies, including BOOM and Agence France-Presse (AFP), in order to review and verify content (Shekhar 2018).

The platform has also ramped up its efforts to remove inauthentic content, hate speech and false information. According to Facebook, content and accounts that violate Facebook’s content policies, also known as its Community Standards, are either removed from the platform, labelled or downranked (Singh and Bagchi 2020). However, civil society groups and experts around the world have criticised the company for inconsistently enforcing these policies (Perrigo 2020).

Facebook publishes a Community Standards Enforcement Report, which highlights the scope and scale of its content moderation efforts. This report includes data on the number of fake accounts removed and the amount of hate speech and spam removed. However, it does not provide a clear breakdown of how much of this content was misinformation or disinformation. In addition, like many other large internet platforms, Facebook has increasingly begun adopting artificial intelligence and machine learning-based tools to aid its content moderation operations. Research has indicated that automated tools are unreliable when it comes to identifying and removing content, especially content such as false information, which is not easily and clearly defined, and which often requires contextual understanding and subjective decision-making. This raises significant freedom of expression concerns, as the use of these imperfect automated tools has resulted in numerous erroneous takedowns of user content, as well as many instances of wrongful account termination or suspension. Companies that deploy automated tools for content moderation should therefore ensure that they keep humans in the loop when moderating such categories of content. These platforms should also institute a robust, timely and easily accessible appeals process for their users. This will enable users to seek remedy for any wrongful content removals and account suspensions (S. Singh 2019).
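As a rough illustration of the “humans in the loop” and appeals principles described above, the sketch below routes low-confidence automated decisions to human reviewers and lets any automated removal be appealed. The thresholds, names and flow are hypothetical and do not reflect any platform’s actual moderation pipeline.

```python
# A minimal sketch of human-in-the-loop moderation, assuming an upstream
# classifier that outputs a confidence score between 0 and 1.
from dataclasses import dataclass

AUTO_ACTION_THRESHOLD = 0.95   # act automatically only when very confident

@dataclass
class ModerationDecision:
    post_id: str
    action: str                # "remove", "keep" or "needs_human_review"
    appealed: bool = False

def moderate(post_id: str, model_score: float) -> ModerationDecision:
    """Route borderline cases to human reviewers instead of acting automatically."""
    if model_score >= AUTO_ACTION_THRESHOLD:
        return ModerationDecision(post_id, "remove")
    if model_score <= 1 - AUTO_ACTION_THRESHOLD:
        return ModerationDecision(post_id, "keep")
    return ModerationDecision(post_id, "needs_human_review")

def appeal(decision: ModerationDecision) -> ModerationDecision:
    """Any automated action can be appealed, which forces a human re-review."""
    decision.appealed = True
    decision.action = "needs_human_review"
    return decision
```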

In addition, in 2018 Mark Zuckerberg announced that the platform was going to focus on algorithmically promoting and amplifying “meaningful” and “authentic” interactions between users rather than advertisement- and popularity-driven content (Reuters 2018). As part of this effort, Facebook disclosed that it was downranking content it does not consider “authentic and meaningful” in the platform’s news feed, including false information. According to the company, this has resulted in an 80% reduction in the circulation of debunked and false posts across the platform (Phartiyal and Kalra 2019). However, there is little transparency around how much of this content is circulated or viewed by Indian users. Finally, the platform has also introduced new features, such as issuing warnings to users who try to share content that has been debunked by its fact-checking partners (ibid).
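Downranking can be illustrated with a deliberately simplified sketch: instead of removing a debunked post, its ranking score is multiplied by a demotion factor so it surfaces far less often in the feed. The factor and signals below are invented for illustration and are not Facebook’s actual ranking parameters.

```python
# Simplified, hypothetical illustration of downranking debunked content.
def rank_score(base_engagement_score: float, debunked_by_fact_checkers: bool) -> float:
    """Demote debunked content in feed ranking instead of removing it outright."""
    DEMOTION_FACTOR = 0.2  # invented value: a debunked post keeps 20% of its score
    if debunked_by_fact_checkers:
        return base_engagement_score * DEMOTION_FACTOR
    return base_engagement_score

print(rank_score(100.0, debunked_by_fact_checkers=True))   # 20.0
print(rank_score(100.0, debunked_by_fact_checkers=False))  # 100.0
```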

Companies such as Twitter and Google have also ramped up their efforts to block fake accounts and stem the spread of false information on their platforms. In addition, in 2018, Google partnered with fact-checking organisations to train over 8,000 Indian journalists to combat false information (“Google to Train 8,000” 2018). These platforms also worked with the ECI in the lead-up to the 2019 elections to monitor political ads and block defamatory, objectionable and misleading content (Shekhar 2018).

Users

Users play a central role in the false information ecosystem, as they are responsible for the mass dissemination of the content they receive. Research has suggested that, because trust in the media and other institutions is declining, citizens have turned to alternative sources of information such as social media platforms and their peers. Many users have a strong sense of trust in those in their social circles, and they are therefore often unwilling to believe that information shared by these individuals is wrong (Biswas 2018). In addition, as previously mentioned, it can be particularly difficult for first-time internet users to distinguish fact from fiction online (Doshi 2017).

Experts such as Samir Patil, the publisher of Scroll.in, an Indian news portal, have suggested that researchers and policymakers should turn to existing models for citizen education to tackle the spread of misinformation and disinformation in India (Patil 2019).

Investing in citizen engagement and education is going to be increasingly important for a number of reasons. First, content distribution and information access will continue to become cheaper. As a result, misleading information will continue to spread with greater speed and scale. Second, bad actors are continuously refining their tactics in order to run more complex misinformation and disinformation campaigns and to manipulate social media platforms in unique and influential ways (ibid). Citizen education programmes therefore need to emphasise the development of strong digital and media literacy skills so that users are well equipped to distinguish fact from fiction and to help prevent the further spread of harmful misinformation. These campaigns should also provide users with guidance on how to seek out more reliable information (Chan 2019).

 