EU lawmakers say it’s time to go further on tackling disinformation

A major European Commission review of a Code of Practice aimed at combating the spread of disinformation online has concluded the self-regulatory instrument is failing to deliver enough transparency or accountability from the tech platforms and advertisers signed up to it.

EU lawmakers suggested today that the swathe of shortcomings identified in the current approach won't be fixed without legally binding rules.

How exactly they will seek to tackle disinformation in forthcoming legislative packages, such as the Digital Services Act or the European Democracy Action Plan, remains to be seen, however.

Signatories to the Code of Practice on Disinformation include Facebook, Google, Microsoft, Mozilla, TikTok and Twitter, along with EDIMA, the trade association representing online platforms.

A number of trade associations representing the advertising industry and advertisers are also signed up (namely: the European Association of Communications Agencies and the French, Czech, Polish and Danish national associations affiliated with it; IAB Europe; and the World Federation of Advertisers, plus its Belgian national association, the Union of Belgian Advertisers).

“The Code of Practice has shown that online platforms and the advertising sector can do a lot to counter disinformation when they are put under public scrutiny. But platforms need to be more accountable and responsible; they need to become more transparent. The time has come to go beyond self-regulatory measures. Europe is best placed to lead the way and propose instruments for more resilient and fair democracy in an increasingly digital world,” said Věra Jourová, VP for values and transparency, commenting on the assessment of the code in a statement.

In another supporting statement, Thierry Breton, commissioner for the Internal Market, added: “Organising and securing our digital information space has become a priority. The Code is a clear example of how public institutions can work more efficiently with tech companies to bring real benefits to our society. It is a unique tool for Europe to be assertive in the defence of its interests and values. Fighting disinformation is a shared responsibility, which the tech and advertising sector must fully assume.”

Must do better

On the positive side, the Commission review argues that the two-year-old Code of Practice on Disinformation has enabled “structured cooperation” with platforms which has boosted transparency and accountability (albeit, not enough), as well as providing a “useful” framework to monitor them and push for improvements in their policies on disinformation.

And, indeed, the Commission credits the Code with prompting “concrete actions and policy changes by the platforms aimed at countering disinformation”.

However, the list of shortcomings identified by the review is long.

This is not surprising given the degree of wiggle room inherent in the approach, as we said at the time it launched. tl;dr: Getting adtech giants to agree to broad-brush commitments to do a vague something about a poorly defined set of interlinked issues vis-à-vis information shaping and manipulation gave platforms plenty of space to cherry-pick reported actions and make a show of ‘complying’. The Code also contains pretty glaring gaps.

Two years on, the Commission agrees more effort is needed, and commissioners said today the EU executive will take steps to address the shortcomings in forthcoming legislation, without offering further detail.

The assessment groups the Code’s shortcomings into four broad categories:

  1. inconsistent and incomplete application of the Code across platforms and Member States;
  2. lack of uniform definitions;
  3. existence of several gaps in the coverage of the Code commitments;
  4. limitations intrinsic to the self-regulatory nature of the Code.

Among the laundry list of problematic issues it identifies are:

  • Technical disparities in what’s offered across EU Member States
  • Failure to distinguish between measures aimed at scrutinising ad placements on platforms’ own sites vs third party sites
  • No sign of consistent implementation of restrictions on ad accounts promoting verified fake/misleading info
  • Questions over how well platforms are collaborating with third-party fact checkers and disinformation researchers
  • Questions over whether limits on ad placement are being enforced against websites that blatantly purvey misinformation
  • A lack of effective and joined up participation across the adtech chain to enable enforcement of ad placement limits against bad actors
  • Insufficient implementation of ‘ad transparency’ policies for political ads and issue ads
  • Failure to ensure disclosure labels remain on ads that are organically shared
  • Limited functionality of the APIs offered for ad archives, and questions over the completeness and quality of the searchable information (a sketch of such a query follows this list)
  • A lack of uniform registration and authorisation procedures for political ads
  • Reporting on bot/fake account activity being aggregated and provided at a global level, with no ability to see reports at EU Member State level
  • A lack of reporting on bot/fake account activity involving domestic, rather than foreign, actors
  • Insufficient information about tools intended to help users find trustworthy content, including a lack of data to demonstrate efficacy
  • Lack of a universal, user-friendly mechanism for reporting disinformation and being adequately informed of the outcome
  • A lack of consistent use of fact-checkers across platforms and in all EU Member States and languages
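
To make that API complaint concrete, here’s a minimal sketch of what querying an ad archive programmatically looks like, loosely modelled on Facebook’s Ad Library API. The endpoint version, parameter names and fields below are assumptions based on how that API was documented around this period and may differ in practice; the access token is a hypothetical placeholder.

```python
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # hypothetical placeholder, not a real token

def search_political_ads(search_terms: str, country_code: str) -> list:
    """Fetch one page of archived political ads matching a search term."""
    # Endpoint and parameter names are assumptions modelled on Facebook's
    # Ad Library API circa 2020; they may have changed since.
    resp = requests.get(
        "https://graph.facebook.com/v8.0/ads_archive",
        params={
            "access_token": ACCESS_TOKEN,
            "search_terms": search_terms,
            "ad_type": "POLITICAL_AND_ISSUE_ADS",
            # Queries are scoped per country, one reason pan-EU monitoring
            # across all Member States is laborious.
            "ad_reached_countries": f'["{country_code}"]',
            "fields": "page_name,ad_creative_body,spend,impressions",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

# Example: ads mentioning "climate" that reached users in France.
for ad in search_political_ads("climate", "FR"):
    # "spend" and "impressions" come back as coarse ranges (lower/upper
    # bounds), not exact figures -- the kind of limited, low-resolution
    # data the review flags.
    print(ad.get("page_name"), ad.get("spend"), ad.get("impressions"))
```

Researchers’ complaints in the review map onto exactly these constraints: per-country scoping, paginated and rate-limited results, and spend and impression figures returned only as broad ranges.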

The assessment also identifies gaps in what the Code covers, namely types of manipulative online behaviour that fall outside its current scope: hack-and-leak operations; impersonation; the creation of fictitious groups or fake parody accounts; the theft of existing artefacts; deep fakes; the purchase of fake engagements; and the involvement of influencers.

“The experience gained through the monitoring of the Code shows that the scope of its commitments may be too narrow,” it suggests, adding: “The vagueness of the Code’s commitments in this respect creates serious risks of incomplete action against disinformation.”

Microtargeting of political ads is also discussed in the assessment as a gap.

On this it writes:

The Code presently does not prohibit micro-targeting or restrict the range of targeting criteria that platforms may offer with respect to paid-for political content, although one of the objectives set out for the Code in the April 2018 Communication was “restricting targeting options for political advertising.” Recent research shows that the vast majority of the public are opposed to the micro-targeting concerning certain content (including political advertising) or based on certain sensitive attributes (including political affiliation). Further reflections in this area will be pursued without prejudice to any future policy on micro-targeting of commercial ads.

The Commission also notes that the European Cooperation Network on Elections is “currently investigating the issue in depth”. “This work will inform the European Democracy Action Plan, which will look into the issue of micro-targeting in political campaigns to ensure greater transparency on paid political advertising,” it adds.

The review also points to a major gap around the fairness of online political ads — given the lack of rules at EU level establishing spending limits for political advertising (or “addressing fair access to media for political parties or candidates participating in the elections to the European Parliament”), combined with a variable approach from platforms to whether or not they allow political ads.

“The issue of online application of laws in the electoral context and their modernisation is addressed in the work of the European Cooperation Network on Elections. The European Democracy Action Plan will look into solutions to ensure greater transparency of paid political advertising as well as clearer rules on the financing of European political parties,” it adds.

Another major deficiency of the Code the Commission assessment identifies is the lack of adequate key performance indicators (KPIs) to enable effective monitoring.

The Commission further identifies a number of inherent limits to the self-regulatory approach — such as a lack of universal participation creating inequalities and variable compliance; and the lack of an independent oversight mechanism for monitoring performance.

It also highlights concerns about risks to fundamental rights from the current approach — such as the lack of a complaints procedure or other remedies.

In conclusion, the Commission suggests a number of improvements for the code — such as “commonly-shared definitions, clearer procedures, more precise commitments and transparent key performance indicators and appropriate monitoring” — as well as calling for further effort to broaden participation, in particular from the advertising sector.

It also wants to see a more structured model for cooperation between platforms and the research community.

“At present, it remains difficult to precisely assess the timeliness, comprehensiveness and impact of platforms’ actions, as the Commission and public authorities are still very much reliant on the willingness of platforms to share information and data. The lack of access to data allowing for an independent evaluation of emerging trends and threats posed by online disinformation, as well as the absence of meaningful KPIs to assess the effectiveness of platforms’ policies to counter the phenomenon, is a fundamental shortcoming of the current Code,” it notes. 

“A structured monitoring programme may constitute a pragmatic way to mobilise the platforms and secure their accountability. The programme for monitoring disinformation around COVID-19 foreseen in the June 2020 Communication will be an opportunity to verify the adequacy of such an approach and prepare the ground for further reflection on the best way forward in the fight against disinformation,” it adds.

A consultation on the Commission’s European Democracy Action Plan concludes next week, and that’s one vehicle where it might seek to set down more fixed and measurable counter-disinformation requirements.

The Digital Services Act, meanwhile, which is slated to tackle platform responsibilities, is due in draft by the end of the year.

First COVID-19 disinformation reports

Also today, the Commission published the first monthly reports from Facebook, Google, Microsoft, TikTok, Twitter and Mozilla on action taken against coronavirus-related disinformation.

In June it pressed platforms for more detailed data on actions being taken to promote authoritative content; improve users’ awareness; and limit coronavirus disinformation and advertising related to COVID-19.

“Overall, they show that the signatories to the Code have stepped up their efforts,” the Commission writes today, noting that all platforms have increased the visibility of authoritative sources — by giving prominence to COVID-19 information from the WHO and national health organisations, and by deploying “new tools and services to facilitate access to relevant and reliable information relating to the evolution of the crisis”.

Here, too, though, it notes that some of the products or services were not deployed in all EU countries.

“Platforms have stepped up their efforts to detect cases of social media manipulation and malign influence operations or coordinated inauthentic behaviour. While platforms detected a high number of content including false information related to COVID-19, they did not detect coordinated disinformation operations with specific focus on COVID-19 run on their services,” it adds.

On ads, it says the reports highlight “robust actions” taken to limit the flow of advertising to third-party sites purveying COVID-19 disinformation, while providing free COVID-related ad space to government and healthcare organisations.


