18 Nov 21

Twitter's misinformation labels

In serving the public conversation, Twitter's goal is to make it easy to find credible information from authoritative sources and to limit the spread of potentially harmful and misleading content. In 2020, before I joined, Twitter introduced a visible annotation on Tweets known to contain misleading information. This is how the labels looked initially:

Original label design, as of March 2020

Labels give Twitter a way to move beyond the binary of leaving harmful content up or taking it down, and to address potentially misleading information in a way proportionate to the severity of harm it poses. When I joined the team, I was tasked with helping improve the design of the labeling system to reach that goal.

The original design had key problems we were trying to solve:

  1. All labels looked the same, even though internally they map to categories with different levels of harm. We wanted to make these categories transparent to readers.
  2. The design wasn't flexible enough for nuance. Labels were rendered as a single string of blue, bold text, which left the content strategy team little room to experiment with copy hierarchy (a rough data-model sketch follows this list).
  3. The labels didn't visually fit our evolving design system.
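To make the first two problems concrete, here's a minimal sketch of how a label could be modeled so that harm categories and copy hierarchy are explicit rather than baked into a single string. It's purely illustrative: the names (HarmCategory, LabelCopy, MisinfoLabel) and the category values are my own assumptions, not Twitter's actual schema or code.

```typescript
// Hypothetical sketch — not Twitter's actual implementation.

// Harm categories the policy team applies internally (assumed names).
type HarmCategory = "misleading" | "disputed" | "unverified";

// Structured copy gives content strategy room for hierarchy,
// instead of one bold blue string.
interface LabelCopy {
  headline: string;      // short, prominent line
  supporting?: string;   // optional secondary context
  ctaText: string;       // e.g. "Find out more"
  ctaUrl: string;        // link to curated, authoritative content
}

// A label on a Tweet, parameterized by category so the UI can vary
// treatment (icon, color, placement) by harm level instead of
// rendering every label identically.
interface MisinfoLabel {
  category: HarmCategory;
  copy: LabelCopy;
}

// Example: pick a visual treatment from the category.
function iconFor(label: MisinfoLabel): string {
  switch (label.category) {
    case "misleading":
      return "alert";
    case "disputed":
      return "warning";
    case "unverified":
      return "info";
  }
}
```

Separating the category from the copy is what lets both problems be addressed independently: policy can adjust categories while content strategy iterates on hierarchy.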

Small label, huge design problem

Even though labels are a small surface in the product, the complexity behind getting them right is surprising. This redesign was a huge cross-functional effort, with many key players across design, content strategy, policy, product, and engineering.

We explored a spectrum of designs (which, unfortunately, I can't share here!) and evaluated each of them over about a dozen rounds of critique and iteration.

We took some of the designs into qualitative research, and the following received positive feedback: participants felt they were clearer, more transparent, and more helpful than the original.

Alternatives that got positive feedback in qual research.

A/B testing

We ran an A/B test in production, for millions of customers, with control, A, and B variants. Here's the official Tweet announcing the experiment:

After running the experiment for a few months, we found that it was a success:

  1. The new designs improved all metrics we cared about, compared to the control.
  2. Surprisingly, the design with background colors didn't perform as well as the one with transparent backgrounds.
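For readers curious how this kind of variant comparison is typically checked, here's a minimal sketch of a two-proportion z-test on engagement rates. This is not Twitter's experimentation stack, and the numbers in the example call are placeholders for illustration, not the real results.

```typescript
// Illustrative only — not Twitter's actual analysis pipeline.

interface VariantStats {
  engagements: number; // e.g. taps on the label's "Find out more" link
  impressions: number; // Tweets shown with the label
}

// z statistic for the difference in engagement rate between a
// variant and control (|z| > 1.96 roughly means p < .05).
function twoProportionZ(variant: VariantStats, control: VariantStats): number {
  const p1 = variant.engagements / variant.impressions;
  const p2 = control.engagements / control.impressions;
  const pooled =
    (variant.engagements + control.engagements) /
    (variant.impressions + control.impressions);
  const se = Math.sqrt(
    pooled * (1 - pooled) * (1 / variant.impressions + 1 / control.impressions)
  );
  return (p1 - p2) / se;
}

// Placeholder numbers, purely for illustration.
const z = twoProportionZ(
  { engagements: 5200, impressions: 100000 },
  { engagements: 4800, impressions: 100000 }
);
console.log(z.toFixed(2));
```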

With this new confidence in the designs, we decided to roll out the new labels to all users on the platform. Here's the announcement Tweet:

Results

  1. The new designs improved engagement with the curated content.
  2. The new designs reduced engagement with misleading content.
  3. The new designs categorize labels more transparently according to potential harm.

Next steps

Even though this redesign is more successful than the original, there is still room to improve: one common piece of feedback is that the new component is too similar to Quote Tweets and other Tweet attachments, making it hard for users to tell them apart. We're already planning a new round of iteration for 2022, and I'll update this post as soon as the results come out.
