If you’re changing content as part of a whole page redesign, you won’t be able to test the effect of the content change in isolation. But when you’re only making changes to content, the right analytics and behavioural insights should let you measure the effect of that change.

How to measure content
[Image: Visualising results will help you tell your story. Photo by Carlos Muza on Unsplash]

What you measure will be determined by the impact you’re trying to have. And the mistake people often make is not setting a target for their content up front. Without a target metric or purpose in mind, it’s impossible to review your content to see whether it did what you wanted it to: firstly, because you don’t know what you wanted it to do, and secondly, because you didn’t work out up front what you could measure, and how.

To set good targets up front, you need to know the problem you’re trying to solve. Once you know that, you can think about how you’ll know when you’ve solved it. You’ll also need the capability to measure: without access to data or analytics it becomes much harder (but not impossible) to measure effectiveness. Ideally you’ll also combine quantitative data with more qualitative insights, because data will tell you what happened but not why.

Here are a few ideas of how to measure content for different problems:

1. Awareness

In this instance you want to make people aware of your product or service, or draw a user’s attention to particularly important information. The key metric here is visibility: you need more people to see or find your content. The isolated content changes you could make to drive traffic include:

  • Navigation labels
  • Page headers
  • Metadata
  • Clearer preceding call to action labels
  • Hyperlinks or related links
  • Clearer categorisation of help content (if your content is help-related)
  • Exposed or surfaced help
  • Social sharing links

How will you know if it’s worked?

Metric-driven measurement here could include page views, click-throughs, site traffic and social shares, or a drop in calls and live chats to your contact centre on a particular topic. If it’s navigation labels you’ve changed, you could also run tree tests.
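
If your analytics tool lets you export daily figures, even a very simple before/after comparison can put a number on the change. Here’s a minimal sketch in Python, assuming a hypothetical CSV export with ‘date’ and ‘pageviews’ columns (the column names, dates, and figures are all made up for illustration):

```python
# A minimal sketch of a before/after comparison for awareness metrics.
# Assumes a daily CSV export from your analytics tool with "date" and
# "pageviews" columns -- the data below is made up for illustration.
import csv, io
from datetime import date

CHANGE_DATE = date(2019, 6, 3)  # the day the new content went live (assumption)

export = io.StringIO("""date,pageviews
2019-06-01,410
2019-06-02,395
2019-06-03,502
2019-06-04,488
""")

before, after = [], []
for row in csv.DictReader(export):
    bucket = before if date.fromisoformat(row["date"]) < CHANGE_DATE else after
    bucket.append(int(row["pageviews"]))

avg_before, avg_after = sum(before) / len(before), sum(after) / len(after)
print(f"Avg daily pageviews: {avg_before:.0f} before vs {avg_after:.0f} after "
      f"({avg_after / avg_before - 1:+.1%})")
```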

For more qualitative insight you could run some basic guerrilla research, giving participants a task to find a particular piece of information. You could also ask your customer call centres to record anecdotally whether related queries or complaints have increased or decreased after a set period of time. Depending on the volume of traffic your site receives, it may take a while to see strong results this way.

2. Engagement

Better engagement isn’t just about time spent on a page — there are many other ways we can measure it. Often when we want to increase engagement it’s because we want to keep people onsite for longer, so they’re more likely to buy or stay loyal to a brand. But sometimes we just need to make sure they’re reading what we want them to read because it’s important, or we want them to interact with something or provide information. Changes to content might include:

  • Product or pricing information
  • Help content
  • T&Cs or legal information
  • Forms
  • Article content
  • Proposition messaging
  • Videos
  • Social media posts

How will you know if it’s worked?

Metrics you might look at include page dwell times, video views, likes or shares, and page bounce rates (which you’d hope to see decrease). You should also look at how many users click through to other parts of your site (versus leaving it), as well as form completions and data or comment submissions.
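
To make these metrics concrete, here’s a rough sketch of how bounce rate and dwell time fall out of raw pageview events. The event format and the sample data are assumptions; most analytics tools report both figures out of the box, so treat this as an illustration of what the numbers mean rather than something you’d need to build:

```python
# A rough sketch of computing bounce rate and dwell time from raw
# pageview events. The (session_id, timestamp, url) format and the
# sample data are assumptions for illustration.
from collections import defaultdict

events = [
    ("s1", 1000, "/pricing"), ("s1", 1090, "/signup"),
    ("s2", 2000, "/pricing"),                          # one page only: a bounce
    ("s3", 3000, "/pricing"), ("s3", 3200, "/help"),
]

sessions = defaultdict(list)
for session_id, timestamp, _url in events:
    sessions[session_id].append(timestamp)

bounces = sum(1 for stamps in sessions.values() if len(stamps) == 1)
print(f"Bounce rate: {bounces / len(sessions):.0%}")

# Dwell time per session: first to last pageview. Single-page sessions
# are excluded because their dwell time can't be observed this way.
dwells = [max(s) - min(s) for s in sessions.values() if len(s) > 1]
print(f"Avg dwell time: {sum(dwells) / len(dwells):.0f}s")
```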

Qualitative insights are always useful, even when you do have data, for understanding user behaviour. Consider usability testing, or reviewing heatmaps, which will show you which bits of content users are spending the most time on. But use heatmaps with caution: they rarely tell the full story, and not all analytics tools make it clear to users that their behaviour is being recorded.

3. Comprehension

It’s all very well driving more traffic and engagement, but if users don’t understand what they’re reading, then they won’t understand what they’re buying or signing up to either. Confusion can lead to a loss of custom, complaints, and negative sentiment.

Content changes can be tested on many parts of your site to improve understanding, such as:

  • Product or pricing information
  • Help content
  • Legal wording and T&Cs
  • Special offer wording
  • Questions in forms
  • Service emails

How will you know if it’s worked?

From a data point of view you could monitor calls to your call centre, live-chat starts, on-page feedback submissions and click-throughs to help. You’d expect these to decrease. You might also be able to look at brand sentiment through social media channels or complaints data.

Testing comprehension through usability testing is great. Set participants a task, then once they’ve completed it, ask questions to see what they understood from the content you’re testing. Cloze tests or highlighter tests are another simple way to test comprehension.
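
As an illustration, a Cloze test is simple enough to generate programmatically: delete roughly every nth word from the content you’re testing and ask participants to fill in the blanks. This sketch assumes a made-up passage and the common rule of thumb of removing every sixth word; scoring thresholds vary between practitioners, so treat them as a guide:

```python
# A basic Cloze test generator: blank out every nth word of the content
# being tested, then score how many blanks participants fill correctly.
# The passage and the every-6th-word rule are assumptions for illustration.

def make_cloze(text: str, n: int = 6):
    """Return the gapped text and the list of removed words (the answer key)."""
    words = text.split()
    answers = []
    for i in range(n - 1, len(words), n):
        answers.append(words[i])
        words[i] = "_____"
    return " ".join(words), answers

def score(responses, answers):
    """Fraction of blanks filled with the exact missing word."""
    correct = sum(r.strip().lower() == a.strip(".,;:").lower()
                  for r, a in zip(responses, answers))
    return correct / len(answers)

gapped, key = make_cloze("Your policy covers accidental damage to your home "
                         "and its contents, but not wear and tear.")
print(gapped)  # "Your policy covers accidental damage _____ your home ..."
print(f"Score: {score(['to', 'and'], key):.0%}")
```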

[Image: A/B tests are a great way to test content changes. Image courtesy of FezBot2000 on Unsplash]

4. Conversion

Copy changes can be one of the most effective ways to increase conversion. Here are some of the things you might want to test or optimise:

  • Call to action labels
  • Marketing proposition messaging
  • Price and product information
  • Help copy in forms
  • T&C wording

Help copy in forms and T&Cs are often neglected, but if a user is anxious about not knowing what’s expected of them, or what they’re signing up to, they can drop out at the final hurdle. Making sure your copy is clear and honest is the best way to build trust and improve conversion.

How will you know if it’s worked?

Click-throughs, page progression, quotes or leads, sales, renewals, and newsletter sign-ups are all quite standard metrics to use, depending on the copy you’re changing.
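
Page progression in particular lends itself to a simple funnel report: how many users reach each step of the journey, and where they drop out. A quick sketch with made-up step names and numbers:

```python
# A simple funnel / page-progression report. Step names and user
# counts are illustrative.
funnel = [
    ("Product page", 10_000),
    ("Quote form started", 4_200),
    ("Quote form completed", 2_100),
    ("Purchase", 1_300),
]

prev = funnel[0][1]
for step, users in funnel:
    print(f"{step:<22} {users:>6,}  ({users / prev:.0%} of previous step)")
    prev = users

print(f"\nOverall conversion: {funnel[-1][1] / funnel[0][1]:.1%}")
```

The step with the steepest drop (here, the quote form) is where your copy changes are likely to earn the most.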

Qualitative insights are much more useful for seeing exactly which copy is causing a user to drop out of a journey. On a web form, for example, identifying the field or bit of copy that’s causing users to panic or feel frustrated and leave the page will give you the insight you need to iterate that copy. This can be done by watching users interact with the site, as well as by looking at tools such as heatmaps.

Conversion is often driven by a combination of factors, but usability, comprehension and task completion are important elements, and working on these metrics can have a positive impact on conversion rates too.


5. Task completion

Getting someone through their task as quickly as possible can be a key goal for many sites that want to ensure an efficient and effective user experience. The balance you must find is between speed and comprehension. Some experiences shouldn’t be fast: purchasing insurance too quickly, for example, could leave users wary about whether everything is covered. If you’re aiming for speed, your content needs to be super-clear and transparent. The other way to measure task completion is simply whether someone can complete their task at all. Content you might test for task completion could include:

  • Online self-service (for example amending account details)
  • Purchase or renewal journey
  • Navigation labels to help users find information (for example help content or a phone number)

How will you know if it’s worked?

If you’re looking at completion time then, as with all metrics, you’ll need a benchmark to start from, which means gathering some ‘average time to task’ figures through user testing. That gives you a way to assess whether your content change has improved that time. Some great work has been done by the GDS team on benchmarking.
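
As a sketch, comparing a baseline against post-change timings needs nothing more than a few summary statistics. The figures below are made up; in practice they’d come from your usability sessions:

```python
# A minimal sketch of benchmarking 'average time to task'. The timings
# (seconds per participant, from moderated usability sessions) are
# invented -- record a baseline before the change, then compare after.
import statistics

baseline = [182, 210, 167, 255, 198, 174, 231, 189]   # before the change
after    = [141, 158, 129, 197, 150, 162, 138, 171]   # after the change

for label, times in (("Baseline", baseline), ("After change", after)):
    print(f"{label:<13} mean={statistics.mean(times):.0f}s  "
          f"median={statistics.median(times):.0f}s  "
          f"stdev={statistics.stdev(times):.0f}s")
```

With samples this small, report the spread alongside the average so a single slow participant doesn’t skew the story.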

If you don’t have metrics to start from, then usability testing is your best option, but you’ll be looking at whether participants are able to complete the task at all, which arguably you could learn from site metrics alone.

How you actually ‘test’ your content may depend on what tech capabilities you have.

You might be able to A/B test different versions of the same copy element (so some of your users see one version and the rest see the other). If you can’t run such tests, you might just have to make a change, monitor the results over a set period of time, then revert or iterate if you don’t get the outcome you need.
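
If you do have the capability, an A/B test has two halves: giving each user a stable variant, and checking whether the difference you observe is likely to be real rather than noise. Here’s a rough sketch of both, using a hash for the split and a standard two-proportion z-test for the check; all the conversion numbers are illustrative:

```python
# A sketch of the two halves of a simple A/B test: stable variant
# assignment, and a two-proportion z-test to check whether the observed
# difference in conversion is likely to be real. Numbers are illustrative.
import hashlib
from math import sqrt, erf

def variant(user_id: str) -> str:
    """Hash the user ID for a stable 50/50 split (no coin-flipping)."""
    return "A" if hashlib.sha256(user_id.encode()).digest()[0] < 128 else "B"

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic and two-sided p-value for variant B's rate vs A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

print(variant("user-42"))  # the same user always sees the same variant
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2380)
print(f"z={z:.2f}, p={p:.3f}")  # p below 0.05 suggests a real difference
```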

One of the great things about content is that it’s often a relatively easy and cheap way to make small changes but see big results. If you do see big results, then share them widely. To create content advocates, you need to demonstrate the value — so create reports, dashboards, or a ‘successful content tests’ Workplace channel…whatever it takes to tell your story.

This post was originally published on Medium.