My name is Andrew Janis, and you haven’t seen me in this space before because I’m usually wearing the marketing hat at Evantage. That often means I’m knee-deep in marketing analytics, occasionally working with someone on our UX team to analyze usability test results in the context of real-world behavior.
There’s a lot to be gained from pairing the qualitative, user-centric viewpoint of a situation with a quantitative, data-centric viewpoint of the same situation. So much so that I’m speaking about it at eMetrics San Jose next week. My talk is called “Provide Narratives, Not Numbers,” but it boils down to this: use data to explain the customer experience.
Here’s the thing: my talk is designed for web analytics professionals, who start with the data and try to interpret it for non-analysts. The people reading this blog, however, are primarily UX professionals, so a different angle is called for. Today, I’m going to talk about a number of straightforward web analytics metrics that can help you, the UX professional, make your case.
One quick note: the point of using these metrics is NOT to take a point of view and then go hunting for supporting evidence in the data. That’s exactly what not to do. These are, however, all great metrics for supporting your qualitative data gathering; alternatively, if you have the data at the beginning of a project, you can use it to guide your investigation.
Without further ado, here are five easy metrics for you, the UX professional:
Bounce Rate
Bounce rate measures the percentage of visits that consist of a single page view: a person visits your site, then leaves without clicking another link. A 20-30% bounce rate is typical; anything higher should immediately raise an eyebrow, because it’s an instant clue that your landing page is not connecting with users and drawing them into the site. Hint: look at bounce rates for specific landing pages, not the site overall, and if possible, for specific user groups too. A good way to segment user groups is by referral source; for example, you might be able to identify some referring sites as sending traffic from kids and others as sending parents.
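To see how that segmentation plays out, here’s a rough sketch in Python with made-up session records; the landing pages and referrer names are invented, and real numbers would come from your analytics export.

```python
from collections import defaultdict

# Hypothetical session records: (landing_page, referrer, pageviews).
sessions = [
    ("/landing-a", "kidsgames.example", 1),
    ("/landing-a", "kidsgames.example", 4),
    ("/landing-a", "parentmag.example", 1),
    ("/landing-a", "parentmag.example", 1),
    ("/landing-b", "parentmag.example", 3),
    ("/landing-b", "kidsgames.example", 1),
]

def bounce_rates(sessions):
    """Bounce rate per (landing page, referrer) segment:
    the share of visits with exactly one pageview."""
    totals = defaultdict(int)
    bounces = defaultdict(int)
    for page, referrer, pageviews in sessions:
        key = (page, referrer)
        totals[key] += 1
        if pageviews == 1:
            bounces[key] += 1
    return {key: bounces[key] / totals[key] for key in totals}

for (page, referrer), rate in sorted(bounce_rates(sessions).items()):
    print(f"{page} via {referrer}: {rate:.0%}")
```

Even in this toy data, the same landing page bounces at different rates depending on who sent the visitor, which is the whole point of segmenting.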
Page Reloads
High page-reload numbers on a page where users are required to enter information often indicate that they are having trouble entering it and that the page is reloading with an error message.
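As a rough illustration, here’s one way you might estimate a reload rate from raw pageview streams. Both the data and the heuristic (treating a back-to-back repeat of the same URL as a reload) are assumptions for the sake of the example:

```python
# Hypothetical per-session pageview streams; a repeat of the same URL
# back to back is treated as a reload (e.g. a form resubmitted with an error).
streams = [
    ["/signup", "/signup", "/signup", "/welcome"],  # two reloads, then success
    ["/signup", "/welcome"],                         # no trouble
    ["/signup", "/signup"],                          # one reload, then gone
]

def reload_rate(streams, page):
    """Reloads of `page` divided by total views of `page`."""
    views = sum(stream.count(page) for stream in streams)
    reloads = sum(
        1
        for stream in streams
        for prev, cur in zip(stream, stream[1:])
        if prev == cur == page
    )
    return reloads / views

print(f"{reload_rate(streams, '/signup'):.0%} of signup views are reloads")
```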
Abandonment Rate
Abandonment rate is a great metric for telling you when a page isn’t working. Typically you would look at it as part of a conversion funnel; any page in that funnel with a high abandonment rate has a problem you need to solve. Keep in mind that there are certain pages where you want people to exit the site, or at least where a high abandonment rate is OK; a prime example is a “Thank You” page after a form submission. If you have field-level abandonment data, which tells you exactly where the person left, all the better.
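To make the funnel idea concrete, here’s a quick sketch with invented step names and counts; each step’s abandonment rate is simply the share of visitors who reached it but never reached the next step.

```python
# Hypothetical conversion funnel: visitors reaching each step.
funnel = [
    ("Cart", 1000),
    ("Shipping", 620),
    ("Payment", 580),
    ("Confirm", 220),
    ("Thank You", 215),
]

def abandonment_rates(funnel):
    """Share of visitors who leave at each step before reaching the next."""
    return [
        (step, (count - next_count) / count)
        for (step, count), (_, next_count) in zip(funnel, funnel[1:])
    ]

for step, rate in abandonment_rates(funnel):
    print(f"{step}: {rate:.1%} abandon")
```

In this toy funnel, the Payment step loses far more visitors than any other, so that’s the page a UX investigation would start with.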
Time on Page
Time on page is a double-edged sword and can mean different things depending on the situation. If you have a text-heavy page, a high time on page might be fine, assuming your goal is to have people read your content. If your page has a call to action and people are spending lots of time on it, that might point to a problem, especially if you can see that people aren’t clicking the call to action.
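One way to operationalize that last point is to flag pages where time on page is high but the call-to-action click-through rate is low. The page names, numbers, and thresholds below are all made up for the sketch:

```python
# Hypothetical per-page stats: average time on page (seconds) and the
# click-through rate on the page's call to action (None = no CTA).
pages = {
    "/pricing":  {"avg_time_s": 95,  "cta_ctr": 0.02},
    "/blog/why": {"avg_time_s": 210, "cta_ctr": None},  # long-form content
    "/signup":   {"avg_time_s": 140, "cta_ctr": 0.31},
}

def flag_stuck_pages(pages, time_threshold_s=90, ctr_threshold=0.05):
    """Pages where visitors linger but rarely click the CTA: high time on
    page is only a red flag when paired with a weak click-through rate."""
    return [
        path for path, stats in pages.items()
        if stats["cta_ctr"] is not None
        and stats["avg_time_s"] > time_threshold_s
        and stats["cta_ctr"] < ctr_threshold
    ]

print(flag_stuck_pages(pages))  # → ['/pricing']
```

Note that the text-heavy blog page is excluded entirely: with no call to action, a long dwell time is the desired outcome, not a warning sign.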
Click Path Data
Click path data is often difficult for web analysts to derive actionable insights from. However, as every user experience designer knows, what happens in the lab doesn’t always mirror real life. Click path data can be a good reality check, a way to figure out whether what people do in the lab is what they do when no one’s watching. Case in point: we recently conducted user testing on a site where people were asked to complete a specific task. In testing, seven of eight participants said they would click the “instructions” link. In real life, we could see that fewer than 5% of users clicked it.
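That reality check is easy to compute once you have raw click paths: just measure what share of real sessions ever touch the link in question. A minimal sketch, with invented paths and an assumed "/instructions" URL:

```python
# Hypothetical click paths: the ordered pages each session visited.
paths = [
    ["/task", "/task/step-1", "/task/step-2"],
    ["/task", "/instructions", "/task/step-1"],
    ["/task", "/task/step-1"],
    ["/task", "/task/step-2", "/task/done"],
    ["/task"],
]

def share_visiting(paths, page):
    """Fraction of sessions whose click path ever touches `page`."""
    hits = sum(1 for path in paths if page in path)
    return hits / len(paths)

rate = share_visiting(paths, "/instructions")
print(f"{rate:.0%} of sessions opened the instructions")  # → 20%
```

Comparing that number against what participants said in the lab is exactly the kind of narrative-plus-data pairing the talk is about.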
So there you have it. What are some of the metrics that YOU use to validate and support your findings?