
Newsroom Innovation - Best Practices for Measuring Success Through Survey Analysis

By Lynette Chen posted 11-17-2017 04:27 PM

  

Unlock the science behind obtaining audience feedback, and discover what they really thought of your innovation project.

The purpose of many newsroom innovation projects, like those the Guardian Mobile Innovation Lab carries out, is to test new formats and features that deliver news in new and better ways. To measure the success of these projects, it’s important to understand whether people engaged with the format in the way you intended, and also what they thought of the new experience. A great way to get that understanding is to ask your users for feedback, which the mobile lab does at the end of each experiment by sending them a survey.

To help ensure that their surveys capture the data necessary for proper analysis, the lab partners with MaassMedia, an independent, Philadelphia-based specialty analytics consultancy where I am a Senior Digital Analyst.

In this article, we share best practices we’ve developed for capturing the data essential to getting meaningful insights and making actionable recommendations based on responses.

It’s important to note that newsroom innovation projects require a unique feedback survey structure, mainly to account for the introduction of new technology into the user experience, as well as to account for the timing of news events and the range of reactions users might have to coverage of an event. Over a span of more than 15 experiments, we’ve worked with the lab to continuously improve and iterate on the feedback survey format.

Each survey we design together includes a combination of questions, some of which appear in all surveys and some that are unique to each experiment. The handful of consistent questions in every survey allows us to compare results across a variety of experiments. The unique questions ask for in-depth feedback on users’ satisfaction with and understanding of the new features and functionality introduced in each experiment.

***

Best Practices for Survey Analysis

1. Have a few consistent, high-level questions that are directly tied to your team’s goals

It is important to include a consistent set of questions in each survey that tie directly to the team’s goals. We suggest using dichotomous (yes or no) or numerical-scale answer choices so the results can become a quantifiable KPI measured over time or across different experiments.

For the lab, a key goal of their experiments is to understand whether or not audiences have an appetite for engaging with news in new formats that take better advantage of the capabilities of mobile devices. Additionally, they want to know if the experiments provide value and are useful and interesting to users. To track against these goals, the lab asks a few questions consistently in every survey. We call them “success indication questions.”

 

 
Fig. 1. An example of a question included in a survey for one of the Guardian Mobile Innovation Lab’s experiments with notifications.

If the majority of respondents answer “yes” to the question, it is a strong indication that the experiment was a success, because they’ve said that they’d be interested in receiving notifications again in the future.

 

Fig. 2. Another example from a survey after a notifications experiment by the Guardian Mobile Innovation Lab.

Asking whether the alerts were useful and interesting is important to the lab because these are key elements of the experiments’ value proposition.
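As a rough illustration, a consistent yes/no success indication question like the one in Fig. 1 can be rolled up into a single KPI per experiment. The sketch below uses Python with pandas; the file name and column names (“experiment”, “would_receive_again”) are hypothetical stand-ins, not the lab’s actual export.

```python
# A rough sketch of turning a consistent yes/no answer into a per-experiment KPI.
# File and column names are hypothetical; each row is one survey response.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")

pct_yes = (
    responses["would_receive_again"]
    .str.strip().str.lower().eq("yes")     # True for every "Yes" answer
    .groupby(responses["experiment"])      # roll up per experiment
    .mean()                                 # share of "yes" answers
    .mul(100).round(1)
    .rename("pct_yes")
)

print(pct_yes.sort_values(ascending=False))
```

Because the same question is asked after every experiment, this one number can be tracked over time and compared across very different formats.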

2. Include granular questions unique to each experiment that provide context for the consistent high-level questions

Detailed questions about the content or functionality of a news experiment will provide context for the responses to the more general questions. For example, if a user answered that an experiment wasn’t useful to them, it would be essential to include questions that help reveal why. By asking these granular questions, the team can better pinpoint the underlying causes for the success or failure of an experiment.

Additionally, responses from these questions will provide context for the behavioral data points collected in your data analytics platform. Google Analytics data, for example, shows how users interacted with the experiment, but not the reasons behind that engagement. For instance, an analysis of Google Analytics data may show that few users engaged with an “Open live blog” button in an experimental web notification. Without capturing survey feedback, though, it is difficult to determine if users didn’t tap the button because they didn’t understand what it was for, or if they were already satisfied with the amount of information presented in the notification.

Asking granular questions allows you to validate or disprove any assumptions the team makes about user behavior. Below are a few example questions from a lab survey:

 

Fig. 3. The question above was asked about a read-and-watch live video feature on the Guardian’s live blog for the Inauguration.


Fig. 4. To understand specifically why a user may have used the “Undo” functionality offered in the live US presidential election alerts, a question was added to the survey.

3. Include demographic questions to facilitate segmentation of survey data

Segmenting users in data analysis is highly valuable because it can reveal insights about subsets of your audience. Common demographic questions include ones about age, gender, geography, and brand loyalty.

The same segments used to analyze survey data can also be applied to behavioral data to gain a holistic understanding of the experience of a subset of the audience. For example, you may want to segment survey data and Google Analytics data by age to reveal any differences in how groups interacted with an experiment.

An important demographic question the lab always asks is about loyalty, because they believe that it’s essential to consider a user’s familiarity with Guardian content when assessing success. Analyzing success based on the loyalty of the audience segment could also give teams signals about features and functionality that might appeal to their subscribers.

 

Fig. 5. By asking the question above, the data collected in subsequent questions in the same survey can be segmented by various levels of Guardian loyalty.
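To make the segmentation concrete, here is a minimal sketch, again with hypothetical column names, that groups responses by the loyalty answer and compares a success indicator across segments.

```python
# A minimal segmentation sketch; column names are hypothetical.
# "guardian_loyalty" holds the answer to the loyalty question (e.g. "Daily",
# "Weekly", "Rarely"); "alerts_useful" holds a "Yes"/"No" success answer.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")

by_loyalty = (
    responses
    .assign(found_useful=responses["alerts_useful"].str.lower().eq("yes"))
    .groupby("guardian_loyalty")
    .agg(respondents=("found_useful", "size"),
         pct_found_useful=("found_useful", "mean"))
)
by_loyalty["pct_found_useful"] = (by_loyalty["pct_found_useful"] * 100).round(1)
print(by_loyalty)
```

The same loyalty buckets can then be recreated as segments in your analytics platform so behavioral and survey data are cut the same way.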

4. Exclude less actionable questions to balance survey length

While it is very tempting to ask users a lot of questions simply out of curiosity, there is a potential tradeoff with response rate. Lengthy surveys may be off-putting to respondents, especially if they’re responding from a phone, which is why it is important to consider the marginal benefit of each extra question. Delivering surveys solely through mobile is also a relatively new phenomenon, so there is less research on the optimal survey length for mobile users versus desktop users. However, it is reasonable to expect that respondents on mobile devices have a lower tolerance for long surveys, given the small screen size and the likelihood that they will be in transit.

There has been some research on ideal survey length for mobile users, though: based on results from more than nine million surveys, On Device Research puts the optimal mobile survey length at 15 questions, with each question beyond the 15th decreasing the response rate by 5–10%. If you find that you need to ask more questions that tie to your goals, a better option is to deliver a follow-up survey to a more targeted group of respondents from the first survey.

We analyzed ten lab surveys, and our initial findings appear to validate On Device Research’s recommendation: in the graph below, the surveys with more than 15 questions form a cluster with lower completion rates than those with fewer than 15 questions.

 

Fig. 6. A high survey completion rate is desirable because the larger the number of respondents, the more representative the results will be. There are numerous tools and calculators to help determine your ideal sample size by taking into consideration the population size, margin of error, and confidence level.

It’s important to note that feedback surveys for news-driven innovation experiments differ from other kinds of surveys, such as customer satisfaction surveys, which can be distributed continuously until you reach your ideal sample size. The lab’s surveys are delivered to participants only once, shortly after the end of the news event that drove the experiment. Even a small number of responses can help the team understand the user experience; however, it’s important not to draw broad or permanent conclusions from survey results without a statistically representative sample size.
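For reference, the calculation those sample size tools perform can be sketched in a few lines. The example below uses the standard normal-approximation formula with a finite population correction; the subscriber count, margin of error, and confidence level shown are illustrative, not figures from the lab’s experiments.

```python
# A minimal sketch of a standard sample-size calculation: normal approximation
# plus a finite population correction. All inputs here are illustrative.
from math import ceil
from statistics import NormalDist

def required_sample_size(population, margin_of_error=0.05,
                         confidence=0.95, proportion=0.5):
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)   # ~1.96 for 95%
    n_infinite = (z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    # Correct for a known, finite number of experiment subscribers.
    n = n_infinite / (1 + (n_infinite - 1) / population)
    return ceil(n)

# e.g. 5,000 experiment subscribers, +/-5% margin of error, 95% confidence
print(required_sample_size(5000))   # roughly 357 completed responses
```

Comparing that figure with the completion rates you actually see is a quick way to judge how much weight to put on any one survey’s results.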

5. Use screenshots to clarify the questions

Users may be unfamiliar with the language or naming conventions your team has established for new features or formats. Thus, when asking detailed questions about particular aspects of an experiment, it’s helpful to include screenshots for clarification. Including screenshots will help minimize the chances of receiving inaccurate answers from survey respondents.

 

Fig. 7. The question above was asked to better understand the user experience of the lab’s Inauguration mini live player.

6. Aside from free-form or conditional questions, make all questions mandatory

Not every question in a survey can be answered by all respondents. For example, a question about why a user interacted with a certain feature would not be applicable to someone who did not use it or sign up for it. Instead of making these conditional questions optional, which can muddy the insights, we recommend keeping all questions mandatory but adding an option that serves as the alternative, such as “N/A” or “I didn’t see it”. If you don’t build your questions and answers this way, you might make incorrect assumptions about why someone chose not to answer a question.

 

Fig. 8. The question above about using the Inauguration Shifting Lenses feature included “I didn’t know how to view the other lenses” as an answer choice, so we would not have to make assumptions about respondents who did not answer.
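Analytically, an explicit “N/A”-style choice pays off because non-users show up as a countable category rather than as blanks. A minimal sketch, with a hypothetical column name, of how those answers might be tallied:

```python
# A minimal sketch with a hypothetical column name. Because the question was
# mandatory and included an explicit "didn't know how" choice, every respondent
# lands in exactly one bucket and none are lost as unanswered blanks.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")

print(responses["used_shifting_lenses"].value_counts(dropna=False))

# Usage among respondents who actually knew the feature existed.
aware = responses[responses["used_shifting_lenses"]
                  != "I didn't know how to view the other lenses"]
print(f"{aware['used_shifting_lenses'].eq('Yes').mean():.1%} "
      "of aware respondents used the feature")
```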

7. Avoid leading language

Be cautious of phrasing questions in ways that make users more inclined to answer a certain way. For example, instead of asking “Would you like more information displayed?”, a more neutral alternative is “What did you think about the amount of information displayed?”. Similarly, instead of asking “Did you find the update annoying?”, you might rephrase the question as “What did you think of the update?”. Accurate, unbiased data is essential when performing analyses that are intended to be actionable and to provide the team with recommendations that help move the needle for future experiments.

***

Here are a few sample surveys that were sent to Guardian Mobile Innovation Lab experiment subscribers:

Olympics notifications
US Presidential Election Live Results notifications
Live video mini player for the Presidential Inauguration 

***

We hope these best practices are useful and effective for your team as you explore ways to incorporate more actionable analyses into your project lifecycles.

This post was originally published on MaassMedia's blog.

