Welcome to our Blog

Things to avoid when designing survey questionnaires

April 27, 2021


Research findings can only be as good as the data upon which the findings are based. In turn, the data can only be as good as the data collection tools and methods. Over the years, we have had the privilege to review and work with survey questionnaires from diverse sectors and covering a variety of research topics and research objectives. In the process, we have observed some pitfalls to avoid if survey instruments are to meet the basic requirements of reliability, validity, and user-friendliness. Below, we highlight some of the pitfalls that we have observed in questionnaire design.

Double-barreled questions

This is where two or more constructs or phenomena are measured in the same question. An example is a “yes/no” type question phrased as follows: “Did you report the matter to the police and have they provided updates on their investigations?”. The problem here is that some respondents may have reported the matter to the police, but the police may not have given them any progress updates. They are therefore left in limbo on how to answer the question. Depending on how the survey is programmed, some may then randomly choose either a “yes” or a “no” just to get past the question. The simple solution is to split it into two separate questions.

Questions that require computations

Respondents should not be burdened with questions that require some form of mental or arithmetic calculation before they can provide an answer. So, instead of asking “What percentage of your monthly income are you left with after paying for basic expenses?”, rather break it down into two parts: a) “What is your monthly income before expenses?” and b) “On average, how much do you spend on basic expenses per month?”. The percentage of monthly income left after basic expenses can then be calculated during data analysis. In any case, how confident can one be that respondents are calculating the percentages accurately?
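For illustration, here is a minimal sketch of how that derived percentage could be computed at the analysis stage. It assumes the two answers sit in a pandas DataFrame; the column names and figures are hypothetical.

```python
import pandas as pd

# Illustrative responses to the two separate questions
responses = pd.DataFrame({
    "monthly_income": [12000, 8500, 20000],   # a) monthly income before expenses
    "basic_expenses": [9000, 7000, 12500],    # b) average monthly basic expenses
})

# Percentage of monthly income left after basic expenses, derived by the analyst
responses["pct_income_left"] = (
    (responses["monthly_income"] - responses["basic_expenses"])
    / responses["monthly_income"] * 100
).round(1)

print(responses)
```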

Skewed Likert scale options

Likert scales should ideally be balanced in terms of providing respondents an equal chance of choosing answer options that lie on the positive and negative ends of the scale. For example, in a question: “How worried are you about the possible impact of the COVID-19 pandemic on your business?”, the following answer options are biased towards capturing responses that show participants are “worried” about the impact of the pandemic.

  • Extremely worried
  • Very worried
  • Somewhat worried
  • Not worried at all

The scale could therefore be balanced out by using the following answer options instead:

  • Very worried
  • Somewhat worried
  • Somewhat not worried
  • Not worried at all

Mixing positive and negative statements

A commonly used approach in surveys is to measure certain factors based on a list of subconstructs that are theoretically known or assumed to constitute that factor. For example, Customer Service at a bank could be assessed by asking customers to rate the bank on the following sub-elements (statements).

  1. The bank’s services are easily accessible.
  2. The bank understands my needs.
  3. My queries are resolved quickly.
  4. The bank treats me as a valued customer.

Let us say the above statements are rated on a 5-point Likert scale where 1 = strongly disagree, 2 = disagree, 3 = neither agree nor disagree, 4 = agree and 5 = strongly agree. In such a case, the overall customer service score can be computed as the average (mean) score out of 5, where the closer the mean score is to 5, the higher the customer service rating.

Now, imagine statement 3 above was instead phrased as “My queries are NOT resolved quickly”. While respondents can still correctly choose the extent to which they agree or disagree with the statement, computing the overall customer service score becomes a bit challenging. This is because the ideal answer for this statement becomes 1/5 (i.e. strongly disagree) while it is 5/5 (i.e. strongly agree) for the other three statements. Mixing positively and negatively phrased statements also complicates scale reliability assessment, as measured by tests such as Cronbach’s Alpha.
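Where a negatively phrased statement cannot be avoided, the usual remedy is to reverse-code it before scoring. Below is a minimal sketch of reverse-coding on a 5-point scale and then computing the mean score and Cronbach’s Alpha; the data and column names are purely illustrative.

```python
import pandas as pd

# Illustrative ratings on the four statements (1 = strongly disagree ... 5 = strongly agree)
items = pd.DataFrame({
    "q1_accessible":   [4, 5, 3],
    "q2_understands":  [4, 4, 2],
    "q3_not_resolved": [2, 1, 4],  # negatively phrased statement
    "q4_valued":       [5, 4, 3],
})

# Reverse-code the negative item: on a 1-5 scale, recoded value = 6 - original value
items["q3_resolved"] = 6 - items["q3_not_resolved"]

score_cols = ["q1_accessible", "q2_understands", "q3_resolved", "q4_valued"]
items["customer_service_score"] = items[score_cols].mean(axis=1)

# Cronbach's Alpha on the recoded items
k = len(score_cols)
item_variances = items[score_cols].var(ddof=1)
total_variance = items[score_cols].sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(items[["customer_service_score"]])
print(f"Cronbach's Alpha: {alpha:.2f}")
```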

The list of pitfalls above is clearly not exhaustive. Best practice is therefore to pilot test survey questionnaires prior to rolling out the survey for main data collection. This is because no amount of post-data collection analysis will fix errors associated with data that was collected using a faulty tool.

Data Collection


Sampling methods are either probabilistic or non-probabilistic – not quantitative or qualitative

October 27, 2020


Reference to “qualitative” versus “quantitative” sampling methods abounds in everyday parlance as well as in some anecdotal articles on research sampling methods. But, strictly speaking, sampling methods are not quantitative or qualitative; they are either probabilistic or non-probabilistic. The fact that qualitative research usually uses non-probability sampling methods does not justify referring to those methods (e.g. purposive sampling and snowballing) as “qualitative sampling” methods. It is similarly misleading to refer to probability sampling methods as “quantitative sampling” methods. Indeed, most quantitative research studies also employ non-probability sampling techniques such as convenience and quota sampling.

A study does not become qualitative or quantitative solely because of the sampling method used. Rather, the main difference between qualitative and quantitative research is that the former is largely theory building while the latter is largely theory testing. This major difference is manifested in the form of data that is collected (i.e. words versus numbers) and in the data analysis methods used (i.e. statistical versus thematic). So, the research methodology is decided first based on the study’s analytical objectives, and the appropriate sampling method (i.e. the way to go about drawing and studying a fraction of the target population) is decided thereafter, depending on the profile of the target population and other practical considerations. In my view, if the target population is large and easy to access (e.g. people who buy bread) and a probability sampling technique is easy to implement, this should be done irrespective of research methodology. By the same rule, non-probability sampling (e.g. convenience sampling) is more appropriate for finite and hard-to-find sub-populations and can be used regardless of research methodology.

There shouldn’t be a blanket rule that precludes the use of probability sampling in qualitative research. For example, if I was conducting focus group discussions with people who shop at a given shopping mall, I could identify and recruit one or two people who shop at the mall and thereafter ask them to refer me to other people they know who also shop at the same mall (i.e. use the snowballing technique). But I could also post recruiters at the entrance to the shopping mall and instruct them to intercept every 3rd person coming through and invite them to take part in the focus group discussion (i.e. use systematic random sampling).
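To make the second recruitment rule concrete, here is a minimal sketch of systematic selection (every 3rd arrival, with a random starting point); the shopper list is purely illustrative.

```python
import random

def systematic_sample(population, k):
    """Select every k-th element, starting from a random position within the first k."""
    start = random.randrange(k)
    return population[start::k]

# Illustrative stream of shoppers arriving at the mall entrance
shoppers = [f"shopper_{i:03d}" for i in range(1, 61)]

recruited = systematic_sample(shoppers, k=3)
print(len(recruited), recruited[:5])
```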

It would appear one of the reasons for the conceptual exclusion of probability sampling in qualitative research is that qualitative research findings are not meant for generalization to the population from which the sample was drawn. Admittedly, that is the case in most qualitative studies. But the question is – if the population of interest is finite enough (e.g. the shopping mall example given above) and a probability sampling technique can be employed to select the study participants, why can’t qualitative research findings be made generalizable too, within certain analytical parameters? Put differently, given some known advantages of probability sampling, what harm is there in adopting this sampling methodology in instances where it lends itself to qualitative studies too?

Data Collection


We are scoring almost 10 out of 10 on customer satisfaction!

October 23, 2020


Customer satisfaction surveys are arguably the most common type of market research undertaken by brands, products, and service providers. This makes perfect sense given that customers are the most important asset for any business and are critical for business sustainability and success.

In line with the foregoing, and as market research consultants, a message that we constantly communicate to current and potential business clients is the importance of tracking customer experience on a regular basis. One morning, it dawned on us that we were a business too and yet we had not conducted a formal customer experience study in the 5 years of our existence. Surely, we had to practice what we preached! Admittedly, we did have a good sense of what our clients felt about our services through various solicited and unsolicited feedback, but we still needed to get feedback in a more systematic and inclusive manner. Thus, in October 2020, we invited previous clients to complete an online survey to rate our services.

A total of 30 clients who had used our services in their individual capacity (i.e. as opposed to organisational representatives) completed the survey, which had just three questions, as below:

1) – How did you get to know about our services?

One of the most reliable indicators of service quality and customer satisfaction is when clients who have received a service go on to recommend the same service provider to others. It is therefore pleasing that as many as 60% of the clients who completed the survey had been referred to us by previous clients. The other 40% had found information about us on the Internet.

2) – How satisfied or dissatisfied were you with our services?

Almost all (97%) of the clients were satisfied with the service they had received from us. Notably, this included 2 in 3 clients (67%) who were “extremely satisfied”. Altogether, none of the 30 clients in the survey expressed any level of dissatisfaction with our services.

3) – How likely are you to recommend our services to others?

The Net Promoter Score (NPS) measures customer loyalty by asking customers how likely they are to recommend a brand or service to others. The NPS is the percentage of customers who, on a scale of 0 to 10, rate their likelihood to recommend a brand or service to a friend or colleague as 9 or 10 (i.e. promoters) minus the percentage rating this at 6 or below (i.e. detractors). Customers who give a rating score of 7 or 8 are referred to as “passives”. The NPS correlates highly with customer satisfaction and customer loyalty and has also been shown to correlate with revenue growth relative to competitors. As a general guideline, an NPS of between 0 and 30 is good, 30 to 70 is excellent and 70 to 100 is simply amazing. Our NPS from the survey came out at 63, which is quite pleasing. This score is indeed consistent with the earlier reported prevalence of positive word-of-mouth referrals by our clients.
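For readers who want to replicate the calculation, here is a minimal sketch of the NPS formula applied to a set of 0-10 likelihood-to-recommend ratings; the ratings shown are illustrative, not our survey data.

```python
# Illustrative 0-10 likelihood-to-recommend ratings
ratings = [10, 9, 9, 8, 10, 7, 6, 9, 10, 5]

promoters = sum(1 for r in ratings if r >= 9)   # ratings of 9 or 10
detractors = sum(1 for r in ratings if r <= 6)  # ratings of 6 or below
# Passives (ratings of 7 or 8) count only towards the base

nps = round((promoters - detractors) / len(ratings) * 100)
print(f"NPS = {nps}")
```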

Conclusion

Although a cross-sectional design such as the one we used in this survey does yield valid results, it is largely retrospective. For example, some of the clients in the survey had last received a service from us 4 years prior to the survey and, as such, their feedback was susceptible to time-lapse biasing factors. We therefore decided not to close the survey whose results are reported above, but to convert it into an ongoing real-time customer feedback system. Through this system, we send new and repeat clients a digital link through which they can rate our services immediately after a service interaction. Among the advantages of real-time client satisfaction measurement is that we get the opportunity to correct any shortcomings before we serve the next client and thus improve our clients’ overall service experience.

You can check real-time feedback results from our clients here.

Research Conceptualization

The importance of post-COVID-19 usage and attitudes research

August 24, 2020


In market research, Usage and Attitudes (U&A) studies are a common tool that provides brands with a holistic understanding of the market in which they operate. Among other things, the purpose of U&A studies is to determine the size and value of the market for a specific product or service. The idea is to continually innovate product and service offerings, as informed by market needs and expectations, and thus sustain a competitive edge in the market.

Brands usually do not conduct U&A studies frequently. This is because consumer needs generally do not change too often, but rather evolve over time. However, there are instances when market dynamics change suddenly and drastically due to some unforeseen social, economic or other such developments. An example is the onset of the COVID-19 pandemic. The pandemic not only disrupted health systems and economies, but also impacted consumers financially, psychologically, and behaviourally. For example, in trying to enhance their resilience against the virus, families and individuals are known to have changed their diet regimes. People changed product types, purchase frequencies, purchase channels, consumption occasions, monthly budgets, and so on.

It is not known if or when the COVID-19 threat will be contained. But what is likely is that even after the pandemic has been fully contained, some of the consumer behaviours adopted during the pandemic will be retained going into the future. This could be because consumers may have incidentally found some relevance and benefits in some of these behaviours. Brands are therefore more or less obligated to conduct post-COVID-19 U&A studies to investigate the nature and extent of consumer behaviour and other market changes occasioned by the pandemic. Such studies are important in answering the following critical questions, inter alia:

  • How has product/service consumption changed?
  • What products/services are consumers now using?
  • Why are they using those products/services?
  • How frequently are they using the products/services?
  • Which channels are they using to buy the products/services?
  • How much are they spending on the products/services?
  • Are the products/services performing to consumer expectations?

The research methodology for post-COVID-19 U&A studies will depend on such things as the target market profiles, product/service type and specific business objectives, but would typically comprise both qualitative and quantitative research. As we have argued elsewhere, the most insightful research is where the statistics from quantitative research are given context and meaning by the “lived” experiences and perspectives of the consumers, as gathered through qualitative techniques.

Research Design

Impact of data collection method on Net Promoter Scores (NPS)

June 21, 2019


Research has consistently shown that the method of data collection affects the responses given by the study participants. Among other factors, this is due to differences in the extent to which the participants feel under pressure to satisfice or to provide socially desirable responses (Holbrook et al., 2003).

We examined responses from a customer satisfaction study in which different data collection approaches were used to administer the same survey. The dual-method approach had been adopted due to some sample accessibility challenges. The study had a sample of 1023 customers and of these 580 (57%) were interviewed using computer-assisted telephone interviewing (CATI) and 443 (43%) were interviewed face to face.

One of the metrics in the study was the Net Promoter Score (NPS), in which customers who gave scores of 9 or 10 on their likelihood to recommend the service provider to others were categorized as Promoters. We found that 87% of the face-to-face respondents were in the Promoter category compared to 61% among the telephonic participants, and this was a statistically significant difference (p=0.01).

A follow-up logistic regression model controlled for gender, age, race, and region. The results showed that face-to-face respondents were 4 times more likely to be Promoters than those interviewed over the phone (OR = 4.2, 95% CI = 3.0-5.9). This raised reliability and validity issues. We concluded that the face-to-face respondents had probably felt under more pressure to be “nice” about the service provider and therefore recommended that responses obtained using this method be discarded altogether.
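For readers interested in the mechanics, here is a minimal sketch of the two analyses described above, using statsmodels. The Promoter counts are back-calculated from the percentages reported in this post, and the logistic regression part assumes a hypothetical respondent-level DataFrame `df` with the listed columns.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# 1) Compare the share of Promoters between the two data collection modes
promoter_counts = np.array([round(0.87 * 443), round(0.61 * 580)])  # face-to-face, CATI
sample_sizes = np.array([443, 580])
z_stat, p_value = proportions_ztest(promoter_counts, sample_sizes)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# 2) Logistic regression controlling for demographics (requires respondent-level data)
# import statsmodels.formula.api as smf
# model = smf.logit("promoter ~ mode + gender + age + race + region", data=df).fit()
# odds_ratios = np.exp(model.params)         # e.g. OR for face-to-face vs telephone
# conf_intervals = np.exp(model.conf_int())  # 95% confidence intervals
```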

In any case, it’s better to assume happy customers are unhappy than to assume unhappy customers are happy.

Data Collection


Race differences in usage of body care products

May 18, 2019


Although a variety of socioeconomic factors influence the purchase and use of personal care products, research has also shown racial variability in the physiological properties of skin which, in turn, influences the choice of body care products by consumers of different races. We analysed some survey data on personal care products usage and found the following differences between black and white consumers in South Africa.

  • Black consumers were about twice as likely as white consumers to use petroleum jelly (OR=2.4, 95% CI 1.4 – 4.1)
  • Black consumers were almost four times as likely as white consumers to use hair styling products (OR=3.7, 95% CI 2.1 – 6.4)
  • White consumers were about three times as likely as black consumers to use body wash or shower gel (OR=2.9, 95% CI 1.7 – 5.1)
  • White consumers were over seven times as likely as black consumers to use antiperspirant or deodorant (OR=7.4, 95% CI 1.0 – 55.0)

There were no significant differences between black and white consumers in relation to usage of body cream, body lotion and face care products.

Data Collection

Ranking responses from focus group discussions

January 27, 2019


In qualitative research, responses and viewpoints are identified and reported regardless of how often they were mentioned and irrespective of the point in the discussion or interview at which they were mentioned. In other words, both spontaneous and probed responses are important. Although there is scientific merit in attaching certain interpretations to responses that are given first or spontaneously, it ought to be borne in mind that some important points or insights often only emerge after encouragement and probing by the researcher.

Another important point here is that whereas determining “order of mention” is quite feasible in unstructured qualitative interviews with just one respondent (i.e. one-to-one depth interviews), it would be a methodological flaw to try to do so in focus group discussions. First, and by instruction, focus group participants speak one at a time. Therefore, “first mention” in a group discussion is really only “first mention” by the participant who happened to speak first. It does not follow that a different participant speaking first would have said the same things and/or in the same order. Secondly, participants in focus groups tend not to repeat points already covered by others. It can therefore be misleading to try to infer the importance of certain ideas or viewpoints based just on the number of times something was said in a focus group (e.g. through a basic word count). Indeed, once a point has been made and noted, moderators tend to probe for additional, or even opposing, viewpoints, and not for confirmation of points already made.

In addition to the above within-group challenges, researchers often conduct the analysis across several focus groups and there is usually no uniformity across the groups in relation to what is mentioned first or what is mentioned the most. The analysis therefore focuses on identifying ideas that came up within and across the groups, or ideas that came up in some groups but not in others, without attaching too much importance to the order and frequency of the opinions. That is the function of quantitative surveys.

There are several dynamics at play in focus groups that have biasing effects too. One such bias stems from dominant characters who tend to speak first and/or the most. There is thus a danger of ending up taking their individual opinions and projecting them as overall group opinions. That is why, where some form of ranking or quantification is required, structured response forms are handed out for self-completion by the individual participants, or the facilitator asks for a show of hands and performs a count.

Ultimately, the real value of qualitative research lies in identifying and listing all the relevant factors in relation to the phenomenon being investigated. The importance and/or prevalence of those factors within the target audience can then be measured and reported more definitively through a follow-up quantitative survey.

Research Design
