There’s a lot of talk in the survey industry these days about experience: customer experience, employee experience, product experience, brand experience – the list goes on. However, there’s one particular group of people whose experience seems to be conveniently overlooked. They are often a very important group of people – they are the survey-takers.
I usually (and incorrectly) call survey-takers Participants, which probably stems from my background in psychology. The term has its roots in qualitative research where typically a greater contribution is required from people taking part in research. Interestingly, before adopting the term Participants in the 1990s, Psychology as a discipline had used the much harsher term Subjects to refer to people taking part in experimental/academic research. In fact, the most accurate term for a survey-taker is probably a Respondent – as the person is simply responding to questions.
Why do people take surveys in the first place?
Whatever you like to call the people responding to your surveys, perhaps one of the hardest things to understand is why the hell anyone would want to spend their time filling in a survey anyway? Obviously, a lot of survey-taking in market research is incentivised (apparently you can even have a career these days as a professional survey respondent). Many take part to help – be that themselves, the company they work for, or a product or brand they like. Some just like sharing their opinions and want to make their voice heard. However, the extent to which the traditional survey methodology can deliver this is arguably limited. Ticking some boxes, only for the data to disappear, be aggregated with the responses of thousands of other people and then be presented to a tiny handful of people is, from my perspective, a pretty ineffective way of making your voice heard or sharing your opinion. If I wanted to make my voice heard about something, there are many channels open to me other than waiting for a survey to appear.
Why is the participant experience important?
The experience of participants in surveys is important for a number of reasons. From the perspective of the market research industry as a whole, survey-takers are the fuel that drives the survey machine – and there is a real concern that in some areas of research, survey responses will eventually run dry. From an experience management perspective, most survey-takers are pretty savvy about surveys these days and can easily feel betrayed when they are given a bad experience. What many people forget is that the research itself is a touchpoint which should leave survey-takers with a positive impression. It’s also important because the more engaged and active survey-takers are in the data collection process, the better the integrity of the data and the deeper the insights.
Does the survey industry really care about participants’ experiences? The answer is – not really. Most of the companies commissioning or conducting the survey couldn’t care less about participants’ experience of filling in their surveys, as long as there are enough of them to avoid an embarrassing response rate. Participants are a commodity.
This is the paradox of Experience Management: why are companies that are so focused on managing experience so nonchalant about the experiences of the people who fill in their surveys? After all, in many instances, the survey-takers are important – they themselves are often the employees, the customers or the users. Why would companies want to conduct survey research that alienates people when it could be creating a positive impression?
What is a bad survey experience?
Before talking about what constitutes a bad survey experience, it’s useful to point out that it’s actually extremely difficult, if not impossible, for a traditional survey to offer anything but a bad experience. This relates to the very nature of surveys as we know them. No matter how pretty the questionnaire or slick the user interface, the traditional survey is undeniably a one-way and isolating experience for the respondent.
If you’re ok with that, then here’s a short list of other things about surveys that we all hate (by no means exhaustive):
- Long surveys: No one likes long surveys, do they? So honestly, how good are the insights you’re getting? Do you really expect people to maintain focus for all that time and not get bored, distracted, or speed through the questions?
- Using one response scale: Asking every single question on the same agree-disagree response scale – yawn! This is terrible survey design.
- Using grid questions: Multiple choice questions with response options displayed in a grid or in columns encourage speeding and a lack of attention. They are also not a good interface to use on mobiles.
- Repetitive questions: Asking a lot of questions about the same topic from slightly different angles
- No open-ended questions: Giving respondents no opportunity to answer in their own words
- Frustrating questions: Questions that are overly long or detailed
- Using jargon: Questions laden with corporate waffle or jargon
- Irrelevant questions: Asking questions that do not apply to the respondent or in which they have no interest
- Long screening surveys: Completing a lengthy screener and then not qualifying (common in consumer panels/incentivised surveys)
How can the participant experience be improved?
So what can be done if you have to use a traditional survey methodology? Here are some ideas:
- Think about surveys as an important touchpoint
- Keep it short
- Make survey design respondent-centric
- Keep it relevant and make sure to include at least some questions that will be interesting to respondents
- Be transparent
- Share the findings where appropriate
- Analyse and monitor respondent statistics, such as drop-out rates and completion times
How are survey methodologies evolving to improve the participant experience?
With the evolution of new survey technologies that combine quantitative and qualitative input with AI, there is hope that survey-takers will move from being simple respondents to active participants. After all, what defines a survey in the market research sense is very broad – read any dictionary definition and a survey is just asking people questions about themselves, their experiences and opinions. To me, this means there is plenty of room to expand what it means to conduct a survey.
There are a couple of promising areas. The first is survey tools that involve some form of interaction or crowdsourcing between participants. This gives rise to what is known as Social Collective Intelligence or Swarm Intelligence – a form of insight that only emerges when interactions between people are leveraged. What distinguishes this category of tools is that they are able to orchestrate the collection and analysis of human interaction. They are closed-loop systems – they have one or more feedback loops between their input and their output. This means that participants help to shape the output through their interactions with the input provided by other participants. Not only does this provide a more interesting experience for participants, but the outputs can often be much more insightful than those of tools where there is no such interaction. (Examples here include Crowdoscope, Waggl, Remesh).
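To make the closed-loop idea concrete, here is a toy sketch in Python – purely illustrative, and not the design of any of the tools named above. Each participant rates a sample of earlier comments before adding their own, and those ratings feed back into which comments later participants are shown, so the crowd's reactions shape the output:

```python
import random

random.seed(0)  # reproducible sampling for this sketch

comments = []  # each entry: {"text": str, "score": float, "ratings": int}

def sample_for_rating(k=3):
    """Pick up to k earlier comments, weighted by their current average score."""
    if not comments:
        return []
    weights = [c["score"] + 1 for c in comments]  # +1 so unrated comments can still surface
    return random.choices(comments, weights=weights, k=min(k, len(comments)))

def participate(text, rate_fn):
    """One participant's turn: rate a sample of earlier comments, then add their own."""
    for c in sample_for_rating():
        r = rate_fn(c["text"])  # a rating between 0 and 5
        # Update the running average rating for this comment.
        c["score"] = (c["score"] * c["ratings"] + r) / (c["ratings"] + 1)
        c["ratings"] += 1
    comments.append({"text": text, "score": 0.0, "ratings": 0})

# Simulate a handful of participants whose (hypothetical) ratings favour longer comments.
for text in ["ok", "quite detailed feedback here", "short",
             "a long and thoughtful comment about the product"]:
    participate(text, rate_fn=lambda t: min(5.0, len(t) / 8))

top = max(comments, key=lambda c: c["score"])
print(f"highest-rated so far: {top['text']!r}")
```

The feedback loop here is deliberately crude – real tools use far more sophisticated sampling and analysis – but it shows the key property: what each participant sees depends on what previous participants did.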
The second area is Conversational Surveys. Tools that combine conversational interfaces with AI (in this case Natural Language Generation) are beginning to take off – and they are improving fast. They seem particularly good for gathering experience data; as well as capturing both qualitative and quantitative input, they can reply intelligently to probe and uncover deeper insights. (Examples here include Rival Technologies, Wizu, Survey Sparrow, Surveybot or Acebot.ai)
It will be interesting to see how the survey industry evolves and fragments over the coming years. However this happens, let’s hope it really does turn respondents into participants.