On 12/06/2018

Survey research

I’ve done a lot of ranting about surveys lately on twitter. (If you’re interested at all, just get on twitter and search my handle @lmeloncon and surveys.) I’ve continued these rants in print in two forthcoming publications: one about empirical research in the field and one about doing programmatic research in the field.

But as was pointed out to me on twitter (thanks, Breanne), the rant needs to be followed up with suggestions on how to do things better. So here I’m gonna focus on a few things that the forthcoming publications didn’t totally cover.

Problems with our current models:

  • There seems to be a disconnect about what a survey actually is and when you should use it.
  • We’re using surveys because they seem easy, when in fact they are not being used as the method was intended, and more often than not a survey is not the best method to answer the research question.
  • The design of most of them is poor and undermines any results that may be generated.
  • Poorly formed questions that fall into classic mistakes of survey question writing (double-barreled, biased, absolutes, leading, etc.).

I’m gonna start with some definitions first so that we all kind of know what I’m talking about. This is important because, as the empirical piece I mentioned above clearly shows, there are a ton of assumptions in tech comm about what something is. And unfortunately, a lot of those assumptions are part of the problem with poor research study design, particularly around surveys.

So first and foremost, a survey is a quantitative instrument. That’s what surveys are designed for: to get a lot of information from a lot of people. Though, in tech comm, they often end up being used as a qualitative instrument because it’s “easy” to send one out to “a lot” of people. Those words are in quotes because a good survey is not an easy method; it’s often used in tech comm simply because it is easy to deliver.

It’s a quantitative instrument because it is supposed to study a sample of a particular population. Sampling is also something that we often don’t do well in tech comm. You need to really think through who it is you want information from, why those are the best people, and whether they are really representative of the larger population your sample is standing in for. You often need a minimum number of folks in your sample to be able to draw any conclusions from the data beyond basic descriptive statistics. Descriptive stats are not a bad thing, but a survey is meant to gather quantitative data that can be analyzed from a number of perspectives.
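That “minimum number of folks” can be made concrete. One standard starting point (not a formula from this post, just a common one in survey methodology texts) is Cochran’s sample-size formula with a finite-population correction. A minimal sketch, assuming a 95% confidence level and a 5% margin of error:

```python
import math

def cochran_sample_size(population=None, z=1.96, p=0.5, e=0.05):
    """Minimum sample size for estimating a proportion.

    z: z-score for the confidence level (1.96 is roughly 95%)
    p: expected proportion (0.5 is the most conservative choice)
    e: acceptable margin of error
    population: total population size, if known (finite-population correction)
    """
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    if population is not None:
        # Correct downward when the population itself is small.
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

print(cochran_sample_size())                 # effectively unlimited population: 385
print(cochran_sample_size(population=2000))  # a hypothetical field of ~2,000 people: 323
```

Numbers like these are part of why a few dozen listserv responses can’t support much beyond descriptive statistics.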

Keep in mind that surveys are meant (in general) to describe certain aspects or characteristics of a population, reflect attitudes of respondents, and/or test concepts or hypotheses. In many ways, a survey is an exploratory instrument that needs to be combined with other methods to actually get the data that you may need to answer your research question(s). Most of the surveys in our field are cross-sectional surveys that are used to gather information from a specific population at a single point in time.
Most of the surveys in our field (as seen in the published research) aren’t robust enough to support even simple internal measures to verify their reliability, and often the sampling methods are so weak that they further diminish the reliability, validity, and trustworthiness of the results.
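One of the simplest internal measures a more robust survey could report is Cronbach’s alpha, a standard check of internal consistency across the items of a scale. A minimal sketch in plain Python; the Likert data below is made up purely for illustration:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: one list per question, each holding every respondent's score
    for that question (so all lists are the same length).
    """
    k = len(items)
    # Each respondent's total across all items.
    respondent_totals = [sum(scores) for scores in zip(*items)]
    item_variance = sum(pvariance(col) for col in items)
    total_variance = pvariance(respondent_totals)
    return (k / (k - 1)) * (1 - item_variance / total_variance)

# Three hypothetical Likert items answered by five respondents:
likert = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 1],
]
print(round(cronbach_alpha(likert), 2))  # → 0.92
```

A value above roughly 0.7 is the usual rule of thumb for acceptable consistency; a one-afternoon survey rarely gets tested against even this.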

For example, it’s common for folks to send out a survey to a listserv around a big topic. A recent example was a survey about keywords and concepts in tech comm. First, as the programmatic perspective piece identifies, sampling via the listserv immediately wrong-foots this survey. Second, the actual survey itself was poorly formatted, with leading questions to name but one of its problems. The better way to use the survey as a method (though I believe interviews would have been a better method for what the researchers were really trying to get at) would have been to send the survey to those folks who clearly self-identify as tech comm scholars and teachers. This could be done by generating a list of those people who have published in the field’s journals or who teach in programs. This way the sampling would be more reliable and the results would have a better chance of being valid. And yes, it would be more time consuming.

Note: I totally and completely understand the time pressures (both perceived and real) around research. I get that. I also totally can figure out why certain research was done the way it was done. But my point is that we need to hold ourselves to a higher standard if our research is to ever be really usable and something we can build on and also share with other fields.

“Survey” and “questionnaire” are often used interchangeably, which is an incorrect move to make. In the early days of surveys, the questionnaire was the technical part of the survey that contained the questions. “Questionnaire” is also a term that can be used to describe interviews and other research approaches. So it’s a tricky word that should also be defined. But with the advent of electronic technologies that make the distribution of the survey so much easier, it becomes important for all researchers, including those in tech comm, to have a better understanding of terminology. And if you don’t agree with my general definitions here, then just be certain that when you’re writing up your findings you include definitions of what you mean so that readers can then interpret and use your results.

Now, a questionnaire can be more qualitative (with a few quantitative things thrown in) and delivered electronically. This raises that weird gray space: what do we call the thing that is mostly or solely qualitative questions delivered electronically? We’ve called this thing an “asynchronous interview,” but someone critiqued it as simply a “questionnaire.” Yes and no. It is a questionnaire, which is also the terminology used for semi-structured interview questions. The reason we coined this new term is that we needed a term that provided more description of what it was we were actually doing. We actually followed up with many of the respondents in ways that mirrored an interview. The problem is then what happens when this method is replicated without the original care and exigence? That’s a question for another day and, hopefully, researchers will actually read why we did what we did. I can assure you that for that project it was no quicker doing it the way we did it compared to traditional interviews.

I digress. A big part of my problem with surveys is that when I click through them, most of the time it seems that the question being asked would be better suited to a different method, and most often that method is qualitative.

Yes, a survey can be a mixed-methods instrument that includes qualitative questions, but I stand by my assertion that they are meant to be quantitative instruments. They need the numbers so that sampling error can be minimized and both internal and external validity can be established.

For me, a good survey is quantitative, replicable, systematic, representative, and impartial/objective (as much as any can be). This means it takes time to do one well.

If you want to do a survey because you want or need quantitative data for your question(s), then you need to do it with more rigor in design and testing. Here are two books that are useful in helping you understand survey design: International Handbook of Survey Methodology, edited by de Leeuw, Hox, & Dillman, and Survey Research Methods by Fowler. And don’t be afraid to head over to psychology or education to take a methods course on surveys.

  • Determine if the survey is indeed the best method for your question: step away from the idea that you want to use it because you feel you get more information from more people. The response rate on surveys is around 25% or less. Those are not good odds for a problematic method. (Problematic here because of the issues pointed out above.) Instead, really consider all the possibilities of whether a survey is really the appropriate method for your questions and what you’re trying to accomplish.
  • Test and re-test your questions: in tech comm we’re pretty familiar with usability testing, and we need to use those skills to test our questions and the instrument in multiple ways and adjust accordingly. This preliminary work is crucial for a good survey, but it’s all time consuming. So the idea that a survey is “easy” is just hogwash. A good survey is time consuming and laborious. But yeah, a survey you craft in an afternoon, send to one of your friends, and then launch the next week, well, yeah, that is easy.
  • Think through your sampling method: you have to know and be able to explain why you chose to get results the way that you did. That’s part of your methodological stance. It’s also a crucial part of the practice of research that helps readers understand and believe your data. Tech comm should be critically moving past convenience samples, because in large part the convenience of the sample is actually undermining your results. In a project that will come out later this year or early in 2019, our sampling method took over 200 hours. Yes. And that’s when I stopped keeping track of the time. But the sampling method for that project was a key indicator of whether we could claim the results were generalizable.
  • Test and re-test your questions: worth saying twice, because most of the questions on surveys I’ve seen over the last couple of years are biased and leading, which completely undermines the actual data collected.
  • Make a clear research plan: this is a crucial part of the research study design, covering how and when you’re going to launch the survey, how often to follow up, and what that looks like. It also requires an understanding of return rates and of when you think you may have enough data. In some cases, the IRB may have input on these things. For example, in our survey of contingent faculty, we were only allowed to follow up three times.
  • Understand that the survey and its results are rhetorical: this is not an eye-opening statement, but it’s one that bears repeating. While social scientists love the survey because of its perceived objectivity, those of us in tech comm understand the rhetorical nature of surveys. This is important in all five of the previous design considerations. The ultimate decisions behind choosing the survey and the process of using one need to be considered both rhetorically and ethically. This is done in the description of the research methodology.
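The 25% response-rate figure in the first consideration translates directly into recruitment planning: if your design calls for a certain number of completed responses, you need to contact roughly that number divided by the expected rate, and any IRB-capped follow-ups have to fit inside that window. A back-of-the-envelope sketch (the target numbers are illustrative, not from the post):

```python
import math

def invitations_needed(target_responses, response_rate=0.25):
    """How many people to contact to expect a given number of completes."""
    return math.ceil(target_responses / response_rate)

# To end up with ~320 completed surveys at a 25% response rate,
# you'd need to contact around 1,280 people:
print(invitations_needed(320))        # → 1280
# At a more pessimistic 15% rate, 100 completes needs 667 contacts:
print(invitations_needed(100, 0.15))  # → 667
```

Running the arithmetic before launch is part of the “clear research plan” above; it tells you early whether your sampling frame is even big enough to hit your target.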

Keep in mind that I’ve done surveys. I’ve done them well and I’ve done them poorly. But I also knew that I had to learn how to do them better, which I did. And now it would be helpful to the field if we all tried to do a better job with surveys to ensure that the data we’re gathering is actually good data and can be useful to the field’s knowledge building and work.
