Survey Savvy: Strengths and Shortcomings
With election polling at a fever pitch, we’ve developed a three-part series to help you understand, interpret, and use survey data. This post is the first in the series.
Surveys are like the Swiss Army knives of research: versatile, handy, and useful in a wide variety of situations. But just as you wouldn’t use a Swiss Army knife to fix a car engine, surveys have their limitations. They’re great at answering some questions but not others, and it’s important to recognize that not all surveys are the same. Community surveys, for instance, tend to focus on specific groups or local issues, offering a closer look at particular demographics, while public opinion polls aim to capture a broad snapshot of national or even global attitudes. Each type has its own strengths and weaknesses, and understanding these differences is crucial for interpreting results. In this post, we’ll explore how surveys work, what they can and can’t tell us, and why a thoughtful approach to understanding them matters.
How we pick who gets to answer (and why it matters)
Selecting respondents for a survey is like building a sports team. In an ideal world, you’d use a draft system (random sampling), where every player (respondent) has an equal chance of being picked, ensuring a balanced team. But sometimes, you have to rely on scouting and recruiting (non-random sampling), targeting specific players based on availability or expertise. While both methods can create a strong team, scouting runs the risk of overlooking key talent and skewing the team’s performance. The same goes for surveys: both random and non-random sampling have their place, but each comes with trade-offs in accuracy and representation.
The most reliable way to get an accurate snapshot of a population is through random sampling. This method gives everyone in the population an equal chance of being selected—like pulling names out of a hat (if that hat could fit millions of people). It’s ideal for surveys where you want to generalize the results to a larger group. For example, if you want to know how Americans feel about climate change, random sampling helps ensure you’re getting a wide range of perspectives, not just those of a particular subset of people.
In random sampling, a diverse group of individuals is selected without a specific pattern, ensuring each person has an equal chance of being chosen. (Image generated using OpenAI’s ChatGPT)
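To make the "names out of a hat" idea concrete, here’s a minimal Python sketch of a simple random sample. The population list and sample size are invented for illustration, not drawn from any real survey:

```python
import random

# Hypothetical sampling frame: a list of everyone in the population we can reach.
population = [f"person_{i}" for i in range(1_000_000)]

# Simple random sample: every person has an equal chance of being selected.
random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, k=1_000)

print(len(sample))       # → 1000 respondents
print(len(set(sample)))  # → 1000 (no one is picked twice)
```

In practice, the hard part isn’t the draw itself but building the sampling frame: if your list of "everyone" is incomplete, even a perfectly random draw from it won’t represent the full population.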
However, random sampling isn’t always feasible. It can be expensive, time-consuming, and sometimes you don’t have a complete list of people to sample from. That’s where non-random methods come in. They’re quicker and cheaper but come with trade-offs in accuracy and representation.
One common non-random method is snowball sampling, where you start with a few respondents and ask them to recruit others, who then recruit more, and so on. This is especially useful for reaching hard-to-access groups, like grassroots activists or members of small racial or religious communities. The upside? You can get responses from people you might not reach through traditional methods. The downside? It often produces a biased sample (one in which some members of a population are systematically more likely to be selected than others), because people tend to recruit others similar to themselves, limiting the diversity of perspectives.
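To see how recruitment chains can skew a sample, here’s a small, hypothetical Python simulation of snowball sampling. The population size, community labels, and the assumption that 90% of a person’s contacts come from their own community are all made up for illustration:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical population: each person belongs to one of four communities.
groups = ["A", "B", "C", "D"]
population = {i: random.choice(groups) for i in range(10_000)}

# Index people by community so we can model "recruiting people like yourself".
by_group = {g: [] for g in groups}
for person, g in population.items():
    by_group[g].append(person)

def recruit(person, k=5):
    """Each respondent recruits k others, mostly from their own community."""
    own = population[person]
    return [
        random.choice(by_group[own] if random.random() < 0.9
                      else list(population))
        for _ in range(k)
    ]

# Snowball sampling: start from a few seeds and follow recruitment chains.
sample, wave = set(), [0, 1, 2]  # three seed respondents
for _ in range(4):               # four recruitment waves
    next_wave = []
    for person in wave:
        if person not in sample:
            sample.add(person)
            next_wave.extend(recruit(person))
    wave = next_wave

# Compare the sample's community mix to the roughly even population split.
print(Counter(population[p] for p in sample))
```

Because each wave recruits mostly from within the seeds’ own communities, the final sample over-represents those communities even though the full population is split roughly evenly, which is exactly the bias described above.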
There are also variations on these methods. For example, in stratified sampling, you divide the population into subgroups (like age or gender) and then randomly select from each group to make sure all key groups are represented. Similarly, cluster sampling involves selecting entire groups (or clusters, like neighborhoods or schools) and surveying everyone within those clusters, which helps when you’re dealing with geographically dispersed populations.
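Stratified sampling can also be sketched in a few lines of Python. This toy example uses proportional allocation: draw from each subgroup in proportion to its size (cluster sampling, by contrast, would select whole groups and survey everyone in them). The age brackets and population shares here are assumptions for illustration:

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical population of 10,000 people with an age-bracket attribute.
brackets = ["18-29", "30-44", "45-64", "65+"]
weights = [0.2, 0.3, 0.3, 0.2]  # assumed population shares
population = [
    {"id": i, "age_bracket": random.choices(brackets, weights)[0]}
    for i in range(10_000)
]

def stratified_sample(people, strata_key, n):
    """Split the population into strata, then randomly draw from each
    stratum in proportion to its share of the population."""
    strata = {}
    for p in people:
        strata.setdefault(p[strata_key], []).append(p)
    sample = []
    for members in strata.values():
        share = len(members) / len(people)
        sample.extend(random.sample(members, round(n * share)))
    return sample

sample = stratified_sample(population, "age_bracket", n=500)

# Each bracket appears in the sample in roughly its population proportion.
print(Counter(p["age_bracket"] for p in sample))
```

The payoff is that no key subgroup can be missed by chance: even a small stratum is guaranteed its proportional share of the sample, which a purely random draw can’t promise.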
Are people being honest? (Or just saying what we want to hear)
Surveys are great, but here’s the tricky part: they rely on people telling the truth. A couple of common biases come into play here: social desirability bias and recall bias.
- Social desirability bias: In surveys as in real life, people tend to want to make themselves look good. This means that on sensitive topics (race, politics, how often they floss), respondents might not tell the whole truth. (This doesn’t mean they’re lying per se; they just might be unknowingly exaggerating their responses.)
- Recall bias: Then there’s memory. Ask someone what they had for breakfast three weeks ago, and you’ll probably get a blank stare. So when surveys ask people to recall past events or behaviors — like how often they attended protests or made charitable donations last year — don’t be surprised if the answers are a little fuzzy. People even misremember who they voted for in the last election on a pretty regular basis.
While we can’t ever ensure 100% honesty and accuracy in respondents’ answers, there are some tactics pollsters use to try to mitigate these biases:
- Anonymity and confidentiality: If you promise to keep respondents’ answers anonymous, they’re more likely to open up and tell you what they really think. Ask people about hot-button issues like immigration or climate change, and you’ll get more candid responses if they know their answers won’t be tied to their names.
- Careful question wording: The way you phrase a question can totally change the answer. Neutral, non-leading language is key. If you ask, “How much do you agree that pineapple on pizza is a culinary disaster?” you’re practically begging for people to agree with you.
What surveys can’t tell us
Surveys are incredibly useful, but they aren’t magic wands. There are key limitations to what survey data can reveal.
For one, what people report in a survey is not always the same as what’s actually happening in the real world. Surveys capture perceptions and experiences as people understand them. So, if a survey shows that 60% of respondents report facing discrimination at work, that tells us these people feel discriminated against, but it doesn’t prove that discrimination occurred in every case. This distinction matters because people sometimes overinterpret survey results.

Imagine a survey where people report feeling unsafe in their neighborhood. That doesn’t necessarily mean crime is rising; it could reflect media coverage, rumors, or a few recent incidents that made people feel unsafe. The survey shows us how people experience their environment, not the actual crime rate. In other words, surveys give us the what, but they often can’t give us the why, or proof that a specific thing is happening as reported. They reveal patterns and perceptions, which are valuable, but they aren’t courtroom-level evidence.
Surveys also cannot establish causality. Just because two things appear to happen together doesn’t mean one caused the other. For instance, surveys might show that people who listen to classical music are more likely to own cats. This doesn’t mean Beethoven causes cat ownership; it could simply be that classical music lovers enjoy quiet environments, making them more likely to have feline companions.
Another important limitation concerns prediction. While surveys can provide a snapshot of current attitudes or behaviors, they cannot reliably predict future actions. For instance, a survey showing that 70% of people intend to vote in an upcoming election doesn’t guarantee they will actually turn out on Election Day. Human behavior is shaped by many factors that surveys can’t account for, such as unforeseen events, last-minute decisions, or changing circumstances.
Finally, surveys are limited in depth. While they offer quick snapshots of opinions, they can oversimplify complex issues, especially when questions are multiple-choice or yes/no. Understanding the full context of why someone feels or acts a certain way often requires deeper investigative methods like interviews or focus groups.
In sum, while surveys are a great tool for spotting trends and uncovering perceptions, they can’t tell us everything. They show us the what, but not always the why or how, and it’s crucial to recognize this distinction to avoid over-interpreting the data.
Wrapping it all up: Surveys are valuable, but not all-powerful
Surveys are powerful tools that can provide valuable insights into what people think, feel, and experience — if they’re done right. Random sampling ensures results are representative of the population, while non-random methods can help reach specific, hard-to-access groups. However, no matter how well-designed a survey is, it has limitations. Surveys rely on people’s honesty and memory, and they can’t provide answers to everything.
It’s important to remember that survey results reflect perceptions, not necessarily facts. They can’t establish causality, dig deep into complex issues, or reliably predict future behavior. So while surveys give us a useful snapshot of what’s happening now, they’re just one piece of a much larger puzzle. And in some cases, other research methods — like focus groups, in-depth interviews, or media analyses — might be better suited for exploring complex questions or understanding the broader context behind the numbers. By combining surveys with these other tools, you can gain a more comprehensive understanding of public sentiment and the issues that matter most.
Interpreting survey data with a clear understanding of its strengths and weaknesses will help ensure that you’re getting the most accurate and meaningful insights possible. In the next post in this series, we’ll dive deeper into public opinion polling, discussing the nuances of opinion polling data, the factors that shape results, and how to avoid common pitfalls when analyzing them. Following that, we’ll explore how survey data can be used to craft targeted messages, engage the media, and drive advocacy work. Stay tuned for more on leveraging survey data for greater impact!
For more information about survey methods and interpretation, feel free to reach out to ReThink Media’s Research & Analysis Team at analysis@rethinkmedia.org.