*Note: This post has been corrected. Thank you to an insightful commenter!*
I am an epidemiologist, which means I spend a lot of time producing—and consuming—health research. Epidemiology is a field that seeks to both address specific questions (e.g., why are women more likely to develop depression than men?) and develop a set of methods for conducting health research (e.g., how would I design a study to assess why women are more likely to develop depression than men?).
Epidemiology provides a set of principles and tools for designing studies to answer research questions in a rigorous and robust way. This is the most important aspect of science: If a study is poorly designed, it really doesn’t matter what the findings are because they are not valid. Almost every health research study you’ve ever read about has been—or at least, should have been—informed by the principles of epidemiology.
Like many people, I find click-bait news headlines—like “Being a Pessimist is Bad for Your Health and Brain”—hard to resist, even though I know that when I look “under the hood” of the study the article is referring to, I will likely find little evidence to support these claims. This is because many of the conclusions drawn from health research, even those published in legitimate scientific journals, stem from studies that were poorly designed.
There is a range of quality (and reproducibility) in the health research published today. Some studies are the equivalent of a Yugo and others the equivalent of a Honda, but unless you have car expertise (listening to NPR's Car Talk doesn't quite cut it), you have no way of knowing the difference a priori.
Rather than concluding that some scientific evidence is more reliable than others in the same way that we conclude that some vehicles are more reliable than others, some people dismiss the entire scientific enterprise. But I still drive to work—and if you are like three-fourths of Americans, you do too, even though some cars do have transmission problems.
I believe that the principles of epidemiology can help address this (legitimate) issue. Scientific knowledge builds incrementally, and even a well-designed study that has none of the problems I discuss in these posts simply provides evidence for (or against) a hypothesis—nothing more and nothing less. Consensus around scientific facts takes time, and human behavior is complex.
I hope that, through this blog, I will be able to help you think more like an epidemiologist in evaluating the quality of health research you come across, and thus better calibrate the amount of credence you should give to those findings.
I will start off by addressing some questions that I ask myself when determining how much I (dis)believe research studies that come across my desk.
First up: Who was in the study, and how did they get there?
There are generally two ways that health researchers identify people to be in a study:
- They solicit individuals directly (e.g., through websites like this). These individuals are generally asked to be in the study because they have some relevant characteristic (such as a history of depression).
- They select a representative probability sample of people, often just living in the general community, using survey techniques (the National Health and Nutrition Examination Survey is a great example).
How people got into the study is important because the characteristics that make someone decide to be (or even be eligible to be) in a study may be correlated with whatever research question is being asked.
Let’s say, for example, a researcher wanted to study whether having depression was associated with owning a dog. And let’s say that the researcher wanted to ensure that the “cases” of depression in their study were “clinically significant,” so they decided to recruit only people who had been hospitalized for depression. This is problematic because only two-thirds of U.S. adults (and only about 40 percent of adolescents) receive any treatment for their depression each year, and of those who do receive treatment, almost all are managed solely with medications. So the definition of depression in this study, one that requires hospitalization, will select a far more severe (and likely different in other important ways) sample of “cases” than is typical; that is, the recruited cases will be “non-representative” of depression cases overall.
If the way that the comparison sample (i.e., individuals who do not have depression) is recruited does not “match” or otherwise compensate for the fact that the depression cases are non-representative, this case definition will result in something epidemiologists call “selection bias.” Selection bias can create an association between an exposure and a health outcome where there is none; it can also mask a true association where there is one.
Let me illustrate this with an example. Assume that the following three things are true:
- The likelihood that you are hospitalized for depression is positively correlated with whether you have private health insurance;
- The likelihood that you have private health insurance is positively correlated with your income; and
- The likelihood that you own a dog is positively correlated with your income.
This means that people who are hospitalized for depression likely have higher incomes than people who have depression but are not hospitalized. Since they have higher incomes, they are also more likely to own a dog. The study may therefore find that depression is associated with a higher likelihood of dog ownership, but this is just an artifact of the way in which people with depression were recruited into the study! In fact, a recent study by the University of Michigan National Poll on Healthy Aging reported the opposite: pet owners had fewer depressive symptoms than non-owners.
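If you like to see things with numbers, the selection mechanism described above is easy to demonstrate with a quick simulation. This is a minimal sketch: every parameter below (income distribution, insurance and dog-ownership probabilities, hospitalization rates) is an illustrative assumption I made up, not an estimate from any real study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical population; all parameters are illustrative assumptions
income = rng.normal(50, 15, n)                                   # income in $1,000s
insured = rng.random(n) < 1 / (1 + np.exp(-(income - 50) / 10))  # rises with income
dog = rng.random(n) < 0.6 / (1 + np.exp(-(income - 50) / 10))    # rises with income
depressed = rng.random(n) < 0.10          # independent of income and dog ownership

# Hospitalization occurs only among the depressed, and far more often if insured
p_hosp = np.where(depressed, np.where(insured, 0.30, 0.05), 0.0)
hospitalized = rng.random(n) < p_hosp

# Truth in the full population: dog ownership is unrelated to depression
print(f"depressed: {dog[depressed].mean():.3f}  "
      f"not depressed: {dog[~depressed].mean():.3f}")   # roughly equal

# Biased study: "cases" are only people hospitalized for depression
print(f"hospitalized cases: {dog[hospitalized].mean():.3f}")  # noticeably higher
```

Because the hospitalized cases are disproportionately insured, and hence higher-income, they own dogs more often than the non-depressed comparison group, even though depression and dog ownership are independent by construction.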
Importantly, there are known ways to address selection bias. One of them is to use large, population-based surveys in which people are selected more or less at random (the second recruitment route described above). This is how the National Poll on Healthy Aging selected its participants, for example.
Needless to say, the issue of selection bias is broader than this brief outline. However, appreciating who is in a study and how they were recruited is an important first step toward becoming a knowledgeable consumer of health research. Hopefully, this will give you some tools to “look under the hood” of studies the next time a flashy headline comes across your newsfeed.