We often see attention-grabbing headlines in the media touting a revolutionary new cure for an ailment or the latest fad diet, and marketing claims backed by “scientific evidence” pervade the health and fitness industry. But how do we know whether the claims stack up? Unfortunately, not all scientific studies carry equal weight, and their findings are often overstated or misrepresented.
Famously, this was illustrated by the elaborate ‘chocolate hoax’ of 2014, in which a study suggesting chocolate could help you lose weight made global headlines. It only came to light later that, although it was based on a real study, it was a carefully designed sting by a collaborating journalist and scientist who wanted to expose how easily unscrupulous, poor-quality science goes unchecked.
Reading scientific studies is not simple – it is not easy for the lay person, the doctor, or even the scientist. It is essential to consider the credibility of the authors, the clinical relevance in a real-world setting, and who conducted and funded the study (i.e., whether there are any declarations of interest).
The summary of a study, which comes at the start of a paper, is known as the abstract. Most of the time, however, it is important to take a deeper dive into the paper itself – for health professionals and lay people alike, this is often laborious and complicated. An abstract can be misleading because it may not address how the study relates to previous research on the topic, or its context in terms of real-world use. Additionally, since it is meant to be a summary, the authors may intentionally omit the limitations of the experiment. If a study’s results have been replicated in previous work, however, they are more likely to be of genuine significance.
The Different Study Types:
- Evidence Summaries – These include the ‘Meta-analysis’ and the ‘Systematic Review’.
A Meta-analysis looks at all the available literature and brings the data on a topic together. By pooling the data, it can provide greater strength and statistical power to answer a research question. The disadvantage of performing a meta-analysis, however, is that it is time consuming and requires advanced statistical knowledge.
A Systematic Review presents an expert review of the available evidence on a topic in question. It is useful in areas of limited research but has the disadvantage of comparing studies which are designed differently.
- Experimental Studies – These include ‘Randomised Controlled Trials (RCTs)’ and ‘Non-randomised Controlled Trials’.
As the name suggests, RCTs randomise participants into either an intervention or a control group, whereas in a non-randomised trial participants are allocated by the researchers or by circumstance.
Randomised, double-blind, placebo-controlled trials, in which neither the participant nor the researcher is aware of which group is receiving the treatment, are considered the gold standard of medical research. It is worth noting, however, that these experimental designs are not always appropriate, depending on the question the study is trying to answer.
- Observational Studies – The two main types of observational studies are ‘Case Control Studies’ and ‘Cohort Studies’.
A Cohort Study follows a group of participants over a period of time and can in this way track habits and risk factors. It is often easier to set up than an RCT but has the disadvantage of taking many years to complete.
A Case Control Study compares the histories of groups with and without a specific disease, and can be useful for identifying risk factors. The disadvantage of these studies is that they may be unreliable due to recall bias – participants do not always remember their past exposures accurately.
Whilst a miracle cure being touted can make great headlines in the media, a single study is only one piece of the puzzle. It is the reproducibility of results that suggests reliability. As an example, if there is only one study suggesting that, let us say, eating blueberries daily decreases levels of the stress hormone cortisol, then 100% of the data suggests blueberries decrease cortisol. However, if twenty studies looked at the effect of blueberry consumption on cortisol and only one showed an effect, then 95% of the data (nineteen studies out of twenty) shows no significant effect.
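There is a simple reason why one lone positive study among twenty is no cause for excitement. As a rough sketch (assuming the conventional 5% significance threshold and treating each study as independent), a treatment with no real effect at all is still expected to produce about one “significant” result in every twenty studies:

```python
# Sketch of why one "positive" study in twenty is expected by chance.
# Assumes the conventional significance threshold (alpha = 0.05) and
# twenty independent studies of a treatment with NO real effect.
alpha = 0.05      # each null study has a 5% chance of a false positive
n_studies = 20

# Expected number of false positives across the twenty studies:
expected_false_positives = n_studies * alpha

# Chance that AT LEAST one of the twenty comes up "positive":
p_at_least_one = 1 - (1 - alpha) ** n_studies

print(expected_false_positives)   # 1.0
print(round(p_at_least_one, 2))   # 0.64
```

In other words, a single positive result among twenty studies is exactly what chance alone would predict, which is why replication matters more than any one headline finding.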
It is important to remember that in the health and supplement industry, ‘cherry-picking’ of studies is rife. If a company wants to sell a supplement, it will cite the study that showed a positive effect and ignore the many others that found none. To confuse matters, researchers may also cherry-pick, quoting only the papers that support their own conclusions on a topic.
A study may be long and information dense, but within it lie the key pieces of information that allow us to determine how reliable and (statistically) significant it is. First and foremost, this means the methodology and design of the trial – it is important to know the participants’ age, sex, lifestyle and health status, and how they were recruited into the trial. The larger the number of participants, the greater the scope for statistically significant results. The demographic of the participants also determines how applicable the findings are to individuals in the real world. As an example, if you are a South Asian woman but the study only recruited White Caucasian males, the results are less likely to be relevant to you. It is also worthwhile finding out whether any participants were excluded from the study, and for what reason – this can give a better insight into the robustness of the study. Confounding factors should also be considered – a confounder is a variable, other than the one being tested, that could influence the results. For example, in a study of the effect of resistance training on muscle mass, you would not want some participants taking muscle-building supplements and not others, as this would skew the results.
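To illustrate the confounding problem, here is a minimal, entirely hypothetical simulation – the effect sizes and group sizes are invented for illustration. If most of the training group also happens to take a muscle-building supplement while the control group does not, the apparent benefit of training is inflated by the supplement’s effect:

```python
import random

random.seed(7)

# Hypothetical numbers, invented for illustration: training alone adds
# +1.0 kg of muscle; the supplement adds a further +1.5 kg.
def muscle_gain(trains, takes_supplement):
    effect = (1.0 if trains else 0.0) + (1.5 if takes_supplement else 0.0)
    return effect + random.gauss(0, 0.5)  # individual variation

# Confounded design: 8 of the 10 training participants also take the
# supplement, while nobody in the control group does.
training_group = [muscle_gain(True, i < 8) for i in range(10)]
control_group = [muscle_gain(False, False) for _ in range(10)]

def mean(xs):
    return sum(xs) / len(xs)

apparent_effect = mean(training_group) - mean(control_group)
print(f"Apparent training effect: {apparent_effect:.1f} kg")
# The gap mixes the training effect with the supplement effect, so it
# overstates what training alone achieves (the true effect is +1.0 kg).
```

Because the supplement use was not balanced between the groups, the measured difference answers “what does training plus supplements do?” rather than the question the study set out to ask.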
The statistical analysis of a study is complex and requires a medical statistician to analyse all the data that has been collated. Following the analysis, a conclusion relating to the main hypothesis will be drawn, and it is important to establish how the authors arrived at it. The discussion which follows should outline the value of the work and compare it with previous research, as well as addressing the strengths and weaknesses of the study.
Conflicts of interest are normally noted after the discussion. They can occur when those who design or conduct the research have a specific reason – often financial gain – to find a certain result. This is not always clear cut, though, as each journal may have different criteria for what is deemed a conflict of interest, and certain scientific journals may themselves have conflicts of interest.
Confusing, huh? This is why it is important to treat attention-grabbing health claims in the media with scepticism. In fact, a survey which assessed the quality of evidence in UK national newspapers found that between 69% and 72% of health claims were based on poor quality or insufficient evidence.
Going through research papers is time consuming, and most healthcare professionals rely on scientific researchers to do the hard work for them. Nevertheless, when new ground-breaking research appears, we can delve deeper ourselves using a basic checklist to ask how significant the science truly is (as below).
What question was the study trying to answer?
Does the paper clearly describe the design of the study?
- What is the type of study? What was the duration? What was the main result being measured in the study to determine if a treatment worked?
If the paper is a trial, is it reproducible with information provided in the paper?
- Was the trial randomised? Was it blinded? What treatment did each group receive?
What demographic was studied?
- How were the participants recruited to the study? How many participants were there? Are the inclusion and exclusion criteria stated clearly?
What did the analysis show?
- Did the results illustrate a statistically significant difference?
- How many dropouts were in each group?
How relevant are the results to the real world?
- Based on demographics, which group do the results apply to?
- Is it realistic to give the intervention? e.g., is the dose realistic?
Were there any adverse effects?
Were there any sources of potential bias?
- Any conflicts of interest?
- Was the intervention followed?
- Was the study pre-registered to prevent the possibility of ‘data dredging’? This is when a data set is overanalysed to find correlations that occur solely by chance rather than representing a true causal relationship.
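Data dredging can be illustrated with a small, hypothetical simulation: if we measure enough unrelated variables, a handful will look ‘correlated’ with our outcome purely by chance. The numbers below are invented; with 30 participants, a correlation of roughly |r| > 0.36 corresponds to the conventional 5% significance level.

```python
import random

random.seed(1)

# Pure noise: one "outcome" and 100 unrelated "exposures"
# measured for 30 hypothetical participants.
outcome = [random.gauss(0, 1) for _ in range(30)]
exposures = [[random.gauss(0, 1) for _ in range(30)] for _ in range(100)]

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# "Dredge" the data: keep every exposure that crosses the approximate
# 5% significance cut-off for n = 30 (|r| > 0.36).
hits = [i for i, e in enumerate(exposures)
        if abs(correlation(e, outcome)) > 0.36]
print(f"{len(hits)} of 100 random variables look 'correlated' by chance")
```

On average about five of the hundred noise variables will clear the bar. Pre-registration forces researchers to name their hypothesis before looking at the data, so these chance hits cannot be passed off as genuine discoveries.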