The Peter Attia Drive

#269 - Good vs. bad science: how to read and understand scientific studies

Sep 4, 2023 · 1h 50m · 56 insights
<p><a href="https://peterattiamd.com/how-to-read-and-understand-scientific-studies/?utm_source=podcast-feed&amp;utm_medium=referral&amp;utm_campaign=230904-pod-/howtoreadandunderstandscientificstudies&amp;utm_content=230904-pod-howtoreadandunderstandscientificstudies-podfeed"> View the Show Notes Page for This Episode</a></p> <p><a href="https://peterattiamd.com/subscribe/?utm_source=podcast-feed&amp;utm_medium=referral&amp;utm_campaign=230904-pod-howtoreadandunderstandscientificstudies&amp;utm_content=230904-pod-howtoreadandunderstandscientificstudies-podfeed"> Become a Member to Receive Exclusive Content</a></p> <p><a href="https://peterattiamd.com/newsletter/?utm_source=podcast-feed&amp;utm_medium=referral&amp;utm_campaign=230904-pod-howtoreadandunderstandscientificstudies&amp;utm_content=230904-pod-howtoreadandunderstandscientificstudies-podfeed"> Sign Up to Receive Peter's Weekly Newsletter</a></p> <p>This special episode is a rebroadcast of AMA #30, now made available to everyone, in which Peter and Bob Kaplan dive deep into all things related to studying studies to help one sift through the noise to find the signal. They define various types of studies, how a study progresses from idea to execution, and how to identify study strengths and limitations. They explain how clinical trials work, as well as biases and common pitfalls to watch out for. They dig into key factors that contribute to the rigor (or lack thereof) of an experiment, and they discuss how to measure effect size, differentiate relative risk from absolute risk, and what it really means when a study is statistically significant. 
Finally, Peter lays out his personal process when reading through scientific papers.</p> <p><strong>We discuss:</strong></p> <ul type="disc"> <li>The ever-changing landscape of scientific literature [2:30];</li> <li>The process for a study to progress from idea to design to execution [5:00];</li> <li>Various types of studies and how they differ [8:00];</li> <li>The different phases of clinical trials [19:45];</li> <li>Observational studies and the potential for bias [27:00];</li> <li>Experimental studies: randomization, blinding, and other factors that make or break a study [44:30];</li> <li>Power, p-values, and statistical significance [56:45];</li> <li>Measuring effect size: relative risk vs. absolute risk, hazard ratios, and "number needed to treat" [1:08:15];</li> <li>How to interpret confidence intervals [1:18:00];</li> <li>Why a study might be stopped before its completion [1:24:00];</li> <li>Why only a fraction of studies are ever published and how to combat publication bias [1:32:00];</li> <li>Why certain journals are more respected than others [1:41:00];</li> <li>Peter's process when reading a scientific paper [1:44:15]; and</li> <li>More.</li> </ul> <p>Connect With Peter on <a href="https://twitter.com/PeterAttiaMD">Twitter</a>, <a href="https://www.instagram.com/peterattiamd/">Instagram</a>, <a href="https://www.facebook.com/peterattiamd/">Facebook</a> and <a href="https://www.youtube.com/channel/UC8kGsMa0LygSX9nkBcBH1Sg">YouTube</a></p>
Actionable Insights

1. Develop Scientific Literacy

Emphasize scientific literacy to better understand research, distinguish signal from noise, and critically evaluate studies for rigor and potential misrepresentation.

2. Commit to Rigorous Paper Reading

If you choose to engage with science, commit to rigorously reading scientific papers, understanding that doing so requires effort and attention to detail but improves with practice.

3. Critically Evaluate Media Science Reports

Exercise caution and critical thinking when consuming science information from social media or news, as reporters may lack the necessary analytical skills to accurately interpret studies.

4. Distinguish Primary from Secondary Outcomes

Pay close attention to a study’s pre-registered primary and secondary outcomes, understanding that failing the primary outcome typically renders a study null, regardless of secondary findings.

5. Clinical vs. Statistical Significance

Always differentiate between statistical significance (a study’s success in rejecting the null hypothesis) and clinical significance (whether the observed effect size is practically relevant or meaningful).

6. Demand Absolute Risk Data

When evaluating study results, always demand to know the absolute risk alongside the relative risk, as relative risk alone can be misleading and insufficient for understanding true impact.

7. Utilize Number Needed to Treat (NNT)

Calculate the Number Needed to Treat (NNT) by dividing one by the absolute risk reduction to understand how many people must be treated to prevent one event, informing the practical value of an intervention.
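The arithmetic behind insights 6 and 7 can be sketched in a few lines. The event rates below are hypothetical, chosen only for illustration: a treatment that cuts absolute risk from 2% to 1% halves relative risk (a headline-friendly "50% reduction") yet still requires treating 100 people to prevent one event.

```python
# Hypothetical event rates: treatment cuts 10-year event risk from 2.0% to 1.0%.
control_risk = 0.020    # absolute risk of an event in the control group
treatment_risk = 0.010  # absolute risk in the treatment group

relative_risk = treatment_risk / control_risk            # 0.5 -> "50% reduction"
absolute_risk_reduction = control_risk - treatment_risk  # 0.01 -> 1 percentage point
nnt = 1 / absolute_risk_reduction                        # people treated per event prevented

print(f"Relative risk: {relative_risk:.2f}")
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")
print(f"Number needed to treat: {nnt:.0f}")
```

Note that this example lands exactly at the NNT-of-100 threshold mentioned in insight 8: the dramatic-sounding relative risk reduction masks a modest absolute benefit.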

8. Prioritize Low NNT Interventions

When evaluating interventions, prioritize those with a low Number Needed to Treat (NNT), generally below 100, as this indicates a more impactful and efficient treatment.

9. Scrutinize Meta-Analysis Components

Do not accept a meta-analysis as gospel without examining each of its constituent studies, as a meta-analysis of poor-quality studies will yield poor results.

10. Evaluate Study Generalizability

Consider the size, duration, and patient population of a study to determine if its results are generalizable and relevant to your specific interests or patient context.

11. Check Funding & Conflicts of Interest

Always examine who funded a trial and the declared conflicts of interest of the authors, as these can subtly influence study design, reporting, or interpretation.

12. Recognize Publication Bias

Understand that publication bias exists, where studies with negative or null results are less likely to be published, potentially skewing the available scientific literature.

13. Value Negative Research Findings

Recognize that negative or null research findings are just as important as positive ones for the advancement of knowledge, as they prevent wasted effort and inform future research directions.

14. Support Study Pre-Registration

Advocate for and prioritize studies that are pre-registered on platforms like clinicaltrials.gov, as this practice makes it harder for investigators to withhold negative results and combats publication bias.

15. Use Registered Reports for Unbiased Publication

Consider publishing or seeking out “registered reports” where the study protocol is peer-reviewed and provisionally accepted before data collection, ensuring publication regardless of the outcome and combating bias.

16. Prioritize Peer-Reviewed Research

When seeking scientific information, prioritize peer-reviewed publications as they represent the highest standard of vetting by experts in the field, unlike non-peer-reviewed content.

17. Confirm Adequate Study Power

When a study, especially one with a null outcome, is presented, always question if it was adequately powered to detect a meaningful difference, as underpowered studies can miss real effects.

18. Understand P-Value as False Positive Rate

Understand that a p-value represents the probability of observing a result as extreme as, or more extreme than, the one observed, assuming the null hypothesis is true; in effect, it captures how often chance alone would produce a false positive.
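A small simulation makes this definition concrete. The setup below is an illustrative sketch, not from the episode: many two-group "studies" are simulated in which the null hypothesis is true by construction (both groups drawn from the same distribution), and roughly 5% of them nonetheless cross p < 0.05 purely by chance.

```python
import math
import random

# Simulate many two-group "studies" where the null hypothesis is TRUE:
# both groups come from the same standard normal distribution.
# Under the null, p-values are uniform, so ~5% should fall below 0.05.
random.seed(0)
n_per_group, n_studies = 30, 10_000
false_positives = 0

for _ in range(n_studies):
    a = [random.gauss(0, 1) for _ in range(n_per_group)]
    b = [random.gauss(0, 1) for _ in range(n_per_group)]
    # Two-sample z statistic (variance known to be 1, for simplicity)
    z = (sum(a) / n_per_group - sum(b) / n_per_group) / math.sqrt(2 / n_per_group)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    if p < 0.05:
        false_positives += 1

print(f"Fraction with p < 0.05: {false_positives / n_studies:.3f}")  # close to 0.05
```

The fraction of spurious "significant" results hovers near the alpha level of 0.05, which is exactly what the definition predicts.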

19. Understand Confidence Intervals as Uncertainty

View confidence intervals as “uncertainty intervals” that indicate the range where the true population statistic likely lies, rather than a strict probability of containing the true mean.

20. Check Confidence Interval for Unity

To quickly assess statistical significance, check if the confidence interval for a hazard or odds ratio crosses one; if it does, the result is not statistically significant.
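The unity check above is mechanical enough to express as a one-line function. The hazard-ratio intervals below are hypothetical examples, not figures from the episode:

```python
def significant_at_unity(ci_lower, ci_upper):
    """A ratio measure (hazard or odds ratio) is statistically significant
    only if its confidence interval does not cross 1."""
    return not (ci_lower <= 1.0 <= ci_upper)

# Hypothetical 95% confidence intervals for a hazard ratio:
print(significant_at_unity(0.70, 0.95))  # True: entire CI below 1, significant
print(significant_at_unity(0.82, 1.10))  # False: CI crosses 1, not significant
```

The same logic applies to difference measures, except the threshold is zero rather than one.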

21. Value Tighter Confidence Intervals

Recognize that tighter confidence intervals indicate less uncertainty and more precision in the estimated effect, increasing confidence in the study’s findings.

22. Interpret Hazard Ratios

Familiarize yourself with how to calculate and interpret hazard ratios (e.g., 0.82 means an 18% reduction, 2.2 means a 120% increase) to understand the temporal risk of events in clinical trials.
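The percent-change reading of a hazard ratio follows directly from subtracting one. A minimal helper, using the two example ratios above:

```python
def describe_hazard_ratio(hr):
    """Express a hazard ratio as a percent change in risk: HR - 1, as a percentage."""
    pct = (hr - 1) * 100
    if hr < 1:
        return f"{-pct:.0f}% reduction in risk"
    return f"{pct:.0f}% increase in risk"

print(describe_hazard_ratio(0.82))  # 18% reduction in risk
print(describe_hazard_ratio(2.2))   # 120% increase in risk
```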

23. Beware Healthy User Bias

When evaluating observational studies, especially in health, be aware of the “healthy user bias” where people making one health-conscious choice often make many others, confounding results.

24. Distrust Food Frequency Questionnaires

Be highly skeptical of nutritional epidemiology studies that rely on food frequency questionnaires due to their inherent imprecision, recall bias, and the near impossibility of accurately recalling what one ate.

25. Be Cautious with Causality

When interpreting observational studies, recognize that while patterns will be seen, establishing causality from these patterns is difficult and requires careful consideration.

26. Acknowledge Hawthorne Effect

Understand that observation itself can change behavior (the Hawthorne effect), meaning people may alter their actions simply because they know they are being watched or recorded.

27. Account for Confounding Variables

When interpreting studies, especially observational ones, identify potential confounding variables (e.g., age, sex, smoking) that can affect results and obscure true causal relationships.

28. Caution with Homogeneous Extrapolation

Be cautious when extrapolating results from studies conducted in homogeneous populations (e.g., men only) to broader, more heterogeneous populations (e.g., women), as the findings may not apply equally.

29. Understand Multi-Site Study Trade-offs

Recognize that while multi-site studies offer heterogeneity and generalizability, they are harder to control and can introduce bias if sites are not run consistently.

30. Analyze Adverse Events Thoroughly

When evaluating a trial, pay close attention to the frequency, severity, and distribution of adverse events in all groups, not just the primary outcomes.

31. Know Reasons for Early Study Stops

Be aware that clinical trials can be stopped prematurely for three main reasons: safety concerns, overwhelming benefit, or futility (no chance of finding a significant effect).

32. Understand Journal Impact Factor

Recognize that journal prestige is often indicated by its impact factor, a yearly metric reflecting the average number of citations per article published in that journal over a given period.

33. Read Abstract First

When reading a scientific paper, start with the abstract to quickly determine if the paper’s content is relevant and interesting enough to warrant further reading.

34. Tailor Reading to Familiarity

Adjust your reading approach based on your familiarity with the subject matter; read the introduction if unfamiliar, but skip it if you already have a good grasp of the background.

35. Scrutinize Methods Section

After the abstract (and introduction if needed), go directly to the methods section to understand the study’s design, randomization, interventions, subject numbers, and specific procedures.

36. Start Results with Figures/Legends

When reviewing the results, begin by examining the figures and tables along with their legends, as well-designed figures should be standalone and convey key findings concisely.

37. Read Discussion Last

Read the discussion section last, after forming your own opinions on the study’s strengths, weaknesses, and remaining questions, to compare your thoughts with the authors’ interpretations.

38. Practice Regular Paper Reading

Improve your ability to understand scientific literature through consistent repetition, such as reading a scientific paper every week.

39. Start with Null Hypothesis

When approaching scientific inquiry, begin by assuming no relationship between two phenomena (the null hypothesis) to frame your investigation cleanly.

40. Use Randomized Controlled Experiments

To elegantly test a hypothesis, design your experiment as a randomized controlled trial, and blind it if possible, to minimize bias.

41. Implement Double Blinding for Rigor

To enhance study rigor and minimize bias, implement double blinding where neither subjects nor investigators know who is receiving treatment or placebo; single blinding (subjects don’t know) is a minimum.

42. Ensure Rigorous Randomization

For experimental studies, prioritize rigorous randomization, as it is crucial for making sense of results and minimizing bias, even if non-randomized studies aren’t useless.

43. Mitigate Performance Bias in RCTs

In lifestyle-based randomized controlled trials, ensure both treatment and control groups receive the exact same amount of attention, coaching, and advice to eliminate performance bias.

44. Aim to Falsify Hypotheses

When designing experiments, adopt a rigorous approach aimed at falsifying your hypothesis, rather than solely seeking to confirm it, to ensure robust scientific inquiry.

45. Perform Power Analysis

Before conducting a study, determine the necessary number of subjects by performing a power analysis to ensure the experiment is adequately powered to detect a true difference.

46. Pre-Register Study Protocols

Before conducting a study, define primary and secondary outcomes, get the protocol approved, develop a statistical plan, and pre-register the study to enhance transparency and rigor.

47. Secure IRB Approval

For any study involving human or animal subjects, secure Institutional Review Board (IRB) approval to ensure the ethical conduct of the study.

48. Secure Research Funding

Ensure adequate funding is secured in parallel with study design and approval processes, as research requires financial resources.

49. Value Case Reports for Hypotheses

Understand that individual case reports, while not generalizable, serve as valuable hypothesis-generating observations that can kickstart larger trials or research careers.

50. Avoid Underpowered Studies

Do not conduct underpowered experiments, as they lack sufficient subjects to detect a real difference, often leading to null results and wasted effort.

51. Avoid Overpowered Studies

Be cautious of overpowered studies, which may enroll more subjects than necessary and detect statistically significant but clinically irrelevant effects.

52. Target 80-90% Study Power

In study design, aim for 80% to 90% power (meaning a 10-20% false negative rate) to ensure the study has a high probability of detecting a true effect if one exists.

53. Larger Effects Need Fewer Subjects

Recognize that studies designed to detect larger effect sizes (bigger differences between groups) require fewer subjects to achieve adequate statistical power.
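The relationship between effect size and required sample size can be sketched with the standard normal-approximation formula for a two-group comparison, n per group ≈ 2·((z_α/2 + z_β)/d)², where d is the standardized effect size. The z-values below are hardcoded for a two-sided alpha of 0.05 and 80% power; this is a rough planning sketch, not a substitute for a proper power analysis.

```python
import math

# Normal-approximation sample size per group for a two-sample comparison:
# n ≈ 2 * ((z_alpha/2 + z_beta) / d)^2, where d is the standardized effect size.
Z_ALPHA_2 = 1.96   # two-sided alpha = 0.05
Z_BETA = 0.8416    # power = 0.80 (beta = 0.20)

def n_per_group(effect_size, z_alpha_2=Z_ALPHA_2, z_beta=Z_BETA):
    return math.ceil(2 * ((z_alpha_2 + z_beta) / effect_size) ** 2)

# Larger effects need far fewer subjects at the same 80% power:
for d in (0.2, 0.5, 0.8):  # conventionally "small", "medium", "large" effects
    print(f"effect size {d}: ~{n_per_group(d)} per group")
```

Detecting a small effect (d = 0.2) takes roughly 393 subjects per group, while a large effect (d = 0.8) needs only about 25, which is why trials chasing subtle benefits must be so much bigger.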

54. Prioritize Figures in Scientific Writing

When writing a scientific paper, start by creating the figures, tables, and their legends first, as this helps clarify the core findings and structure the rest of the manuscript.

55. Support Ad-Free Content

If you value content provided without paid ads, consider becoming a member to support the creators, as their work is often made possible by members.

56. Deepen Knowledge with Premium Membership

To take your knowledge of health and wellness to the next level, consider a premium membership, which aims to provide members with much more value than the subscription price.