Emphasize scientific literacy to better understand research, distinguish signal from noise, and critically evaluate studies for rigor and potential misrepresentation.
If you choose to engage with science, commit to rigorously reading scientific papers, understanding that it requires effort and attention to detail but improves with practice.
Exercise caution and critical thinking when consuming science information from social media or news, as reporters may lack the necessary analytical skills to accurately interpret studies.
Pay close attention to a study’s pre-registered primary and secondary outcomes, understanding that failing the primary outcome typically renders a study null, regardless of secondary findings.
Always differentiate between statistical significance (a study’s success in rejecting the null hypothesis) and clinical significance (whether the observed effect size is practically relevant or meaningful).
When evaluating study results, always demand to know the absolute risk alongside the relative risk, as relative risk alone can be misleading and insufficient for understanding true impact.
Calculate the Number Needed to Treat (NNT) by dividing one by the absolute risk reduction to understand how many people must be treated to prevent one event, informing the practical value of an intervention.
When evaluating interventions, prioritize those with a low Number Needed to Treat (NNT), generally below 100, as this indicates a more impactful and efficient treatment.
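The relationship between relative risk, absolute risk reduction, and NNT can be sketched with hypothetical numbers (the event rates below are invented for illustration, not taken from any real trial):

```python
# Hypothetical trial: event rates in the control and treatment groups.
control_event_rate = 0.04      # 4% of the control group had the event
treatment_event_rate = 0.02    # 2% of the treatment group had the event

# Relative risk alone sounds dramatic: "risk cut in half."
relative_risk = treatment_event_rate / control_event_rate

# Absolute risk reduction reveals the true impact: 2 percentage points.
absolute_risk_reduction = control_event_rate - treatment_event_rate

# NNT = 1 / ARR: how many people must be treated to prevent one event.
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Relative risk: {relative_risk:.2f}")                 # 0.50
print(f"Absolute risk reduction: {absolute_risk_reduction:.2%}")  # 2.00%
print(f"NNT: {number_needed_to_treat:.0f}")                  # 50
```

Here a "50% risk reduction" headline corresponds to treating 50 people to prevent one event, which is why both numbers matter.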
Do not accept a meta-analysis as gospel without examining each of its constitutive studies, as a meta-analysis of poor-quality studies will yield poor results.
Consider the size, duration, and patient population of a study to determine if its results are generalizable and relevant to your specific interests or patient context.
Always examine who funded a trial and the declared conflicts of interest of the authors, as these can subtly influence study design, reporting, or interpretation.
Understand that publication bias exists, where studies with negative or null results are less likely to be published, potentially skewing the available scientific literature.
Recognize that negative or null research findings are just as important as positive ones for the advancement of knowledge, as they prevent wasted effort and inform future research directions.
Advocate for and prioritize studies that are pre-registered on platforms like clinicaltrials.gov, as this practice makes it harder for investigators to withhold negative results and combats publication bias.
Consider publishing or seeking out “registered reports” where the study protocol is peer-reviewed and provisionally accepted before data collection, ensuring publication regardless of the outcome and combating bias.
When seeking scientific information, prioritize peer-reviewed publications as they represent the highest standard of vetting by experts in the field, unlike non-peer-reviewed content.
When a study, especially one with a null outcome, is presented, always question if it was adequately powered to detect a meaningful difference, as underpowered studies can miss real effects.
Understand that a p-value represents the probability of observing a result as extreme as, or more extreme than, the one observed, assuming the null hypothesis is true; it is not the probability that the hypothesis itself is true or false.
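The definition above can be made concrete with a coin-flip example: under the null hypothesis of a fair coin, the p-value is the probability of a result at least as extreme as the one observed. This is a minimal from-scratch sketch using only the standard library (the 60-heads-in-100-flips scenario is an invented example):

```python
from math import comb

def binomial_p_value(k, n, p=0.5):
    """Two-sided exact binomial p-value: the probability, under the null
    hypothesis (e.g., a fair coin), of an outcome at least as far from
    the expected count as the observed k heads in n flips."""
    observed_deviation = abs(k - n * p)
    total = 0.0
    for i in range(n + 1):
        if abs(i - n * p) >= observed_deviation:
            total += comb(n, i) * p**i * (1 - p) ** (n - i)
    return total

# Observing 60 heads in 100 flips of a presumed-fair coin:
p_val = binomial_p_value(60, 100)
print(f"p = {p_val:.3f}")  # roughly 0.057: suggestive, but not below 0.05
</n```

A result like this illustrates why p = 0.057 and p = 0.049 describe nearly identical evidence even though only one clears the conventional threshold.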
View confidence intervals as “uncertainty intervals” indicating the range in which the true population value plausibly lies, rather than as a 95% probability that any particular interval contains the true mean.
To quickly assess statistical significance, check if the confidence interval for a hazard or odds ratio crosses one; if it does, the result is not statistically significant.
Recognize that tighter confidence intervals indicate less uncertainty and more precision in the estimated effect, increasing confidence in the study’s findings.
Familiarize yourself with how to calculate and interpret hazard ratios (e.g., 0.82 means an 18% reduction, 2.2 means a 120% increase) to understand the temporal risk of events in clinical trials.
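The hazard-ratio arithmetic and the CI-crosses-one check from the points above can be combined into one small helper (the function name and the example ratios are illustrative, not from any specific trial):

```python
def interpret_hazard_ratio(hr, ci_low, ci_high):
    """Translate a hazard ratio into a percent change in risk and check
    whether its confidence interval crosses 1 (i.e., is not significant)."""
    significant = not (ci_low <= 1.0 <= ci_high)
    if hr < 1:
        effect = f"{(1 - hr) * 100:.0f}% reduction in risk"
    else:
        effect = f"{(hr - 1) * 100:.0f}% increase in risk"
    status = "statistically significant" if significant else "not statistically significant"
    return f"HR {hr} ({ci_low}-{ci_high}): {effect}, {status}"

print(interpret_hazard_ratio(0.82, 0.70, 0.96))  # 18% reduction, significant
print(interpret_hazard_ratio(2.2, 0.90, 5.40))   # 120% increase, but the CI crosses 1
```

Note how the second result sounds alarming as a headline number, yet the wide interval crossing 1 means the study could not rule out no effect at all.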
When evaluating observational studies, especially in health, be aware of the “healthy user bias” where people making one health-conscious choice often make many others, confounding results.
Be highly skeptical of nutritional epidemiology studies that rely on food frequency questionnaires due to their inherent clunkiness, recall bias, and the near impossibility of accurately remembering what one ate.
When interpreting observational studies, recognize that while patterns will be seen, establishing causality from these patterns is difficult and requires careful consideration.
Understand that observation itself can change behavior (the Hawthorne effect), meaning people may alter their actions simply because they know they are being watched or recorded.
When interpreting studies, especially observational ones, identify potential confounding variables (e.g., age, sex, smoking) that can affect results and obscure true causal relationships.
Be cautious when extrapolating results from studies conducted in homogeneous populations (e.g., men only) to broader, more heterogeneous populations (e.g., women), as utility may differ.
Recognize that while multi-site studies offer heterogeneity and generalizability, they are harder to control and can introduce bias if sites are not run consistently.
When evaluating a trial, pay close attention to the frequency, severity, and distribution of adverse events in all groups, not just the primary outcomes.
Be aware that clinical trials can be stopped prematurely for three main reasons: safety concerns, overwhelming benefit, or futility (no chance of finding a significant effect).
Recognize that journal prestige is often indicated by its impact factor, a yearly metric reflecting the average number of citations per article published in that journal over a given period.
When reading a scientific paper, start with the abstract to quickly determine if the paper’s content is relevant and interesting enough to warrant further reading.
Adjust your reading approach based on your familiarity with the subject matter; read the introduction if unfamiliar, but skip it if you already have a good grasp of the background.
After the abstract (and introduction if needed), go directly to the methods section to understand the study’s design, randomization, interventions, subject numbers, and specific procedures.
When reviewing the results, begin by examining the figures and tables along with their legends, as well-designed figures should be standalone and convey key findings concisely.
Read the discussion section last, after forming your own opinions on the study’s strengths, weaknesses, and remaining questions, to compare your thoughts with the authors’ interpretations.
Improve your ability to understand scientific literature through consistent repetition, such as reading a scientific paper every week.
When approaching scientific inquiry, begin by assuming no relationship between two phenomena (the null hypothesis) to frame your investigation cleanly.
To elegantly test a hypothesis, design your experiment as a randomized controlled trial, and blind it if possible, to minimize bias.
To enhance study rigor and minimize bias, implement double blinding where neither subjects nor investigators know who is receiving treatment or placebo; single blinding (subjects don’t know) is a minimum.
For experimental studies, prioritize rigorous randomization, as it is crucial for making sense of results and minimizing bias, even if non-randomized studies aren’t useless.
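One common way to implement rigorous randomization is block randomization, which keeps group sizes balanced as subjects enroll over time. This is a minimal sketch, assuming a two-arm trial and an invented subject-ID scheme:

```python
import random

def block_randomize(subject_ids, block_size=4, seed=None):
    """Block randomization: within each block, half the subjects are
    assigned to treatment and half to control, so the two arms stay
    balanced throughout enrollment. A fixed seed makes the allocation
    reproducible for auditing."""
    rng = random.Random(seed)
    assignments = {}
    for start in range(0, len(subject_ids), block_size):
        block = subject_ids[start:start + block_size]
        arms = ["treatment"] * (len(block) // 2)
        arms += ["control"] * (len(block) - len(block) // 2)
        rng.shuffle(arms)  # random order within the block
        assignments.update(zip(block, arms))
    return assignments

# Hypothetical enrollment of 12 subjects in blocks of 4:
groups = block_randomize([f"S{i:02d}" for i in range(1, 13)], block_size=4, seed=42)
```

In practice the allocation sequence would be generated and concealed by someone not involved in enrolling subjects, which is what preserves blinding.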
In lifestyle-based randomized controlled trials, ensure both treatment and control groups receive the exact same amount of attention, coaching, and advice to eliminate performance bias.
When designing experiments, adopt a rigorous approach aimed at falsifying your hypothesis, rather than solely seeking to confirm it, to ensure robust scientific inquiry.
Before conducting a study, determine the necessary number of subjects by performing a power analysis to ensure the experiment is adequately powered to detect a true difference.
Before conducting a study, define primary and secondary outcomes, get the protocol approved, develop a statistical plan, and pre-register the study to enhance transparency and rigor.
For any study involving human or animal subjects, secure Institutional Review Board (IRB) approval to ensure the ethical conduct of the study.
Ensure adequate funding is secured in parallel with study design and approval processes, as research requires financial resources.
Understand that individual case reports, while not generalizable, serve as valuable hypothesis-generating observations that can kickstart larger trials or research careers.
Do not conduct underpowered experiments, as they lack sufficient subjects to detect a real difference, often leading to null results and wasted effort.
Be cautious of overpowered studies, which may enroll more subjects than necessary and detect statistically significant but clinically irrelevant effects.
In study design, aim for 80% to 90% power (meaning a 10-20% false negative rate) to ensure the study has a high probability of detecting a true effect if one exists.
Recognize that studies designed to detect larger effect sizes (bigger differences between groups) require fewer subjects to achieve adequate statistical power.
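The power and sample-size points above can be sketched with a standard normal-approximation formula for comparing two proportions. The event rates below are invented for illustration; real power analyses would use dedicated software and account for dropout:

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group sample size needed to detect a difference
    between two event rates p1 and p2 (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # desired power (1 - false negative rate)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# A small effect (4% vs 2%) demands far more subjects than a large one (10% vs 2%):
print(sample_size_two_proportions(0.04, 0.02))  # on the order of ~1,100 per group
print(sample_size_two_proportions(0.10, 0.02))  # on the order of ~100-150 per group
```

This makes the tradeoff concrete: halving the expected effect size roughly quadruples the required enrollment, which is why underpowered trials of subtle effects so often come back null.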
When writing a scientific paper, start by creating the figures, tables, and their legends first, as this helps clarify the core findings and structure the rest of the manuscript.