Tuesday, June 10, 2025

Does Pfizer COVID‑19 Vaccine Really Raise Mortality in Florida?

 

Why This Matters

In April 2025, Florida’s Department of Health posted a medRxiv preprint claiming that adults who received the Pfizer‑BioNTech COVID‑19 vaccine had 38 % higher 12‑month all‑cause mortality than matched adults who received the Moderna vaccine. The first author is Joseph A. Ladapo, Florida’s Surgeon General and a tenured professor at the University of Florida - an unusual dual role that journalists and UF faculty have flagged for potential conflicts of interest and irregular research oversight.
Having a political appointment or prior views does not automatically invalidate anyone’s science, especially when conflicts are properly disclosed, as they were here. Still, such dual roles mean readers should examine the work with extra care. Let’s do that.

1 · What the Study Tried to Show

The authors extracted Florida immunization and death-certificate data for 9.16 million adults who completed a two-dose mRNA series between December 2020 and August 2021. They then created one-to-one "exact match" pairs between Pfizer and Moderna recipients based on seven characteristics: age band, sex, race, ethnicity, month of vaccination, vaccination-site type, and census tract. After matching, only 735,050 people in each group remained - just 16 percent of the total dataset. They then counted any death within 12 months of the second dose. Result: 1.0% of the Pfizer group died, versus 0.7% of the Moderna group - an odds ratio of 1.38. The authors used homicide and suicide deaths as a "negative control" and, finding no difference, claimed minimal residual bias.
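For readers who want to see how the headline figure is constructed, here is a minimal sketch of the odds-ratio arithmetic, using the rounded percentages and matched-cohort size quoted above. The preprint’s exact death counts are not reproduced here, so the result only approximates the published 1.38.

```python
# Minimal sketch: odds ratio from the rounded figures quoted above.
# Exact death counts are not given here, so this only approximates the
# preprint's reported OR of 1.38 (rounding of 1.0 % / 0.7 % explains the gap).

n_per_group = 735_050                 # matched people in each vaccine group
p_pfizer, p_moderna = 0.010, 0.007    # reported 12-month all-cause mortality

deaths_pfizer = p_pfizer * n_per_group
deaths_moderna = p_moderna * n_per_group

odds_pfizer = p_pfizer / (1 - p_pfizer)
odds_moderna = p_moderna / (1 - p_moderna)
odds_ratio = odds_pfizer / odds_moderna

print(f"Deaths (Pfizer):  ~{deaths_pfizer:,.0f}")
print(f"Deaths (Moderna): ~{deaths_moderna:,.0f}")
print(f"Odds ratio: {odds_ratio:.2f}   # ~1.4 with rounded inputs")
```

Note also that on a ~1 % baseline, an odds ratio of 1.38 corresponds to an absolute difference of roughly 0.3 percentage points, which is worth keeping in mind when weighing the relative figure.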

At first glance, this setup looks standard - no major methodological red flags in terms of how the data are matched or how outcomes are defined. The authors clearly know how to conduct this type of analysis and structure a health data study. However, the implications of a claim that one vaccine brand is associated with significantly higher mortality are extremely serious. This is where a familiar principle applies: extraordinary claims require extraordinary evidence. When the consequences of a finding could affect vaccine confidence or public health decisions at scale, even subtle limitations or untested assumptions need to be evaluated with care.

2 · Confounding, Matching, and What Was Left Out

When comparing vaccine outcomes, it’s essential to consider confounding factors - hidden differences between groups that might affect the outcome. In this case, important potential confounders include chronic illness, frailty, history of COVID infection, the timing and uptake of booster doses, and socioeconomic status differences that are not fully captured by census-tract averages. If one group had more people with chronic conditions or was vaccinated earlier when COVID risk was higher, their baseline risk of death would naturally be higher, regardless of which vaccine they received.

The authors addressed this by using exact matching on seven factors: age band, sex, race, ethnicity, month of vaccination, type of vaccination site, and census tract. This is a reasonable approach, and the list is not short. However, they did not include many medically relevant variables such as comorbidities, prior infections, disability status, or whether someone later received a booster. These omissions matter, especially when studying all-cause mortality, because they leave the door open for major hidden biases.

Moreover, the matching procedure led to the exclusion of 84 % of vaccinated individuals. This extreme data loss suggests that the original Pfizer and Moderna groups were not very similar - if they had been, most people could have been matched. And within that subset, matching still has a fundamental limitation: it balances what’s measured, but not what’s unmeasured. If unmeasured differences (like health status) strongly affect mortality, then even perfectly matched groups can still produce misleading comparisons.

A practical analogy: imagine comparing student performance between public and private schools. You only keep the students who match exactly on age, parental income, and neighborhood. You discard 84 % of your data. The students who remain might be comparable on paper, but if you didn’t account for factors like parental education, special needs, or prior academic history, your results can still be skewed. The same holds here.
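To put numbers on this, here is a small illustrative simulation with entirely hypothetical figures (not the study’s data): two groups are exactly matched on a measured covariate, yet an unmeasured frailty factor that is unevenly distributed between them still produces a mortality gap that looks like a brand effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # hypothetical matched pairs; not the study's data

# Each pair is exactly matched on a measured covariate (an age band)...
age_band = rng.integers(0, 6, size=n)

# ...but an unmeasured frailty factor is unevenly distributed between groups
frail_a = rng.random(n) < 0.20   # hypothetical: 20 % frail in group A
frail_b = rng.random(n) < 0.10   # hypothetical: 10 % frail in group B

def mortality(age_band, frail, rng):
    # Hypothetical 12-month death risk: rises with age, five-fold higher if frail
    base = 0.002 + 0.003 * age_band
    risk = np.where(frail, 5 * base, base)
    return rng.random(len(age_band)) < risk

rate_a = mortality(age_band, frail_a, rng).mean()
rate_b = mortality(age_band, frail_b, rng).mean()
print(f"Group A mortality: {rate_a:.4f}")
print(f"Group B mortality: {rate_b:.4f}")
print(f"Apparent 'effect' (risk ratio): {rate_a / rate_b:.2f}")
# Both groups are perfectly matched on age, yet the unmeasured frailty
# imbalance alone produces an apparent excess mortality in group A.
```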

While the authors employed an accepted statistical tool, the way it was applied, combined with what it left out, raises genuine concerns about whether the groups were truly comparable in the ways that matter for mortality.

3 · Weak “Negative Controls”

The authors likely understood that their matching approach might not fully account for all relevant differences between the Pfizer and Moderna groups. To address this, they included a so-called "negative control" outcome - deaths from homicide and suicide - under the logic that these types of deaths should not be affected by vaccine brand. If the rates of homicide and suicide were similar across groups, the thinking goes, then the groups must have been sufficiently balanced.

But this control is weak, for two main reasons. First, these are rare events: in a middle-aged-plus cohort, violent deaths are infrequent, and random variation easily hides moderate imbalances. Second, and more importantly, the risk factors that drive all-cause mortality, like heart disease or immunosuppression, have almost nothing to do with whether someone is murdered or dies by suicide. Equal homicide rates, therefore, tell us very little about whether the two groups were truly similar in terms of underlying health.
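A rough back-of-the-envelope power calculation makes the first point concrete. The event rate below is an illustrative assumption on the order of typical adult homicide-plus-suicide rates, not a figure from the preprint; even with about 735,000 people per arm, a 20 % relative imbalance in such a rare outcome would usually escape detection.

```python
from math import sqrt
from scipy.stats import norm

n = 735_050                 # people per matched group
p1 = 20 / 100_000           # assumed annual homicide+suicide rate (illustrative)
p2 = 1.2 * p1               # a 20 % relative imbalance between the groups

# Normal-approximation power for a two-sided two-proportion test at alpha = 0.05
se = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
z = abs(p2 - p1) / se
z_crit = norm.ppf(0.975)
power = norm.sf(z_crit - z) + norm.cdf(-z_crit - z)

print(f"Expected events: ~{p1 * n:.0f} vs ~{p2 * n:.0f}")
print(f"Power to detect a 20 % imbalance: {power:.0%}")
# Under these assumptions the power is well below 50 %, so a 'null' result on
# homicide/suicide says little about whether the groups were comparable.
```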

A better negative control would have been short-term hospitalisations - events that occur more frequently and correlate more closely with general health status. As it stands, the chosen control offers limited reassurance that bias was adequately addressed.

4 · Ignoring Florida’s Roll‑Out Strategy

Florida’s “Seniors First” policy (December 2020) sent Pfizer to hospitals and nursing homes first, while Moderna scaled up in county clinics weeks later. This policy led to a significant practical difference: older, frailer adults were more likely to receive Pfizer during the deadliest phase of the pandemic, while relatively healthier individuals received Moderna later under different conditions.

This real-world vaccine deployment context fits precisely with the problem described earlier: if the original populations who received Pfizer and Moderna were very different - as suggested by the need to exclude 84% of people during matching - then statistical comparisons become vulnerable to bias. Matching cannot fully correct for systematic differences in how the vaccines were offered to different groups. When the rollout strategy and clinical context have already selected for different risk profiles, observed outcome differences may reflect those selection effects more than any intrinsic property of the vaccines themselves.

5 · No Unvaccinated Baseline

One further omission stands out: the complete exclusion of an unvaccinated comparison group. The authors, who clearly understand how to conduct complex observational research, made a deliberate choice not to include what would seem like the most obvious and relevant baseline - how mortality in vaccinated individuals compares to those who received no vaccine at all. That decision is striking. Including an unvaccinated group would have provided context for whether the mortality observed in either the Pfizer or Moderna cohorts was elevated or suppressed relative to background risk. Without it, readers are left with a relative comparison that may overemphasise internal differences without showing the net benefit, or lack thereof, compared to being unvaccinated. In studies where public messaging and policy may be influenced, that missing reference point matters.

6 · Key Points for Readers

  • Matching is useful, but losing most of the data is a warning sign.
  • Unmeasured confounders can fully explain observed gaps.
  • Negative controls must be relevant and common enough to be informative.
  • Findings should fit the broader evidence landscape.
  • Extraordinary claims need robust, transparent analysis.

Public‑health guidance should continue to rest on the convergence of multiple rigorous studies, not on a single un‑reviewed analysis that slices away most of the available data.

Sunday, May 25, 2025

Vaccines, fertility, and a cargo cult

I originally planned to write another response to the reaction of SMIS to this article, but I realised there's no point in arguing with a cult. Instead, I have decided to zoom out and take a broader look at what SMIS represents. This is not just about one very sad group. It's about anyone trying to sell you a simple explanation or a quick “debunk” of something that’s deeply complex.

Wolfgang Pauli, the physicist, once dismissed a confused paper with the words "Das ist nicht einmal falsch" – "That’s not even wrong." It couldn’t be disproven because it didn’t make enough sense to test in the first place. And that’s the trouble with trying to argue with a cult-like mindset – one that confuses surface rituals of science for the real thing. 

Richard Feynman described a concept called "cargo cult science." After World War II, islanders in the Pacific built bamboo replicas of airstrips, control towers and radios, hoping it would bring back the American planes filled with goods. They copied the form, but not the substance, and the planes obviously never returned.

SMIS isn't just building symbolic airstrips. They're also building their own airplanes. They publish plots about “collapsing fertility among vaccinated women,” draw naive curves from public datasets, and declare them proof of something extraordinary - something that others might even be trying to hide. But like the islanders, they lack the aerodynamics (a testable hypothesis), the physics (proper depth of understanding), and the electronics (statistics).

A line on a graph is followed by sweeping claims about health risks, censorship, and betrayal by science. When someone points out missing context or alternative explanations, like economic instability, the response is that “we used real population data”, and thus, statistics are unnecessary. And when six editorial boards reject their manuscript before peer review, they don't reflect on their methods – they suspect conspiracy.

It’s not ignorance. Many people behind SMIS are educated. One works in an IVF clinic and calls herself an immunologist, another has a math degree, another leads a national lab for arboviruses, one is a former veterinarian and a pharma employee turned taxi driver, and another studied metallurgy but now claims expertise in virus origins.

What’s missing is not intellect but an awareness of their limits. Real science is slow, uncertain, and messy. SMIS replaces it with a simplified story crafted to go viral. Maybe it’s frustration from unfulfilled academic careers, maybe it’s the comfort of thinking the world is secretly simple and controlled. I don’t know. But the planes they’re waiting for still haven’t landed, ... and they never will.

Saturday, May 17, 2025

Study About Nucleic Acid Content in COVID Vaccines: Methodological Gaps and Public‑Health Risks

The paper was published in the Journal of Angiology & Vascular Surgery, an outlet of Herald Scholarly Open Access, a publisher that appears on Beall’s list of potential predatory publishers and is not indexed in the usual scientific databases.

It was accepted 11 days after submission (24 April → 5 May 2025), an interval barely long enough for routine peer‑review.

The senior author, Richard M. Fleming, is a cardiologist‑turned‑activist who lost his medical licence after felony health‑care‑fraud convictions and is currently debarred from FDA‑regulated research. Lead Slovak co‑author Peter Kotlár holds a political appointment investigating his country’s COVID response. None of these biographies, or their obvious interests in vaccine controversy, appear in the conflict‑of‑interest statement.

Those red flags would make most editors skeptical, but they are secondary if the data themselves are solid. Below is where the work falters on its own technical terms.


Nobody knows how the vials were handled

The authors never document how the Moderna and Pfizer lots were stored or shipped before analysis. Were they kept continuously at –20 °C/–80 °C, parked in a domestic freezer, or bounced around at room temperature? Without that record, we cannot distinguish genuine manufacturing variability from nucleic‑acid degradation (or aggregation) that occurred after the vials left the factory. In plain language: nobody knows where these samples had been rolling around.

Their “28 % variability” cannot coexist with their own qPCR plots

Early in the Results, the paper’s scatter plots show qPCR signals spanning up to an order of magnitude - a 100 % to 1 000 % spread.
Yet the Conclusion claims “a 28 % difference in nucleic‑acid content between lots.” These two claims cannot both be right: variability is either about 28 % or it is 100–1 000 %, but never both at once.

If the assay really produced ten‑fold swings, referring to them as “28 %” is a mathematical mismatch. Conversely, if genuine lot‑to‑lot variability were only 28 %, the ten‑fold points must be laboratory artefacts.
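Under the standard assumption of roughly 100 % amplification efficiency, fold-change and Cq differences are related by fold = 2^ΔCq. A short sketch (with illustrative numbers, not values taken from the paper) shows why 28 % and ten-fold sit on very different parts of that scale:

```python
from math import log2

def fold_to_delta_cq(fold):
    """Cq difference implied by a fold-change, assuming ~100 % PCR efficiency."""
    return log2(fold)

def delta_cq_to_percent(delta_cq):
    """Percent difference implied by a Cq difference."""
    return (2 ** delta_cq - 1) * 100

# A 28 % difference between lots corresponds to a tiny Cq shift...
print(f"28 % difference    -> dCq ~ {fold_to_delta_cq(1.28):.2f} cycles")

# ...whereas the ten-fold swings described in the Results need ~3.3 cycles.
print(f"10-fold difference -> dCq ~ {fold_to_delta_cq(10):.2f} cycles")
print(f"dCq of 3.3 cycles  -> {delta_cq_to_percent(3.3):.0f} % difference")
```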

The extraction protocol was probably saturated

Commercial mRNA vaccines are formulated at roughly 100 µg RNA mL⁻¹. The kit the authors used - the Qiagen AllPrep DNA/RNA Mini - is rated by the manufacturer for a maximum binding capacity of 100 µg total nucleic acid per column and is considered linear only up to about 50 µg. A direct 0.5 mL load therefore feeds ~50 µg of RNA (plus any DNA) onto a membrane that is already at the top of its specification. Once a column sits near saturation, small differences in clogging or breakthrough liquid can masquerade as big lot‑to‑lot swings even when the true input is identical. The paper shows no spike‑recovery or dilution‑linearity test to confirm that the extraction remained quantitative.

RT‑qPCR reaction was likely saturated

The one‑step RT‑qPCR step was almost certainly overloaded, not merely close to the ceiling. Each extraction began with 500 µL of vaccine (≈100 µg RNA mL⁻¹, i.e. ~50 µg total), which the Qiagen column eluted in just 50 µL - an effective ten‑fold concentrate of about 1 µg RNA per microlitre. The authors then pipetted 4 µL of this eluate into a 20 µL reverse‑transcription reaction, delivering roughly 4 µg RNA to an enzyme mix whose linear range usually tops out near 1 µg. Such template overload can inhibit reverse transcriptase and ultimately underestimate RNA concentrations.
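The arithmetic behind the two saturation arguments above is a simple mass balance, laid out below using the concentrations and kit limits quoted in the preceding paragraphs. It assumes complete recovery at each step, which a real workflow will not quite achieve.

```python
# Mass balance for the extraction and RT steps, using the figures quoted above.
# Assumes complete recovery at each step (real recovery would be somewhat lower).

vaccine_rna_conc = 100        # µg RNA per mL (typical mRNA vaccine formulation)
input_volume_ml = 0.5         # mL of vaccine loaded per extraction

rna_loaded_ug = vaccine_rna_conc * input_volume_ml         # ~50 µg onto the column
column_capacity_ug = 100      # stated maximum binding capacity of the column
column_linear_ug = 50         # approximate upper end of the linear range

elution_volume_ul = 50        # µL of eluate
eluate_conc_ug_per_ul = rna_loaded_ug / elution_volume_ul  # ~1 µg/µL

rt_input_volume_ul = 4        # µL of eluate pipetted into the RT reaction
rt_input_ug = rt_input_volume_ul * eluate_conc_ug_per_ul   # ~4 µg
rt_linear_limit_ug = 1        # typical linear range of one-step RT mixes

print(f"RNA loaded on column: ~{rna_loaded_ug:.0f} µg "
      f"(capacity {column_capacity_ug} µg, linear to ~{column_linear_ug} µg)")
print(f"Eluate concentration: ~{eluate_conc_ug_per_ul:.1f} µg/µL")
print(f"RNA fed into RT: ~{rt_input_ug:.0f} µg "
      f"(vs ~{rt_linear_limit_ug} µg typical linear limit)")
```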

No positive or negative controls

The study ran without the single most critical control in extraction‑based quantitation: a positive spike‑in of known copy number. Standard practice is to add a defined amount of synthetic RNA or plasmid to every vial before extraction; the recovered Cq then shows instantly whether the column, wash steps, or enzyme mix lost 2 % or 40 %. Given that the authors report variation spanning orders of magnitude, the absence of a spike‑in makes it impossible to tell whether the spread comes from the vaccine or from the workflow itself. A simple positive control would have flagged column saturation, pipetting loss, or RT inhibition and put hard bounds on true lot‑to‑lot variability.

The paper also omits routine negative controls - blank extractions and no‑template PCR wells - which would have exposed background DNA or primer‑dimer fluorescence. But the real show‑stopper is the missing spike‑in: without it, any large differences between vials could just be artefacts introduced during handling. Leaving out such a basic safeguard is highly unusual in quantitative molecular biology and undermines the credibility of the numerical claims.

qPCR cannot prove the plasmids are intact

Quantitative PCR amplifies short 100–150 bp stretches. It cannot tell whether those stretches were on a full 4 kb plasmid or on a broken fragment. Demonstrating intact plasmid requires DNase‑control digests, sizing gels, or sequencing - none of which the paper provides. 

Even the worst‑case number is within the legal limit

The highest DNA value reported (≈10⁹ copies mL⁻¹) converts to about 2 ng DNA per 0.5 mL dose. International guidelines for injectable biologics allow up to 10 ng residual double‑stranded DNA per dose. The regulatory limit applies to total residual DNA - intact or fragmented - so the measured 2 ng is still a comfortable five‑fold below that threshold.
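The conversion from copy number to mass is a short calculation. A sketch under the assumptions already stated in the text (a roughly 4 kb plasmid and the reported worst case of ~10⁹ copies per mL) reproduces the ~2 ng figure:

```python
AVOGADRO = 6.022e23            # molecules per mole
BP_MASS = 650                  # g/mol per base pair of double-stranded DNA (average)

plasmid_size_bp = 4_000        # ~4 kb plasmid, as discussed above
copies_per_ml = 1e9            # highest reported DNA value
dose_volume_ml = 0.5           # one vaccine dose

mass_per_copy_g = plasmid_size_bp * BP_MASS / AVOGADRO
copies_per_dose = copies_per_ml * dose_volume_ml
dna_per_dose_ng = copies_per_dose * mass_per_copy_g * 1e9   # grams -> nanograms

print(f"DNA per dose: ~{dna_per_dose_ng:.1f} ng (regulatory limit: 10 ng per dose)")
```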


Bottom line

The study combines missing chain‑of‑custody information, an extraction method likely beyond its capacity, an internal contradiction between 28 % and 1 000 % variability, and an unsupported claim about “intact plasmids.” Even the authors’ own worst‑case number falls well inside existing regulatory limits. Publishing such insufficiently‑vetted data in a venue with minimal peer‑review is irresponsible because it can be weaponised to erode public confidence, delay vaccination, and ultimately cost lives.

Zooming out, the broader evidence base shows that the very Moderna and Pfizer vaccines scrutinised here have been administered billions of times and remain among the most closely monitored medical products in history. Every commercial lot is tested for potency, purity, and residual DNA before release; pharmacovigilance systems track adverse events batch‑by‑batch. No pattern of DNA integration, mutagenesis, or vaccine‑linked cancer has been detected. On the contrary, rigorous clinical trials and multiple real‑world studies confirm high efficacy against severe COVID‑19 and a clear net benefit that has already saved millions of lives. Claims of hidden genetic danger therefore run counter not only to regulatory chemistry data but to the accumulated clinical outcome data as well.

Thursday, May 15, 2025

When Six Editors Directly Reject a Paper: Lessons from the Czech “Conception-Rate” Preprint

Last month, a preprint, partially authored by the Czech group SMIS, linked COVID-19 vaccination to decreased fertility in Czech women. Six journal editors read the submission and rejected it before peer review even began. SMIS and its followers interpreted this as censorship and an attempt to suppress inconvenient truths, but in reality, it reflected a rapid and justified response to fundamental scientific flaws. That speed isn't arrogance; it’s professional triage. Editors handle hundreds of manuscripts annually and are trained to spot basic methodological landmines instantly. Here are the ones they saw - landmines so elementary that every research trainee learns to avoid them in the first year of training.

First, no inferential statistics. The authors compare frequencies using barplots, vaccinated versus unvaccinated, but never test whether the difference could appear by chance. A picture is not a p-value. Without it, the "effect" may be pure noise. Imagine flipping a coin ten times and getting seven heads. Does that mean the coin is rigged? You might expect five heads and five tails, but small deviations are normal. To decide whether the result is meaningful or just random variation, we need to apply basic statistics. You can't just eyeball a graph and declare a discovery.
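The coin example can be settled with one line of basic statistics, which is exactly the kind of check missing from the preprint's bar plots:

```python
from scipy.stats import binomtest

# Seven heads in ten flips of a fair coin: is that surprising?
result = binomtest(k=7, n=10, p=0.5)
print(f"Two-sided p-value: {result.pvalue:.2f}")
# ~0.34: entirely consistent with a fair coin, so no "discovery" here
```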

Second, the ecological fallacy. Births were counted for thousands of women grouped into two giant buckets. Drawing conclusions about individual biology from bucket averages is a classic error that can even reverse the true direction of an association. A common example is the claim that countries with higher chocolate consumption win more Nobel prizes. While true at the population level, it says nothing about whether eating chocolate makes any individual more likely to win an award. Similarly, linking average birth rates to vaccination status without analyzing individuals leads to misleading inferences.

Third, exposure is guessed, not measured. The team "estimated" who was vaccinated before conception by subtracting doses given during pregnancy. Any misclassification here ripples straight into the result.

Now, these three problems do not automatically mean that the result of the study is wrong. Maybe there really is a statistically significant difference in birth rates between vaccinated and unvaccinated women. Maybe the misclassifications are minimal and don’t distort the picture. Perhaps the population is so homogeneous that ecological fallacies and confounding don’t change the conclusion much. But these are all maybes. And in scientific publishing, these kinds of flaws are enough to trigger rejection before any of that can be tested.

And we haven’t even reached the most significant problem yet: causality.

The analysis does not adjust for several plausible confounders. Maternal age, socio-economic status (income and education), contraceptive use and whether the pregnancy was planned are all associated with both COVID-19 vaccination uptake and fertility outcomes. Such variables are classic confounders, third factors that can distort the apparent exposure-outcome relationship. Unless they are measured and controlled, any association we observe cannot be interpreted causally. Omitting them does not merely affect statistical precision; it introduces systematic bias that can make correlation masquerade as causation. The authors acknowledge the limitation of not having individual-level data, but acknowledgement alone cannot neutralise the bias. Without additional data, sensitivity analyses, or design features that break the confounding link, causal language exceeds what the evidence can support. 

Finally, strong causal language without a causal bridge. Speculation is fine; stating it as fact is not. Yet the language of the paper repeatedly leans toward one-sided conclusions, implying causal links that the data cannot support. For example, the authors mention in vitro studies where spike protein exposure may have affected ovarian cells. These findings, while interesting, are from highly artificial lab settings and have no direct bearing on birth rates in national populations. Suggesting otherwise is a leap that no trained scientist should make without substantial bridging evidence.

With those five problems lined up, editors did not need referees to conclude that the study’s conclusions outran its data.


What Robust Studies Find Instead

Below are four example investigations that follow the epidemiology rulebook. These studies either use individual-level data or rely on rigorous meta-analytic synthesis, adjust for confounders, and report statistical uncertainty. This is important because the authors behind the Czech preprint, especially on their social media platforms, have repeatedly claimed that no one is seriously examining the apparent drop in birth rates. That is simply not true. These studies do look into the issue, and they do so using methods that avoid the major pitfalls described above. Ignoring such robust evidence while insisting the topic is being neglected is misleading at best. Sound science means engaging with all the data, not only the fragments that fit a chosen narrative.

  • North-American couples cohort (Wesselink et al., 2022): Researchers followed 2,126 couples actively trying to conceive, logging vaccination status prospectively and analyzing time-to-pregnancy cycle by cycle. Multivariable models showed no difference in fecundability for vaccinated women or men; if anything, recent SARS-CoV-2 infection, not vaccination, briefly reduced male fertility.

  • Norwegian miscarriage registry (Magnus et al., 2021): Using national linked health records, investigators compared more than 18,000 first-trimester miscarriages with ongoing pregnancies. After adjusting for age and calendar time, vaccinated women were not at higher risk of miscarriage (odds ratio ~0.9). Large data, rigorous linkage, clear result.

  • Global meta-analysis of 40 studies (Fernández-García et al., 2024): This systematic review pooled >150,000 pregnancies. Vaccination reduced severe maternal COVID-19, had no adverse signal for conception, miscarriage or stillbirth, and slightly improved some neonatal outcomes. When dozens of datasets point the same way, the weight of evidence is hard to ignore.

  • Assisted-reproduction meta-analysis (Chamani et al., 2024): For people undergoing IVF, an ideal setting to scrutinize eggs, embryos, and implantation, researchers combined data from eleven studies. Ovarian response, embryo quality, and clinical pregnancy rates were identical in vaccinated and unvaccinated patients. That is about as close to a controlled fertility stress-test as one can get. 


Take-Home Message

Flashy graphs on social media are not proof of vaccine-related reproductive harm, such as reduced fertility, miscarriage, or disrupted menstrual cycles, especially when the analysis skips the first chapters of every epidemiology handbook. Yet this is exactly the kind of material that can mislead the general public. When presented with confident charts and scientific-sounding language, even educated laypeople can be fooled by those who are either reckless or intentionally deceptive.

In reality, when scientists collect individual-level data, measure exposure accurately, and adjust for obvious confounders, the alarming fertility signal vanishes. COVID-19 vaccines remain a safe, effective way to protect adults, including those planning a family, from the real risks of the virus itself. 

Monday, May 12, 2025

Click, Like, Subscribe and Share My Research! Now!

On platforms like Instagram and TikTok, what matters is not who you really are, but how well you present a version of yourself that others will like and share. People build online identities that are more polished than their real lives. These images and impressions become more important than the person behind them. We don’t just live with this kind of make-believe; we live inside it.

This idea is not new. Long before social media, the French philosopher Jean Baudrillard described something similar. He wrote about a world of “hyperreality,” where signs, images, and narratives no longer connect to real life. In his theory of simulation, Baudrillard explained how we move from showing reality, to distorting it, to eventually replacing it with polished versions that become more powerful than truth itself. He focused on media and advertising, but his thinking fits our current digital world perfectly.

Many believe science is immune to these trends. But increasingly, we treat science like something that must always be promoted, especially on LinkedIn. Every project or consortium gets its own sleek website. Every minor update is shared with upbeat hashtags and glowing language. Researchers are encouraged to be not only scientists, but also ambassadors and thought leaders, shaping how others see them. And yes, the author of this piece is also guilty of playing that game.

As a result, science is slowly becoming more about how it looks. It’s not just about doing meaningful work, it’s about showing that you're doing it in the right way, with the right optics. Metrics like citation counts, impact factors, and h-indices have become signals of quality, even if they say little about actual content. We learn to write papers for reviewers rather than for readers. Grant proposals are shaped to match current trends, using the right buzzwords and promises of social impact, rather than asking the most important or challenging questions. Often, the hottest topics are selected and amplified by well-meaning but trend-driven bureaucrats in funding agencies or charities. The result is that we stop observing the world around us and start performing for the systems that judge us.

This also affects how we present our findings. Research papers are often written as if the process was smooth and the story clear, even if the actual work was messy and uncertain. Over time, scientists figure out what editors and reviewers want, and it’s rarely slow, careful, incremental work. Instead, we aim for a clean narrative that fits expectations. When these stories reach the media, they’re often turned into dramatic headlines and inflated promises. Science looks less like a method for discovering truth and more like a polished product designed to impress.

If this continues, science could lose both its internal compass and the public's trust. Outside audiences may grow tired of the hype and constant contradictions. But inside the system, the damage could be worse. When everyone chases attention and funding, the kind of patient, risky, foundational work that drives real progress gets pushed aside. If we can’t tell the difference between looking successful and being useful, we risk losing what science is meant to be.

Baudrillard wasn’t writing about science, but his warning applies here too. The more we focus on appearances, the less we see what really matters. If we want science to stay honest and valuable, we need to push back against the pressure to constantly perform. We should reward truth-seeking, not storytelling. We should make space for slow, uncertain, and unglamorous work. If we don’t, we may end up trapped in a mirror maze where everything looks convincing, but nothing leads us forward. 

Tuesday, April 15, 2025

How to Be a Good Reviewer (Without Being a Jerk)

Peer review is one of the pillars of science, but let’s be honest, most of us have read reviews that range from clueless to petty to outright destructive. From sloppy reviews that give no real criticism, to overzealous brainstorming sessions listing every possible experiment under the sun, the spectrum of bad reviewing is wide. If you're going to spend your precious time reviewing a paper, here’s how to do it right and efficiently.

Focus on the Main Message
Yes, it's that simple. This is your main job. Avoid nitpicking. That means not giving suggestions that take a lot of work and make the paper 1% better, not obsessing over whether references are in perfect order, whether your own paper was cited, or whether the commas are in the right place. This is not the time to do formatting QA. Focus on what really matters. Avoid the “I prefer method A, this is method B, therefore it must be wrong” mentality. Reviewing is not about your personal preferences; it's about evaluating whether a study adds something meaningful to the field. Everything else is secondary.

So what should you focus on? Ask yourself just two questions:

  1. Is the main finding sufficiently supported by the data?

  2. Is the main finding important or conceptual enough to be interesting to the journal’s readership?

If the answer to both is yes, great - now it’s worth going into more detail. Dig into the methodology, stats, clarity, and figures. Be constructive. Suggest improvements, not punishments. And please, for the love of science (and to preserve the sanity of the poor PhD candidate who wrote the paper), do not ask for extra experiments just because you can. Only ask for what’s truly necessary to support the main point.

If the answer to either question is no, then say so. Clearly. Concisely. Respectfully. There’s no need to drown the authors in detailed technical feedback if the conceptual foundation isn’t there. Just point out the core issues and let it go.

That’s it. Reviewing doesn’t have to be painful for you or the authors. Review like you’d want to be reviewed. Simple, fair, and just enough.

Friday, April 4, 2025

Preprints: We Love Them! Or Do We?

Preprints are great open science tools that boost reach, speed up publishing, and promote transparency. Publish fast, celebrate open science, and achieve world peace! Right? Or is it more like...making a move just a bit too early, where things come out before they’re fully ready? It’s all fun and games until you realize you’ve released something half-baked into the wild.

Don’t get me wrong. Preprints can be amazing. They solve real issues when you need to get something out quickly, whether it's to establish priority, share critical findings fast, or just dodge the black hole of traditional peer review timelines. There are many practical reasons to embrace preprints.

But here’s what most open science preachers tend to ignore: Preprints also have their dark side. And if you’re not careful, you might end up regretting your enthusiasm for ‘getting it out there’ too soon.

Multiple Versions of Your Paper Will Circulate the Internet Forever

You might think your preprint is the final version, but the reality is that what you put out there is often just a snapshot of a work in progress. Reviewers, editors, and even new co-authors can change your mind. Suddenly, what was once significant becomes non-significant after factoring in new confounders. Whole sections get removed or added, shifting the entire narrative.

It creates a mess. There's a reason we have peer review. Sometimes, an outside perspective, someone who hasn't been fully invested in your storyline, makes a relevant point that changes the story.

Think about your own published papers. How many of those would you be comfortable with having their first version permanently floating around in cyberspace? How often did the final published version differ significantly from that initial submission? 

Scooping: It's Not Just Paranoia

Yes, formally, you can't be scooped; it's out there, right? But the reality is more complicated. Your competitor might suddenly know they have to speed up their own paper the moment they see your preprint. Just think about how many times you've changed your own strategy or rushed to publish something because you spotted a similar preprint floating around. The same thing can happen to you.

When someone sees your idea publicly available, they can use it as a roadmap. Maybe they have better funding, a bigger team, or just more time to push a similar story faster. Or worse, they already have raw data with more samples, more subjects, or fancier techniques. They just hadn’t realized the angle to take with their analysis until your preprint handed them the roadmap. And some of those big, well-funded groups can move frighteningly fast once they know the direction. They know exactly what you’re working on, and they might decide to pivot, refine, or outright hijack your concept. 

Citation Chaos

So, your preprint lives out there for a year before the formal publication finally drops. What do you think people keep citing during that time or even after the main story is officially out in a peer-reviewed journal? The preprint!

Yes, officially, everyone should cite the peer-reviewed version once it's published. But let’s be honest: do you always update your citation manager when citing others' work? How often do you accidentally keep the preprint version because it was the first one you saved? And how many readers, in a hurry, just grab the first link they find without checking for a polished, published version?

This creates citation chaos. Citations get split between the preprint and the published paper, diluting the impact of both. It’s not just about your h-index - although, yes, it does matter. It’s about making sure your work is read, understood, and cited in its best form.

Conclusion: Preprints Can Be Fine, Just Not Always

Preprints can be powerful tools for sharing research quickly and broadly. They can help establish priority, gather feedback, and enhance accessibility. And yes, I have many papers as preprints. Most of them went fine. But those one or two... I could have waited.

But here’s the thing: preprints are not automatically the best option for every story. Consider the benefits, but also weigh the risks. Think about how realistic these pitfalls are for your particular case.

If you have a solid reason to preprint, great, go for it. Just don’t make it your default choice for every single piece of work. Often, the lazy way is the smarter way.
