Showing posts with label Guide. Show all posts

Tuesday, April 15, 2025

How to Be a Good Reviewer (Without Being a Jerk)

Peer review is one of the pillars of science, but let's be honest: most of us have read reviews that range from clueless to petty to outright destructive. From sloppy reviews that give no real criticism, to overzealous brainstorming sessions listing every possible experiment under the sun, the spectrum of bad reviewing is wide. If you're going to spend your precious time reviewing a paper, here's how to do it right and efficiently.

Focus on the Main Message
Yes, it's that simple. This is your main job. Avoid nitpicking. That means not giving suggestions that take a lot of work and make the paper 1% better, not obsessing over whether references are in perfect order, whether your own paper was cited, or whether the commas are in the right place. This is not the time to do formatting QA. Focus on what really matters. Avoid the “I prefer method A, this is method B, therefore it must be wrong” mentality. Reviewing is not about your personal preferences; it's about evaluating whether a study adds something meaningful to the field. Everything else is secondary.

So what should you focus on? Ask yourself just two questions:

  1. Is the main finding sufficiently supported by the data?

  2. Is the main finding important or conceptual enough to be interesting to the journal’s readership?

If the answer to both is yes, great: now it's worth going into more detail. Dig into the methodology, stats, clarity, and figures. Be constructive. Suggest improvements, not punishments. And please, for the love of science (and to preserve the sanity of the poor PhD candidate who wrote the paper), do not ask for extra experiments just because you can. Only ask for what's truly necessary to support the main point.

If the answer to either question is no, then say so. Clearly. Concisely. Respectfully. There’s no need to drown the authors in detailed technical feedback if the conceptual foundation isn’t there. Just point out the core issues and let it go.

That’s it. Reviewing doesn’t have to be painful for you or the authors. Review like you’d want to be reviewed. Simple, fair, and just enough.

Friday, April 4, 2025

Preprints: We Love Them! Or Do We?

Preprints are great open science tools that boost reach, speed up publishing, and promote transparency. Publish fast, celebrate open science, and achieve world peace! Right? Or is it more like... making a move just a bit too early, where things come out before they're fully ready? It's all fun and games until you realize you've released something half-baked into the wild.

Don’t get me wrong. Preprints can be amazing. They solve real issues when you need to get something out quickly, whether it's to establish priority, share critical findings fast, or just dodge the black hole of traditional peer review timelines. There are many practical reasons to embrace preprints.

But here’s what most open science preachers tend to ignore: Preprints also have their dark side. And if you’re not careful, you might end up regretting your enthusiasm for ‘getting it out there’ too soon.

Multiple Versions of Your Paper Will Circulate the Internet Forever

You might think your preprint is the final version, but the reality is that what you put out there is often just a snapshot of a work in progress. Reviewers, editors, and even new co-authors can change your mind. Suddenly, what was once significant becomes non-significant after factoring in new confounders. Whole sections get removed or added, shifting the entire narrative.

It creates a mess. There's a reason we have peer review. Sometimes, an outside perspective, someone who hasn't been fully invested in your storyline, makes a relevant point that changes the story.

Think about your own published papers. How many of those would you be comfortable with having their first version permanently floating around in cyberspace? How often did the final published version differ significantly from that initial submission? 

Scooping: It's Not Just Paranoia

Yes, formally, you can't be scooped once it's out there, right? But the reality is more complicated. Your competitor might suddenly know they have to speed up their own paper the moment they see your preprint. Just think about how many times you've changed your own strategy or rushed to publish something because you spotted a similar preprint floating around. The same thing can happen to you.

When someone sees your idea publicly available, they can use it as a roadmap. Maybe they have better funding, a bigger team, or just more time to push a similar story faster. Or worse, they already have raw data with more samples, more subjects, or fancier techniques. They just hadn’t realized the angle to take with their analysis until your preprint handed them the roadmap. And some of those big, well-funded groups can move frighteningly fast once they know the direction. They know exactly what you’re working on, and they might decide to pivot, refine, or outright hijack your concept. 

Citation Chaos

So, your preprint lives out there for a year before the formal publication finally drops. What do you think people keep citing during that time or even after the main story is officially out in a peer-reviewed journal? The preprint!

Yes, officially, everyone should cite the peer-reviewed version once it's published. But let's be honest: do you always update your citation manager when citing others' work? How often do you accidentally keep the preprint version because it was the first one you saved? And how many readers, in a hurry, just grab the first link they find without checking for a polished, published version?

This creates citation chaos. Citations get split between the preprint and the published paper, diluting the impact of both. It's not just about your h-index (although, yes, it does matter). It's about making sure your work is read, understood, and cited in its best form.

Conclusion: Preprints Can Be Fine, Just Not Always

Preprints can be powerful tools for sharing research quickly and broadly. They can help establish priority, gather feedback, and enhance accessibility. And yes, I have many papers as preprints. Most of them went fine. But those one or two... I could have waited.

But here’s the thing: preprints are not automatically the best option for every story. Consider the benefits, but also weigh the risks. Think about how realistic these pitfalls are for your particular case.

If you have a solid reason to preprint, great, go for it. Just don’t make it your default choice for every single piece of work. Often, the lazy way is the smarter way.

Thursday, April 3, 2025

Two Simple Rules to Avoid Drowning in Bad Science

Let’s be honest: Scientific literature is a cornerstone of our work, but it's becoming increasingly tricky to navigate. Non-reproducible studies, paper mills, predatory journals, and dubious peer reviews are making it harder to separate genuine data from bad science.

I’ve been in this business for about 20 years, and I’m tired of wasting my time. This isn’t just preaching; this is a strategy I actually use. It works pretty well in the biomedical field (and I suspect it’ll work in any other research-intensive field).

Stick to High-Impact Journals. We all know that the impact factor isn't perfect, and ideally, we shouldn't obsess over it. Blah blah... But as a practical scientific reality, a paper published in Nature is far more likely to contain reproducible findings and undergo rigorous review compared to something published in the "World Journal of Whatever."

Favor Society-Endorsed Journals. Journals published or endorsed by respected societies usually have rigorous editorial standards, and they tend to attract more experienced reviewers. Stick to journals affiliated with established academic societies and credible editorial boards. This is especially useful for specialized stories that high-impact journals might ignore despite solid data. And yes, even if many journals from publishers starting with M, F, or B seem like trashcans, some are genuinely maintained by real scientists committed to quality science.

And the third rule is... No, wait, actually, this is it. Cheesy, I know, but honestly, 90% of the useful literature can be filtered effectively using just these two rules. As a quick, practical guide to avoid drowning in nonsense, this works surprisingly well.

So, What's Your Strategy?
I’d love to hear how others cut through the noise. Feel free to share your own filtering methods or rip apart mine.
