How Paperstars Solves the Problems Found on ResearchGate

In theory, academic platforms like ResearchGate should help researchers connect, share ideas, and evaluate each other's work thoughtfully. But a recent study — “Metrics Fraud on ResearchGate” by Kirilova and Zoepfl (2025) — highlights major problems with how metrics are handled on these platforms.

From inflated scores to a lack of quality control, it's clear the current system isn't working the way it should.

At Paperstars, we're building something better—deliberately designed to avoid the pitfalls exposed by studies like this one. Here's how:

🚩 Problem: Fake or Inflated Metrics

On ResearchGate, "Reads," "Recommendations," and Research Interest (RI) Scores can be easily manipulated through mutual boosting, bot accounts, or meaningless interactions.

How Paperstars is different:

✅ Every rating must be tied to a structured written review—not just a click.
✅ Academic verification adds accountability without sacrificing reviewer anonymity.
✅ Suspicious behaviour (like rapid boosting of specific accounts) is flagged for review.
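
What might such a flag look like under the hood? Here is a deliberately simplified sketch of one possible check; the event format, window size, and threshold are illustrative assumptions, not Paperstars' actual detection logic:

```python
from collections import defaultdict
from datetime import timedelta

def flag_rapid_boosting(events, window=timedelta(hours=24), threshold=10):
    """Flag accounts whose work receives an unusual burst of
    positive interactions inside a short time window.

    `events` is an iterable of (timestamp, boosted_account) tuples,
    one per recommendation or rating received. Flagged accounts are
    queued for human review, not penalised automatically.
    """
    by_account = defaultdict(list)
    for timestamp, account in events:
        by_account[account].append(timestamp)

    flagged = set()
    for account, times in by_account.items():
        times.sort()
        left = 0
        for right, t in enumerate(times):
            # Advance `left` until the span fits inside the window.
            while t - times[left] > window:
                left += 1
            if right - left + 1 >= threshold:
                flagged.add(account)
                break
    return flagged
```

A production system would combine many such signals, but the principle holds: bursts of attention trigger scrutiny instead of silently inflating a score.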

🚩 Problem: No Quality Control on Recommendations

ResearchGate allows anyone to recommend any paper, often without evidence of careful reading or expertise. The result? Inflated scores with little real meaning.

How Paperstars is different:

✅ Reviews follow structured prompts (e.g., clarity of methods, strength of conclusions); a sketch of such a review record follows this list.
✅ No popularity contests: “likes” and superficial endorsements don’t affect ratings.
✅ Reviews are moderated to ensure relevance and fairness.
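
To make "structured" concrete, a review record might look something like the sketch below. The field names and validation rule are hypothetical, chosen only to show the core idea: a rating cannot exist without written, rubric-based feedback.

```python
from dataclasses import dataclass

@dataclass
class StructuredReview:
    paper_id: str
    reviewer_id: str          # verified academic account, shown anonymously
    methods_clarity: int      # 1-5, answers a structured prompt
    conclusion_strength: int  # 1-5, answers a structured prompt
    written_comments: str     # required justification for the scores

    def is_valid(self) -> bool:
        """A rating only counts when the rubric scores are in range
        and the reviewer has actually written something."""
        scores_ok = all(
            1 <= s <= 5
            for s in (self.methods_clarity, self.conclusion_strength)
        )
        return scores_ok and len(self.written_comments.strip()) > 0
```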

🚩 Problem: Gaming Metrics Through Mutual Boosting

Researchers have formed private groups on ResearchGate just to mass-recommend each other's work and inflate visibility.

How Paperstars is different:

✅ Journal Club groups exist—but focus on collaborative review and transparency, not boosting.
✅ Patterns of reciprocal reviewing are monitored and flagged to protect fairness.
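
One plausible way to surface reciprocal reviewing, sketched here with hypothetical names and no claim to being Paperstars' real pipeline, is to treat reviews as edges in a graph and look for two-way edges:

```python
def find_reciprocal_pairs(review_edges):
    """Return unordered pairs of accounts that have reviewed each
    other's papers.

    `review_edges` is an iterable of (reviewer, author) tuples.
    Reciprocity alone is not proof of collusion (small fields review
    each other constantly), so matches are flagged for moderators
    rather than punished automatically.
    """
    edges = set(review_edges)
    return {
        tuple(sorted(pair))
        for pair in edges
        if (pair[1], pair[0]) in edges and pair[0] != pair[1]
    }

# Example: A and B reviewed each other; C only reviewed A.
print(find_reciprocal_pairs([("A", "B"), ("B", "A"), ("C", "A")]))
# -> {('A', 'B')}
```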

🚩 Problem: Citation Counts and Interest Scores Prioritise Visibility Over Quality

On ResearchGate, highly cited or widely recommended papers aren't always the most rigorous. Citation counts and reads can outshine actual research merit.

How Paperstars is different:

✅ Papers are evaluated based on research quality, not visibility.
✅ Ratings reflect real engagement with methods, clarity, reproducibility, and openness.

🚩 Problem: Preprints and Peer-Reviewed Papers Treated the Same

On ResearchGate, preprints and formally peer-reviewed articles are mixed together without clear distinction—causing confusion about the status of research.

How Paperstars is different:

✅ Papers are clearly labelled as peer-reviewed, preprint, corrected, or retracted (see the sketch after this list).
✅ Users know exactly what stage of publication they're engaging with.
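
A minimal sketch of how explicit status labels could be modelled; the enum values mirror the labels above, while everything else is an assumption for illustration:

```python
from enum import Enum

class PublicationStatus(Enum):
    """Every paper carries exactly one explicit status label."""
    PEER_REVIEWED = "peer-reviewed"
    PREPRINT = "preprint"
    CORRECTED = "corrected"
    RETRACTED = "retracted"

def status_badge(status: PublicationStatus) -> str:
    """Render the label displayed next to a paper's title."""
    return f"[{status.value.upper()}]"

print(status_badge(PublicationStatus.PREPRINT))  # -> [PREPRINT]
```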

🚩 Problem: Lack of Transparency

ResearchGate’s metric calculations are opaque, and its rule enforcement is inconsistent.

How Paperstars is different:

✅ Our rating system is openly explained; an illustrative version of the maths follows this list.
✅ Moderation policies are published and transparent.
✅ Abuse and misconduct are taken seriously—with clear escalation pathways.
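
"Openly explained" could mean publishing the exact aggregation rule, so anyone can recompute a paper's rating from its visible reviews. A purely illustrative version, not the actual Paperstars formula:

```python
def paper_rating(reviews):
    """Average the rubric scores of all valid reviews for a paper.

    `reviews` is a list of dicts like
    {"scores": [4, 5], "has_written_comments": True}.
    Because the rule is published, there is no hidden weighting:
    the rating is exactly what the visible reviews say it is.
    """
    valid = [r for r in reviews if r["has_written_comments"] and r["scores"]]
    if not valid:
        return None  # unrated until at least one structured review exists
    per_review = [sum(r["scores"]) / len(r["scores"]) for r in valid]
    return round(sum(per_review) / len(per_review), 2)

print(paper_rating([
    {"scores": [4, 5], "has_written_comments": True},
    {"scores": [3, 4], "has_written_comments": True},
    {"scores": [5, 5], "has_written_comments": False},  # ignored: no text
]))  # -> 4.0
```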

🌟 Building a Better Evaluation System

Metrics should support good research—not distort it.

At Paperstars, we believe that:

  • Rigour matters more than popularity.
  • Open feedback beats hidden metrics.
  • Evaluation should be a conversation, not a contest.

We’re building a platform where quality, transparency, and trust come first—and we’re doing it with the research community, not on top of it.

📬 Want to be part of the change?

👉 Sign up for updates below
👉 Follow us on Bluesky
👉 Tell us what matters to you

Because science deserves better than clicks and counters.

It deserves stars—with meaning.