Philippe Laval

March 17, 2025

AI Disruption: When Scientific Peer Review and Taxi Apps Converge

It’s far from a secret that artificial intelligence is reshaping traditional processes. But what happens when AI sets its sights on one of the bedrocks of academic rigor—peer review? The evolution underway in scientific publishing draws striking parallels with an earlier moment in tech history: when ride-hailing apps like Uber upended the taxi world.
Why compare science publishing to hailing a cab? Because, in both cases, we see fresh business models (and technological breakthroughs) jostling for space against entrenched players. One system relies on a slow, methodical approach (peer review by established scholars); the other relied on taxi medallions, licensing boards, and unionized drivers. Just as Uber’s approach redefined how we move around the city, AI could reconfigure the pipeline through which academic knowledge is vetted and disseminated.
Yet is this “AI moment” truly poised to spark disruption on the same scale as Uber’s? If you recall from “House of the Dragon,” power vacuums are never simply filled—they’re seized amid resistance, drama, and the occasional dragon’s roar. AI’s entry into scientific review might be an equally dramatic shift, but is it far enough along to earn itself a seat on the Iron Throne of academic credibility? Let’s delve into the clash between established processes and disruptive AI technologies, exploring what it means for scientists, investors, and curious onlookers alike.

The Rise of an AI-Driven Review Process
Recently, TechCrunch reported on a startup called Sakana claiming that its AI-generated paper passed peer review. According to the article, “Sakana said its AI generated the first peer-reviewed scientific publication. But while the claim isn’t untrue, there are caveats to note.” While it sounds momentous at first reading, the nuance is that the “reviewers” weren’t exactly following the same procedures we think of with, say, Nature or Science. The paper did pass a review—just not the kind that would match the rigor demanded by top journals.
Controversies like these foreshadow a coming debate: how will the scientific community adapt its current, sometimes painfully slow peer review process to accommodate AI-assisted work—or even AI-written work? Just as ride-hailing apps didn’t need the blessings of every taxi medallion-holder before launching, AI entrants may attempt to bypass longstanding journal gatekeepers. But can they truly produce rigorous content that stands up to scrutiny?

Why Peer Review Matters (and Why It’s Under Pressure)
Peer review, in essence, is designed to validate research and filter out errors, manipulations, or half-formed ideas. Academics spend weeks, if not months, meticulously cross-examining data, fleshing out context, and ensuring conclusions hold water. But the process is slow, reliant on volunteers, and prone to its own biases. AI promises to address these pain points by accelerating mundane checks—like verifying citations or confirming that references truly match the article’s content.
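To make one of those mundane checks concrete, here is a minimal, purely illustrative Python sketch that matches in-text citations against a reference list. The function name, the “(Author, Year)” citation format, and the reference layout are all assumptions invented for this example, not a description of any real tool.

```python
import re

def check_citations(manuscript_text: str, reference_list: list[str]) -> dict:
    """Flag in-text citations with no matching entry in the reference list,
    and reference entries never cited in the text.

    Toy example: assumes simple "(Author, Year)" citations and reference
    lines that start with "Author (Year)".
    """
    # Collect (author, year) pairs cited in the body of the manuscript.
    cited = set(re.findall(r"\(([A-Z][a-z]+), (\d{4})\)", manuscript_text))

    # Collect (author, year) pairs that head each reference entry.
    listed = set()
    for ref in reference_list:
        m = re.match(r"([A-Z][a-z]+) \((\d{4})\)", ref)
        if m:
            listed.add((m.group(1), m.group(2)))

    return {
        "missing_references": sorted(cited - listed),   # cited but not listed
        "uncited_references": sorted(listed - cited),   # listed but never cited
    }
```

A real system would of course handle many citation styles, fuzzy author matching, and DOI lookups; the point is only that this class of check is mechanical and fast, which is exactly why it is a natural first target for automation.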
In a best-case scenario, generative AI might free up reviewers to focus on big-picture logic, while the machine handles more mechanical tasks. The question emerges: could an AI also be the source of those big-picture arguments, effectively “writing” new knowledge? If so, how do we verify authenticity and depth?

The Uber Analogy: Software Ate the Taxi Industry…
If we rewind a decade or so, Uber introduced the now-familiar concept of “push a button, get a ride.” It was a radical departure from the local dispatch systems controlling cabs. Suddenly, taxi medallions in New York or regulated black cabs in London were no longer the only game in town. The disruption had profound reverberations: protests erupted, regulatory battles simmered, and new consumer expectations emerged. Above all, an entire segment of the transportation market changed almost overnight.
As tech analyst Benedict Evans writes in his post “What kind of disruption?,” “Software ate the world. Uber and Airbnb didn’t sell software—they disrupted and redefined markets.” Like a newly awakened dragon, the ride-hailing platform soared over many of the constraints that had pinned traditional taxi companies to one place.

… Can AI Eat Scientific Publishing?
The real question is whether an AI-based alternative to peer review is prepared to do the same. AI stands at the frontier of data synthesis, pattern recognition, and text generation. On the surface, it offers the potential to handle a massive volume of manuscripts—teasing out questionable data or sniffing out duplicative texts in record time. This is especially tempting in fields like biotechnology or physics, where the volume of weekly submissions can overwhelm conventional editorial resources.
Consider this hypothetical: instead of waiting four to six months (or more) for feedback, an AI-based reviewing engine could parse your article in hours, compare your dataset with millions of others, highlight anomalies, and measure the novelty of your findings. It could also present a synthesized background on your topic—something akin to what generative AI tools already do with marketing copy. If you’re pressed for time, that’s a pretty compelling pitch.
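As a toy illustration of the anomaly-highlighting step in that hypothetical engine, here is a hedged sketch that screens a submitted set of values against a reference corpus using a simple z-score test. A real engine would rely on far richer statistical models; the function name and threshold here are invented for the example.

```python
from statistics import mean, stdev

def flag_anomalous_results(submitted: list[float], corpus: list[float],
                           z_threshold: float = 3.0) -> list[tuple[int, float]]:
    """Flag submitted values that deviate strongly from a reference corpus.

    Returns (index, z_score) pairs for every value whose z-score against
    the corpus mean/stdev exceeds the threshold in absolute value.
    """
    mu, sigma = mean(corpus), stdev(corpus)
    flags = []
    for i, x in enumerate(submitted):
        z = (x - mu) / sigma
        if abs(z) >= z_threshold:
            flags.append((i, round(z, 2)))
    return flags
```

The design choice matters: the engine does not reject the paper, it surfaces which reported values look unusual relative to prior literature, leaving the judgment call to a human.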
Still, the analogy to Uber hits a snag: AI’s review “market” isn’t a direct, unregulated consumer service. Scientists, universities, and journals have long-standing norms about what counts as valid research. Taxis were never revered for academic rigor, but journals are. So if the AI system claims, “Your stats do not match previously established findings,” it must ensure near-flawless reliability. A glaring error could tarnish not just the reputational currency of the journal but potentially misdirect an entire field of study.

Lessons Learned from the Uber Playbook

1. Overcoming Regulatory and Cultural Resistance
Uber found itself entangled with local taxi commissions and city councils worldwide. AI-driven peer review faces a parallel quest: to prove to the guardians of academic credibility (like top-tier journals, scholarly societies, and the National Science Foundation) that it won’t degrade the quality of published work.
In the ride-hailing world, the consumer’s direct experience was the real litmus test. If a user could get a safer, cheaper, more convenient trip, it was enough for them. In research, the “consumer” is effectively the broader scientific community, which demands trust and verifiability. As with House of the Dragon, where new claimants to the throne need enough allies to hold on to the seat, AI-based peer review needs to garner acceptance from established academics and institutions to become more than a side project.

2. Data Is King—or at Least the King's Regent
Part of what fueled Uber’s success was real-time data on everything from driver supply to rider demand. In scientific publishing, there’s a hunger for data that streamlines review decisions: plagiarism checks, reference verification, statistical validations, etc. AI can deliver these solutions—provided it’s fed high-quality, comprehensive datasets and is adept at fact-checking citations.
Yet, AI’s track record on delivering precise facts is still mixed. Recall the earlier reference from TechCrunch: “But while the claim isn’t untrue, there are caveats…” Each misstep in data verification can sow doubt. Much like Varys’s network of little birds, the more data AI has, and the more carefully it’s curated, the more potent it becomes. But unlike rumor networks in Game of Thrones, data verification must be near perfect.

3. Creating a User-Centric Experience
Uber thrived because people found it more comfortable to type a destination on their phone than wave at a street corner. Scientific peer review is not nearly as frictionless a market. Yes, speed matters, but quality is paramount. If AI can give researchers an interface that clarifies how it arrived at certain judgments—providing a breakdown of typical pitfalls, for instance—then its verdicts will stand up better to future scrutiny. Just as click-to-ride was the user experience game-changer, a system where authors can see why their paper was flagged for questionable methodology would cultivate acceptance.

Challenges Looming on the AI Horizon

1. Accuracy in a High-Stakes Arena
In marketing copy or a quick code snippet, an error is often not catastrophic. But a miscalculation in a published medical trial can lead to real-world consequences. As Benedict Evans puts it, “Software can disrupt and redefine,” but in fields with tight margins of error, “the bar for trust is significantly higher.”

2. Ethical and Authorship Concerns
At what point will an “AI paper” no longer need a human author? Already, tools like ChatGPT can draft coherent arguments. If the system also performs robust analyses, who is truly behind the publication? A seasoned scientist or a black box of neural networks? The potential commodification of research complicates the entire concept of authorship and intellectual credit.

3. Institutional Inertia
Scientific publishing is run by an interwoven network of journals, societies, and academic reputations. Breaking into that ecosystem might require the equivalent of a multi-year regulatory standoff, or a wave of younger researchers who embrace AI as normal. Much like how some cities initially banned (or severely limited) Uber, some top-tier journals might hesitate to let AI “grade” the manuscripts that come across their desks—at least until track records demonstrate near-zero error rates.

A Page from My Playbook: Integrating AI in Spirit
At Jolt Capital, we’ve built our own AI companion tool called Ninja, which continuously tracks and analyzes data on over 5 million companies. While it’s not exactly reviewing manuscripts, Ninja’s job is to streamline deal flow, accelerate due diligence, and offer competitive intelligence insights. We use it to comb through large datasets—some of them messy or incomplete—and surface patterns that a human analyst might miss on first pass.
The principle is the same: AI can be a powerful ally, but it remains a component of a broader decision-making structure. Human oversight is still essential. Like a good team of advisors in any self-respecting fantasy epic, AI’s role is to present thorough knowledge, highlight anomalies, and occasionally raise red flags. But it doesn’t sign the final parchment or declare war. That’s where an experienced manager or partner steps in. This synergy of machine intelligence plus human judgment is exactly how we incorporate new deals into our pipeline or vet an investment’s risk profile.

Are We in for a True “Uber Moment”?
Some experts believe that scientific publishing—rife with long delays, occasional editorial politics, and a burdensome volume of submissions—could be revolutionized by an AI system that cuts through it all. Others see incremental improvements: AI might bolster plagiarism checks or data validity but won’t dethrone the conventional peer review process.
Ultimately, it could unfold as a slow metamorphosis rather than an overnight revolution. Ride-hailing’s success hinged on direct consumer adoption; academic credentials and reputations create a more complex barrier to entry. In an interview published by MIT Technology Review, one senior editor said, “It’s not about refusing AI’s help—it’s about guaranteeing the provenance and reliability of results.”

For me, the biggest takeaway is this: Disruption in a sector known for caution and rigor will require more than just brilliant algorithms. It will hinge on trust, institutional buy-in, and proven reliability. Yes, you might see pockets of radical experimentation—preprint servers quickly embedding AI-based commentary or smaller journals testing AI reviewers. But the mainstream scientific community will likely adopt an “AI plus human” approach rather than an all-out redefinition of peer review.

In that sense, the road to scale for AI in scientific publishing may look much more like an incremental journey than a swift coup. As with Game of Thrones, even if you have a massive dragon on your side, it still takes humans to strategize, form alliances, and rebuild structures once the dust settles.


About Philippe Laval

As CTO and Managing Partner at Jolt Capital, I lead a team of talented developers and engineers in building Ninja, our AI-powered platform that dynamically tracks and analyzes a database of 5 million companies—enhancing dealflow, due diligence, and competitive intelligence.

A serial entrepreneur with a passion for big data search and AI-driven analytics, I previously founded and led:
Evercontact, an automated service that keeps address books and CRMs up to date (exited in 2016).
Sinequa, a Gartner and Forrester leader in enterprise search (exited in 2024).

I split my time between San Francisco and Paris—mostly Paris now—and in August, you’ll likely find me kite-surfing in Houat. I’m also the author of Winter is NOT Coming, a book on Game of Thrones and management lessons.