An excerpt from my 2022 MozCon talk: Myths, Misconceptions & Mistakes (lessons learned from a decade in Digital PR)

Over the years, on various conference stages I’ve shared the stories of some of the most successful digital PR pieces I’ve been involved with. However, what I failed to tell people was that these pieces were outliers – most of my work was not so successful.

This happens an awful lot in our industry, and as a result people have a very skewed view of how digital PR pieces typically perform. This bothered me so much, I wrote this post in 2019.

A couple of years on, sadly, I don’t feel like much has changed.

Over the past 3 years, I’ve worked with lots of digital PR folks across a number of agencies and inhouse teams. This is still the question I’m most frequently asked:

My last piece generated no linked coverage at all…

Am I terrible at this?

My answer to this question is always:

Unequivocally no.

You are not terrible at this – this is something that everyone experiences, and it happens far more often than you think.

The trouble was, I only had data from the digital PR pieces I’d personally been involved with; and whilst I felt the data I had was both robust and likely to be representative, I was keen to gather more.

As such, part of my plan for my talk at MozCon was to collate and share aggregated, anonymised data from a range of agencies and inhouse teams to help give people a more realistic benchmark of how well (or otherwise) digital PR pieces typically perform.

Notes on the data & warnings about comparisons

The data I’m sharing here is from over 2,000 digital PR pieces which were created by 11 agencies and inhouse teams.

You’ll notice there are actually two datasets: one for asset-led digital PR pieces (i.e. pieces where there’s something live on the client’s site that the journalist could link to – anything from a blog post to a fully interactive piece); and one for digital PR pieces without assets (i.e. where just a press release was created, and there’s nothing live on the client’s site which relates to the piece). I did this because I suspected that asset-led pieces and pieces without assets would perform very differently, and I wanted to ensure the data was as useful as possible for people.

These datasets are reasonably robust; however, you should note that the results of your own work may or may not compare favourably to these results. There are likely to be good reasons for this, and so if you find your own results look wildly different to these, I suspect it may be down to one or more of the following factors:

  • The number of digital PR pieces you’ve launched…
    • For example, if you’ve launched only a relatively small number of digital PR pieces, the performance of just one or two of those pieces could heavily skew your own percentage breakdown.
  • The language or niche you’re working in…
    • Whilst this dataset does include the results of digital PR pieces in various languages, the vast majority are pieces that were created in English. As such, if you create digital PR pieces in Finnish, your results may look quite different: there are far fewer Finnish-language media outlets, and so the total possible number of pieces of linked coverage you’re able to secure will be lower, just because of the market you’re working in. In a similar vein, if you’re creating digital PR pieces with the aim of securing coverage on a reasonably small number of niche media outlets, your results might look quite different because the number of media outlets likely to cover your piece will be smaller.
  • The resources and/or budget you have available to devote to this activity…
    • Whilst it’s definitely not the case that more resource-heavy (or expensive) digital PR pieces generate more pieces of linked coverage, I suspect that in some instances, having relevant expertise at your disposal can make a difference. Also, I feel like resources/budgets are perhaps indicative of how seriously the business takes its digital PR efforts. For example, if all digital PR activity is handled by one human, who is also responsible for a multitude of other tasks, it wouldn’t surprise me to hear that they struggled to balance their workload, and that this impacted their ability to generate linked coverage.
  • The speed at which you execute…
    • This one’s a bit nuanced, and honestly, I’m not 100% sure about the extent to which it might make a difference. However, it’s conceivable that if you execute a very small number of resource-heavy digital PR pieces each year, your results may compare favourably to these benchmarks. Conversely, if your strategy is to execute at speed, and push out a high number of digital PR pieces each year, your results may compare less favourably.
  • Luck…
    • I talked about how we often gloss over, downplay, or even outright ignore the role of luck in the success of digital PR pieces in part one of my MozCon talk (the full deck is embedded towards the end of this post). Truth is, if you got lucky with a few pieces, (or if you didn’t get lucky!) that could skew how your results stack up too.

Please be mindful of how you interpret and use this data

I do not want this data to be used as yet another stick to beat digital PR people with.

My intention was simply to provide more realistic benchmarks for the performance of Digital PR pieces. As I highlighted above, your own results may or may not compare favourably to these benchmarks, and if that’s the case, there’s likely to be good reasons for that.

Most importantly:

I don’t think anyone’s goal should be to have their results compare favourably to these benchmarks.

What’s important is the quantity of quality linked coverage your activity generates overall, and the real-world business benefits you see as a result. Generating those results is the important bit; out-performing these benchmarks is neither here nor there.

I say this because you could out-perform these benchmarks, but still fail to deliver any real-world business benefits at all.

Benchmarks for asset-led digital PR pieces

Asset-led digital PR pieces are those where there is something live on the client’s site that a journalist could link to. The asset itself may be anything from a simple blog post to a fully interactive page.

The data below is from 1,398 asset-led digital PR pieces:

Number of pieces of Linked Coverage | % of Digital PR Pieces
0 pieces of linked coverage | 5%
1-9 pieces of linked coverage | 35%
10-29 pieces of linked coverage | 31%
30-49 pieces of linked coverage | 11%
50-99 pieces of linked coverage | 10%
100+ pieces of linked coverage | 8%

Benchmarks for digital PR pieces without assets

For these types of pieces only a press release is created; there is no asset live on the client’s site for the journalist to link to.

The data below is from 730 digital PR pieces without assets:

Number of pieces of Linked Coverage | % of Digital PR Pieces
0 pieces of linked coverage | 31%
1-9 pieces of linked coverage | 49%
10-29 pieces of linked coverage | 13%
30-49 pieces of linked coverage | 5%
50-99 pieces of linked coverage | 1%
100+ pieces of linked coverage | 1%

Here’s the comparison I don’t think we should be making but inevitably will:

Number of pieces of Linked Coverage | % of Asset-Led Digital PR Pieces | % of Digital PR Pieces without Assets
0 pieces of linked coverage | 5% | 31%
1-9 pieces of linked coverage | 35% | 49%
10-29 pieces of linked coverage | 31% | 13%
30-49 pieces of linked coverage | 11% | 5%
50-99 pieces of linked coverage | 10% | 1%
100+ pieces of linked coverage | 8% | 1%

Just for the record: I don’t think this means asset-led pieces are “better” or “worse” than pieces without assets; I just think they perform differently.

Pieces without assets seem to “fail” at a higher rate:

5% of asset-led pieces generated zero pieces of linked coverage


31% of pieces without assets generated zero pieces of linked coverage

& they are seemingly less likely to generate 100+ pieces of linked coverage:

8% of asset-led pieces generated 100+ pieces of linked coverage


1% of pieces without assets generated 100+ pieces of linked coverage

But pieces without assets are also typically quicker (& therefore cheaper) to produce, and, as such, I don’t think this is a sensible comparison to make. I think both have their place, so if at all possible, I would recommend launching both asset-led pieces & pieces without assets.
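If you want to sanity-check the rolled-up figures quoted later in this post (60% of asset-led pieces, and 20% of pieces without assets, generating 10+ pieces of linked coverage), they fall straight out of the bucket percentages in the tables above. A minimal Python sketch, using the published bucket values:

```python
# Bucket percentages taken directly from the benchmark tables above.
asset_led = {"0": 5, "1-9": 35, "10-29": 31, "30-49": 11, "50-99": 10, "100+": 8}
no_asset = {"0": 31, "1-9": 49, "10-29": 13, "30-49": 5, "50-99": 1, "100+": 1}

def share_with_10_plus(buckets):
    """Sum the share of pieces landing in the 10-or-more coverage buckets."""
    return sum(pct for label, pct in buckets.items() if label not in ("0", "1-9"))

print(share_with_10_plus(asset_led))  # 60
print(share_with_10_plus(no_asset))   # 20
```

The same rollup works for the “failure” comparison: the 0-coverage bucket alone gives you 5% vs 31%.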

Final Thoughts

I said that our barometer for the actual success-rate of Digital PR is broken – if we’re launching things with the expectation that they’ll achieve 100+ pieces of linked coverage, then we’re going to be disappointed a lot of the time.

But that’s not the only direction in which our expectations are unrealistic: in most of the organisations I’ve worked with, something like this has been the goal:

Every digital PR piece should generate a minimum of

10 pieces of linked coverage

Possibly you think this sounds reasonable… But I’m here to tell you it really isn’t. I think the notion that this goal is achievable is perhaps the most pernicious misconception of all.

Just as a reminder:

According to the data I gathered,

only 60% of asset-led digital PR pieces,

& just 20% of pieces without assets

generate 10+ pieces of linked coverage

Our goal is 100%

Effectively, our goal is never to fail, which is utterly unrealistic.

Why are we subjecting ourselves to these impossible standards?

I don’t feel like other disciplines do. For example, technical SEOs don’t expect that every change they implement will deliver a measurable impact.

I asked a friend, who is a very experienced technical SEO, this question: “What proportion of the technical SEO changes you’ve made delivered an impact?”

Here’s how they responded:

About 60% of what I implement now delivers a measurable impact

About 30% of what I implement is future-proofing (I don’t expect it to deliver a measurable impact, I’m doing it to reduce the risk of losing visibility in the future)

About 10% of what I implement, I expect will work, but actually fails to deliver a measurable impact

This person also asked me to highlight:

This is what I’m experiencing now,
earlier in my career probably only 40% of the changes I implemented delivered a measurable impact

Another of my friends said:

It depends on the types of sites you’re working on…

The site I’m working on right now is technically sound & so
most of the tech changes I implement are future-proofing

I focus more on content projects because that’s what delivers impact

I asked a bunch of people this question, and everyone said:

The technical changes I implement,
which I expect to work,
don’t always deliver an impact

All technical SEOs experience failure, just like digital PRs.

Right now, I imagine that some of you might be thinking: but a single technical SEO change is cheaper to implement than a single digital PR piece!

To that, I’d say: well yes, sometimes a single technical SEO change might incur a negligible cost, but that’s often not the case; plus the entire program of SEO activity is likely to incur significant costs.

Nevertheless, I’m not suggesting that a single digital PR piece and a single technical SEO change are strictly comparable; obviously they aren’t the same. They do have one thing in common though: they both have a reasonably high chance of failure.

The difference is, we don’t expect every technical SEO change we implement to have an impact: we expect some level of failure; which begs the question: why aren’t we expecting some level of failure for digital PR too?

It’s unrealistic to expect that every digital PR piece we launch will generate 10+ pieces of linked coverage, just like it would be unrealistic to expect that every technical SEO change we make will have a positive impact.

I firmly believe that we need to end our obsession with the results of individual digital PR pieces – it’s fuelling unrealistic expectations, and causing unrealistic goals to be set. Moreover, it’s incredibly difficult to tie meaningful business results to a single piece anyway.
(Even those pieces which generate 100+ pieces of linked coverage).

Rather than focusing on the results of individual pieces, much like technical SEO teams, we should be assessing the results we generate over a number of pieces, or a program of activity.

Most of all, we need to stop setting goals like this:

Every digital PR piece we launch should generate 10+ pieces of linked coverage

For clarity, I’m not saying you shouldn’t use something like this as a metric:

If a digital PR piece we create generates 10+ pieces of linked coverage we’ll consider it a success

And then have a goal like:

xx% of the digital PR pieces we create will generate 10+ pieces of linked coverage

I’m just saying that this absolutely shouldn’t be your goal:

100% of the digital PR pieces we create will generate 10+ pieces of linked coverage

I’ve been doing this stuff for more than a decade, and I still launch pieces that get no linked coverage. Don’t get me wrong, I don’t love it when that happens, but I’ve got to a place where I’m ok with it.

Because it’s not possible to eradicate failure.

Next time you launch a digital PR piece that generates no linked coverage, I’m hoping that you’ll remember that this stuff happens to everyone, and also that it happens more often than you probably think.

Want to see the deck in full? Here you go friends:

I want to take this opportunity to say THANK YOU to the amazing agencies and inhouse teams who generously shared all their data with me:
Aira, iTech Media, Kaizen, MacNaught Digital, NeoMam Studios, Propellernet, Search Intelligence, Seeker Digital, Shout Bravo, Verve Search, & Yard.

This talk would not have been possible without the feedback, love, and support of these wonderful humans – it takes a village, huh? I am beyond grateful, and feel very lucky to have you in my life:
Areej AbuAli, Gisele Navarro, Kirsty Hulse, Laura Crimmons, Lidia Infante, Sean Fitzsimons, Shannon McGuirk, Stacey MacNaught, Surena Chande, and Will Critchlow.

If you liked this post, you might also like my newsletter – sign up here.

Got questions, thoughts, or feelings about anything I’ve shared here? Drop me an email, or leave a comment below.
