Earned Media Report
A structured monthly or campaign-period analysis of PR and earned media coverage — what ran, where, with what tone and messaging accuracy, and what it signals about your media relations effectiveness.
What it is
The Earned Media Report provides a structured, consistent way to analyse and report on press and media coverage. It goes beyond a simple clippings collection to ask the questions that actually matter: Are our key messages being reflected in coverage? Are we reaching the right outlets and audiences? Is coverage increasing or decreasing? What’s the quality and tone of what’s being written about us?
This template treats earned media as something to be measured and managed, not just accumulated. Volume matters, but message accuracy, outlet quality, and audience reach matter more. A single well-placed feature in a tier-one publication that accurately represents your positioning is worth more than ten passing mentions in low-relevance outlets.
It is designed for a regular monthly cadence but works equally well as a post-campaign review of earned media specifically. When PR is a primary channel that warrants its own analysis, use it alongside the broader Campaign Performance Review, which covers all channels.
When to use it
Use this template when:
- You run an active press and media relations programme and need to report on it regularly
- You want to assess whether your key messages are being picked up and reflected in coverage
- You’re presenting earned media results to a client, leadership, or board
- You need to demonstrate the value of PR investment with structured evidence
- You’re assessing media relationship health — which outlets and journalists are engaged vs. dormant
Don’t use this template when:
- You’re tracking all channels including owned and paid (use Campaign Performance Review or Simple Comms Dashboard instead)
- You want day-to-day media monitoring (use the Weekly Monitoring Brief for that)
- You have no active media relations programme and coverage is purely incidental
- You only had one or two pieces of coverage in the period (too light to warrant a full report; include in a weekly brief instead)
Inputs needed
- Full media coverage report from a monitoring tool, covering the reporting period
- Your defined target media list for this period (which outlets were you trying to reach?)
- The key messages or narratives you were pushing during this period
- Comparison data from previous equivalent period (or established baseline)
- Any press releases, pitch activity, journalist briefings, or embargoed releases from the period
- Optional: estimated audience reach or circulation data per outlet
The template
Earned Media Report
Period: [Month and year, or campaign name and dates]
Prepared by: [Name]
Organisation: [Name]
Distribution: [Who receives this report]
Executive summary
| Metric | This period | Previous period | Change |
|---|---|---|---|
| Total pieces of coverage | | | ↑/→/↓ |
| Tier 1 outlet coverage | | | ↑/→/↓ |
| Positive/neutral coverage % | | | ↑/→/↓ |
| Negative coverage % | | | ↑/→/↓ |
| Key message pickup rate | | | ↑/→/↓ |
| Estimated total reach | | | ↑/→/↓ |
Summary statement: [2–3 sentences. What was the overall picture this period? Any significant shifts? What drove performance?]
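If your monitoring tool exports metric values per period, the change column can be computed rather than eyeballed. A minimal Python sketch; the function name and arrow formatting are illustrative, not part of the template:

```python
# Illustrative sketch: derive the "Change" column of the executive summary
# from this period's and the previous period's metric values.
def change_indicator(current: float, previous: float) -> str:
    """Return an arrow plus a percentage change, as used in the summary table."""
    if previous == 0:
        return "n/a (no baseline)"  # avoid dividing by a zero baseline
    pct = (current - previous) / previous * 100
    arrow = "↑" if pct > 0 else ("↓" if pct < 0 else "→")
    return f"{arrow} {pct:+.0f}%"

# Example: 34 pieces of coverage this month vs 27 last month
print(change_indicator(34, 27))  # → ↑ +26%
```

Keep the calculation identical every period so the trend line stays comparable across reports.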
Coverage breakdown
By tier/outlet type:
| Outlet tier | Volume | % of total | Change vs previous period |
|---|---|---|---|
| Tier 1 (national/international) | | | |
| Tier 2 (sector/trade press) | | | |
| Tier 3 (regional/niche) | | | |
| Online/digital only | | | |
| Broadcast | | | |
| Total | | 100% | |
By tone:
| Tone | Volume | % of total |
|---|---|---|
| Positive | | |
| Neutral/factual | | |
| Mixed | | |
| Negative | | |
Notes on tone: [Any notable patterns in what’s driving positive or negative coverage]
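When the coverage list is available as structured data, both breakdown tables above are simple frequency counts. A minimal Python sketch, assuming hypothetical field names (`tier`, `tone`) from a monitoring export:

```python
from collections import Counter

# Illustrative coverage list; in practice, load this from your monitoring
# tool's export. The outlets and field names here are assumptions.
coverage = [
    {"outlet": "Financial Times", "tier": "Tier 1", "tone": "Positive"},
    {"outlet": "Trade Weekly", "tier": "Tier 2", "tone": "Neutral"},
    {"outlet": "Regional Post", "tier": "Tier 3", "tone": "Positive"},
]

def breakdown(pieces, field):
    """Volume and % of total per category, mirroring the tables above."""
    counts = Counter(p[field] for p in pieces)
    total = sum(counts.values())
    return {k: (v, round(100 * v / total)) for k, v in counts.items()}

print(breakdown(coverage, "tone"))
# → {'Positive': (2, 67), 'Neutral': (1, 33)}
```

The same function produces the tier table (`breakdown(coverage, "tier")`), which keeps the two breakdowns consistent with each other.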
Top coverage this period
List the five to ten most significant pieces of coverage, prioritising tier and reach over volume.
| Publication | Headline/story | Tier | Tone | Reach | Key messages included | Date |
|---|---|---|---|---|---|---|
| | | | | | Yes / Partial / No | |
Most valuable piece of coverage: [Publication and headline] — because: [Why this piece matters — reach, accuracy, tier, audience, or context]
Message pickup analysis
For each key message you were pushing this period, assess how well it was reflected in coverage.
| Key message | Coverage reflecting this message | Pickup rate | Accuracy of representation | Notes |
|---|---|---|---|---|
| [Message 1] | [# pieces] | High / Medium / Low | Accurate / Partial / Absent | |
| [Message 2] | [# pieces] | High / Medium / Low | Accurate / Partial / Absent | |
| [Message 3] | [# pieces] | High / Medium / Low | Accurate / Partial / Absent |
Best-landing message: [Which message was picked up most accurately and why]
Worst-landing message: [Which message failed to appear in coverage or was misrepresented, and likely reasons]
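A first-pass pickup count can be automated by searching coverage text for phrases associated with each message; accuracy of representation still has to be judged by reading the pieces. A minimal Python sketch with invented example texts and phrase lists:

```python
# Illustrative sketch: count how many pieces mention any phrase tied to a
# key message. The texts and phrases below are invented examples.
coverage_texts = [
    "Nexus says the technology is cost-competitive at scale.",
    "The firm announced new UK manufacturing jobs this quarter.",
    "A brief mention of Nexus in a sector round-up.",
]

def pickup_rate(texts, phrases):
    """Pieces containing any associated phrase, as a count and a percentage."""
    hits = sum(any(p.lower() in t.lower() for p in phrases) for t in texts)
    return hits, round(100 * hits / len(texts))

count, rate = pickup_rate(coverage_texts, ["cost-competitive", "cost reduction"])
print(f"Cost message: {count}/{len(coverage_texts)} pieces ({rate}%)")
# → Cost message: 1/3 pieces (33%)
```

Treat the output as a starting point for the table above, not a substitute for reading the coverage: a verbatim quote and a passing paraphrase score identically here.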
Journalist and outlet tracker
Active journalists this period (those who covered us):
| Journalist | Publication | Stories | Tone | Relationship status | Notes |
|---|---|---|---|---|---|
| | | | | Active / Warm / New / Lapsed | |
| | | | | | |
Target outlets with no coverage this period:
| Outlet | Target tier | Last coverage | Action needed |
|---|---|---|---|
| | | | Pitch / Re-engage / Remove from list |
Proactive vs. reactive coverage
| Coverage type | Volume | % of total |
|---|---|---|
| Proactive (driven by our activity: press releases, pitches, briefings) | | |
| Reactive (journalist-initiated, in response to news or sector events) | | |
| Earned/unsolicited (mentions without direct outreach) | | |
Interpretation: [What does the proactive/reactive split tell us? Are we driving coverage or simply responding to events? Is this the right balance for this period?]
Issues and concerns
Negative coverage requiring monitoring:
- [Story/outlet]: [Brief description of what was said and why it’s a concern]
- [Story/outlet]: [Brief description]
Corrections or inaccuracies:
- [Any incorrect reporting that needs addressing — and whether/how it was corrected]
Journalists or outlets to handle carefully:
- [Any relationships requiring attention or approach adjustments]
Recommendations
For next period:
| Recommendation | Priority | Owner |
|---|---|---|
| | High / Medium / Low | |
Pitches or stories to pursue: [2–3 story angles or pitch opportunities identified from this period’s coverage patterns]
Relationship actions: [Which journalists or outlets need proactive attention — briefings, coffees, exclusives]
AI prompt
Base prompt
I need to analyse and report on our earned media coverage for [PERIOD].
Organisation: [NAME AND BRIEF DESCRIPTION]
Sector: [SECTOR]
Coverage data for this period:
[PASTE: your media monitoring export, list of articles with outlets, dates, and brief descriptions]
Key messages we were pushing this period:
1. [Message 1]
2. [Message 2]
3. [Message 3]
Comparison baseline (previous period):
[PASTE OR DESCRIBE: previous period volume and tone summary]
Please:
1. Assess the overall quality and reach of coverage — not just volume, but outlet tier and tone
2. Evaluate how well our key messages were reflected in coverage (what landed, what didn't)
3. Identify which outlets and journalists were most valuable this period
4. Flag any concerning coverage or patterns we should address
5. Recommend three priority actions for next period's media relations
Format this as a professional earned media report suitable for presenting to senior stakeholders.
Prompt variations
Variation 1: Message pickup analysis
I want to understand how accurately our key messages are being reflected in media coverage.
Our three key messages this period were:
1. [Message 1 — paste full wording]
2. [Message 2 — paste full wording]
3. [Message 3 — paste full wording]
Here are the main pieces of coverage from this period:
[PASTE: articles, headlines, and key quotes from coverage]
Please analyse:
1. Which messages are appearing in coverage and in what form — verbatim, paraphrased, or implied?
2. Which messages are being omitted, misrepresented, or contradicted?
3. What language are journalists actually using about us (which may differ from our language)?
4. What does this suggest about which messages are "sticky" vs. which need to be reworked?
5. Draft revised wording for any messages that are consistently landing incorrectly.
Variation 2: Post-announcement coverage review
We made an announcement on [DATE]: [DESCRIBE ANNOUNCEMENT]
Here's the coverage it generated:
[PASTE: articles, social commentary, analyst responses]
Our goals for this announcement were:
[DESCRIBE: what messages you wanted picked up, what audiences you were targeting, what outcomes you hoped for]
Please assess:
1. Did the announcement achieve the coverage goals? Be honest about gaps.
2. Which aspects of our messaging were picked up most faithfully?
3. What angles did journalists take that we didn't anticipate?
4. What does the coverage tell us about how the market perceives this news?
5. If we were to do this again, what would we do differently in our media approach?
Variation 3: Client reporting summary
I'm writing a monthly earned media report for a client. Here's the raw data:
Client: [NAME AND SECTOR]
Coverage this month: [PASTE COVERAGE LIST]
Key messages being pursued: [LIST]
Previous month's summary for comparison: [PASTE]
Please draft a 400-word executive summary of this month's earned media performance suitable for a client report. Tone should be professional and analytical — acknowledge where performance fell short, explain why, and recommend specific next steps. Avoid PR agency spin.
Variation 4: Outlet quality assessment
I want to assess the quality of our media coverage, not just the volume.
Here's our coverage list:
[PASTE: outlet names and article counts]
Please:
1. Categorise these outlets by tier (national/international, trade/sector, regional/niche, online-only)
2. Assess which outlets represent the most strategically valuable coverage for an organisation in [SECTOR]
3. Identify which outlets we're over-represented in versus under-represented in
4. Suggest 5 outlets we should be targeting where we're currently absent
Base your tier assessment on audience size, sector relevance, and credibility rather than just publication age or prestige.
Human review checklist
- Tier classification consistent: Outlet tiers are applied consistently across the report and match previous periods (no inflation of tier counts)
- Tone assessment objective: Positive/negative/neutral assessments reflect the actual content, not how we feel about the outlet
- Message pickup fairly assessed: A message that appeared as a brief mention isn’t counted the same as a piece built around our narrative
- Comparison is like-for-like: We’re comparing the same monitoring scope, outlet list, and period length as last time
- Negative coverage not buried: Issues and concerns are prominently visible, not tucked at the end in vague language
- Recommendations are specific: Actions name the outlet, the journalist, or the story — not just “do more proactive media”
- Attribution is accurate: Coverage claimed as “driven by our activity” genuinely was — not just coincident with it
- Reach estimates are consistent: We’re using the same source for audience/circulation data across all outlets
- Relationship status current: The journalist tracker reflects current relationship status, not wishful history
- Executive summary matches detail: The headline numbers in the summary are consistent with the data in the body of the report
Example output
Earned Media Report — March 2026
Prepared by: Communications team | Client: Nexus Energy Solutions
Executive summary
| Metric | March | February | Change |
|---|---|---|---|
| Total coverage | 34 | 27 | ↑ +26% |
| Tier 1 pieces | 6 | 3 | ↑ +100% |
| Positive/neutral % | 88% | 76% | ↑ |
| Key message pickup (avg) | 67% | 52% | ↑ |
| Estimated reach | 4.2m | 2.8m | ↑ |
March was the strongest month in Q1, driven primarily by the solar capacity announcement on 12 March, which generated six tier-one pieces including a feature in the Financial Times. Key message pickup improved significantly — the “cost reduction” message is now landing consistently; the “grid stability” message remains under-represented.
Top coverage
| Publication | Story | Tier | Tone | Reach | Key messages |
|---|---|---|---|---|---|
| Financial Times | “Nexus doubles solar capacity in UK expansion push” | 1 | Positive | 1.2m | Yes (all three) |
| The Guardian | “Renewables firm flags planning delays as growth risk” | 1 | Mixed | 890k | Partial |
| New Civil Engineer | “Case study: Nexus grid integration approach” | 2 | Positive | 85k | Yes |
Message pickup
| Message | Pickup rate | Accuracy |
|---|---|---|
| “Cost-competitive at scale” | High — 22/34 pieces | Accurate |
| “Grid stability leadership” | Low — 4/34 pieces | Partial — often reduced to “grid improvements” |
| “UK manufacturing jobs” | Medium — 11/34 pieces | Accurate where used |
The “grid stability” message needs reworking — it’s technically accurate but failing to generate editorial interest. Recommend testing a more human-scale framing: “powering X homes reliably” rather than “grid stability contribution.”
Related templates
- Weekly Monitoring Brief — For day-to-day media tracking; this template aggregates and analyses what those briefs capture
- Monthly Stakeholder Update — Share a condensed version of earned media highlights with senior stakeholders
- Campaign Performance Review — For full multi-channel campaign analysis; this template covers earned media specifically
- Media Pitch Builder — Use when this report identifies message or angle gaps that require new pitches
- Competitive Intelligence Monitor — Run alongside this template to understand how your coverage compares to competitors’
Tips for success
Quality over quantity
A 600-word feature in a sector-leading trade publication that accurately reflects your messaging is worth 20 brief mentions in tangential outlets. Build a tier system for your media list and weight your assessment accordingly. Your stakeholders will understand one high-quality piece better than a large clipping count.
Track message pickup, not just coverage
Most PR reports celebrate volume. The more useful discipline is message accuracy: are journalists using your language, your narrative framing, your proof points? Or are they building different stories from your announcements? The gap between what you said and what ran is where communications improvement lives.
Maintain a live journalist relationship log
The best earned media programmes are built on relationships. Treat the journalist tracker as a living document: note who covered you positively, who covered you critically, who’s been pitched but hasn’t run anything, who’s new to your beat. This context makes pitching smarter and faster.
Separate what you caused from what happened to you
Some coverage will appear because you generated it. Some will appear because you were caught in a sector story. Some will appear because a journalist found you independently. Distinguishing between these is important — proactive media relations that generates sustained quality coverage is a different capability from simply having a good news flow.
Report on what didn’t happen too
If you were targeting three tier-one outlets and only landed one, say so and explain why. If a product launch generated no media interest, that’s significant data. Honest reporting on gaps builds more credibility with stakeholders than a curated account of successes.
Common pitfalls
Measuring AVE (advertising value equivalent)
AVE — the practice of calculating what the same space would have cost as advertising — has been widely discredited. It conflates earned media with paid media, over-values coverage in high-rate publications, and implies equivalence that doesn’t exist. Use reach, message pickup, and tier as your quality indicators instead.
Not reading the actual coverage
It’s easy to generate a report from monitoring tool exports without reading what was written. Sentiment categorisation tools are imperfect. Message pickup can only be assessed by reading the article. Spend time with the actual coverage before drawing conclusions.
Treating all positive coverage as equal
Positive coverage in an outlet your target audience doesn’t read has limited value. A glowing profile in a regional business paper is nice, but if your audience is national policy makers or FTSE 100 procurement teams, that outlet isn’t serving your communications objective. Value coverage by audience relevance, not just tone.
The “what ran” trap
Good PR is as much about what didn’t run — a crisis story that was killed, a competitor attack that didn’t get traction, negative speculation that died — as what did. Include in your report instances where good media relations prevented negative coverage, even if this is harder to evidence.
Only reporting when it’s good
If you only produce earned media reports after strong months, you lose the trend data that reveals what’s actually working. The report has most value as a consistent, recurring discipline — including the quiet months that reveal when the media relations programme needs attention.