Monitoring & Reporting | Intermediate | 30 minutes

Campaign Performance Review

Evaluate what your campaign delivered, why it performed as it did, and what you should do differently next time – in a structured format that's useful for both internal learning and stakeholder reporting.

Version 1.0 | Updated 9 March 2026

What it is

The Campaign Performance Review is a structured end-of-campaign evaluation that goes beyond a simple metrics summary. It captures what the campaign set out to achieve, what it actually delivered, the reasons behind that performance, and clear recommendations for the future.

This template produces two things in one: a learning document for your team, and a stakeholder-ready summary of what the campaign delivered. By separating what happened from why it happened, it builds the analytical habits that improve communications over time – rather than just reporting numbers and moving on.

When to use it

Use this template when:

  • A discrete campaign has reached its planned end date
  • You’re presenting campaign results to leadership, a board, or a client
  • You’re preparing a budget submission or resource request and need evidence of past performance
  • You want to create an internal record before institutional knowledge walks out the door
  • A campaign significantly over- or under-performed and you need to understand why

Don’t use this template when:

  • You’re tracking an always-on programme rather than a discrete campaign (use the Monthly Stakeholder Update or Simple Comms Dashboard instead)
  • You’re mid-campaign and want to course-correct in real time (the Insights to Actions Template is better for that)
  • You don’t yet have access to the underlying data – wait until you do

Inputs needed

Before starting, gather:

  • Original campaign brief, including stated objectives and KPIs
  • Analytics exports from all active channels for the campaign period (see the consolidation sketch after this list)
  • Comparison data (previous equivalent campaign, or agreed benchmark)
  • Any qualitative signals: comments, survey responses, stakeholder feedback, media tone
  • Media coverage report if earned media was part of the mix
  • Notes from anyone who ran the campaign – what felt like it worked, what caused problems
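
If your channel exports arrive as separate spreadsheets, it helps to consolidate them into one table before you start interpreting. A minimal Python sketch, assuming each export is a CSV with impressions and clicks columns – the file names, column names, and channels here are hypothetical, so match them to whatever your platforms actually produce:

import pandas as pd

# Hypothetical per-channel exports – replace with your real files.
channel_exports = {
    "Email": "email_export.csv",
    "LinkedIn": "linkedin_export.csv",
}

frames = []
for channel, path in channel_exports.items():
    df = pd.read_csv(path)
    df["channel"] = channel  # tag each row with its source channel
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)

# One row per channel with the totals you'll paste into the template.
summary = combined.groupby("channel")[["impressions", "clicks"]].sum()
summary["click_through_rate"] = summary["clicks"] / summary["impressions"]
print(summary)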

The template

Campaign summary

Field | Details
Campaign name |
Campaign period |
Campaign objective(s) |
Target audience(s) |
Channels used |
Budget spent |
Reviewed by |
Date of review |

Objectives and outcomes

For each objective set at the outset, record what was targeted and what was delivered:

Objective | KPI / metric | Target | Actual | Variance | Met? (Yes / Partially / No)
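
If you're filling in the Variance and Met? columns from raw numbers, variance is simply (actual − target) ÷ target. A minimal Python sketch of that arithmetic – the objectives are taken from the example output later in this page, and the 10% threshold for "Partially" is an illustrative assumption, not a standard:

# Illustrative objectives (targets and actuals from the example output below).
objectives = [
    {"kpi": "Renewal rate (%)", "target": 75, "actual": 71},
    {"kpi": "Early renewal conversions", "target": 200, "actual": 312},
]

for obj in objectives:
    variance = (obj["actual"] - obj["target"]) / obj["target"]
    # Assumed rule: any shortfall within 10% of target counts as "Partially".
    met = "Yes" if variance >= 0 else "Partially" if variance >= -0.10 else "No"
    print(f"{obj['kpi']}: target {obj['target']}, actual {obj['actual']}, "
          f"variance {variance:+.1%}, met: {met}")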

Overall assessment:

  • Campaign delivered on all primary objectives
  • Campaign delivered on most primary objectives with some gaps
  • Campaign partially delivered – significant gaps against objectives
  • Campaign did not deliver against primary objectives

Channel performance

Channel | Key metric | Target | Actual | vs. Benchmark | Notes

Top-performing channel: _______ (reason: _______)

Lowest-performing channel: _______ (reason: _______)


Audience response

Signal | Observation
Sentiment | Positive / Mixed / Negative – brief evidence
Most engaged content piece | Title + metric
Most shared / amplified content | Title + metric
Audience comments or feedback themes | Key themes from comments, survey responses, inbound
Unexpected audience behaviour | Anything that surprised you

Content performance

Content piece | Channel | Reach / Views | Engagement | Click-through | Verdict (Strong / Average / Weak)

What content format worked best? (e.g., video, long-form, infographic, short copy)

What content format underperformed?


Timing and pacing

Question | Answer
Did the campaign launch on time? | Yes / No – if no, what happened?
Was there a point in the campaign where momentum dropped? | Yes / No – when and why?
Was the campaign period the right length? | Too long / About right / Too short
Were there external timing factors that helped or hindered? |

What worked

List the three to five things that most contributed to positive performance. Be specific – not “good content” but what specifically about the content worked and why.

What worked | Why it worked | Replicate next time? (Yes / Adapt / No)

What didn’t work

List the three to five things that underperformed or caused friction. Be honest – this section is where the learning lives.

What didn’t work | Why (hypothesis) | Fix next time? (Yes / Remove / Investigate further)

Recommendations

Based on this review, what are the specific actions for the next campaign?

Recommendation | Priority (High / Medium / Low) | Owner | Deadline

Single most important change for next time:


One-paragraph executive summary

Write this last, using the data above. Keep to 100 words maximum. This is what you share with people who won’t read the full review.

[Campaign name] ran from [dates] with the objective of [objective]. Against our primary KPI of [KPI], we delivered [result] against a target of [target]. Key drivers of performance were [2–3 factors]. The campaign underperformed in [area] because [brief reason]. For the next campaign, the priority change is [one recommendation]. Overall assessment: [Strong / On-target / Below target / Significantly below target].


AI prompt

Base prompt

I've just completed a communications campaign and want to write a performance review.

**Campaign basics:**
- Name: [CAMPAIGN NAME]
- Dates: [START] to [END]
- Objectives: [LIST OBJECTIVES]
- Channels: [LIST CHANNELS]
- Target audience: [AUDIENCE]

**Performance data:**
[PASTE YOUR METRICS – reach, engagement, click-through, conversions, media coverage, etc.]

**Benchmark / target:**
[PASTE TARGETS OR PREVIOUS CAMPAIGN BENCHMARKS]

**Qualitative observations:**
[INCLUDE ANY FEEDBACK, COMMENTS, STAKEHOLDER REACTIONS]

Please help me:
1. Assess performance against each objective – clearly stating what was met, missed, or exceeded
2. Identify the two or three factors that most drove performance (positively and negatively)
3. Highlight any patterns in the data worth drawing attention to
4. Draft three specific, actionable recommendations for the next campaign

Format your response as a structured performance review with clear section headers.

Prompt variations

Variation 1: Stakeholder summary

Based on this campaign performance data:
[PASTE KEY METRICS]

Objectives were:
[LIST OBJECTIVES AND RESULTS]

Please write a 150-word executive summary of campaign performance suitable for a board or senior leadership audience. Tone should be clear and direct – acknowledge what didn't meet target and explain why, without being defensive. End with one forward-looking recommendation.

Variation 2: Root cause analysis

Our campaign delivered [RESULT] against a target of [TARGET] – significantly [below / above] what we planned.

Here's what we know about performance by channel and content type:
[PASTE DATA]

Help me work through the likely causes. For each hypothesis, rate how probable it is (High / Medium / Low) and what evidence would confirm or rule it out. I want to identify the root cause, not just the symptoms.

Variation 3: Learning extraction

I want to extract the most useful learning from this campaign for future planning.

What we tried: [BRIEF CAMPAIGN DESCRIPTION]
What the data shows: [KEY METRICS]
What surprised us: [UNEXPECTED RESULTS OR BEHAVIOURS]

Please help me:
1. Identify which findings are likely to be repeatable insights (vs. one-off results specific to this campaign)
2. Frame three learning statements in the format: "When we [did X], [result], which suggests [implication for future]"
3. Flag any areas where the data is too limited to draw confident conclusions

Variation 4: Budget justification

I need to present campaign ROI to justify the communications budget for [NEXT PERIOD].

Campaign investment: [BUDGET]
Campaign results: [KEY OUTCOMES – reach, leads, coverage, etc.]
Business context: [ANY RELEVANT BUSINESS OUTCOMES THAT FOLLOWED]

Help me frame the value delivered in a way that connects communications outputs to business outcomes. Include both quantitative evidence and qualitative value where hard numbers are limited. Anticipate and address the likely sceptical questions a finance audience would ask.

Tips for better AI output:

  • Paste actual numbers rather than describing them – the analysis will be sharper
  • Include your original targets alongside actuals; without them, the AI can’t assess whether results were good or not
  • Specify your audience for the review (internal team vs. board vs. client) and the AI will adjust tone accordingly
  • If you had unexpected results in either direction, say so explicitly – that’s often where the most useful analysis lives

Human review checklist

  • Completeness – Have you included data from all channels used in the campaign, not just the ones that performed well?
  • Objective alignment – Are you measuring against the objectives set at the outset, not retrofitting objectives to match what you delivered?
  • Honesty – Does the review accurately represent underperformance, with explanation? A review that only highlights positives has limited value
  • Causation vs. correlation – Have you distinguished between things that correlated with good performance and things that caused it?
  • External factors – Have you accounted for things outside your control (competitor activity, news cycles, platform algorithm changes) that affected results?
  • Actionability – Are your recommendations specific enough that someone could act on them? “Improve content quality” is not actionable; “test shorter video formats under 60 seconds” is
  • Audience appropriateness – If sharing with senior stakeholders, have you front-loaded the executive summary and buried the detail?
  • Data accuracy – Have you sense-checked your numbers against platform analytics directly, not from memory or secondhand sources?

Example output

Campaign: Annual membership renewal drive for a professional association
Period: October–November 2025 | Budget: £12,000

Objectives vs. outcomes:

Objective | Target | Actual | Met?
Renewal rate | 75% | 71% | Partially
Email open rate | 28% | 34% | Yes
Early renewal conversions | 200 | 312 | Yes (exceeded)
Lapsed member reactivation | 50 | 23 | No

What worked: Personalised subject lines (34% open rate vs. 22% industry average). Early-bird incentive drove 312 early renewals – 56% above target. SMS reminder in week 3 generated a spike in same-day renewals.

What didn’t work: Lapsed member reactivation significantly underperformed. The re-engagement email sequence used the same tone and content as active-member comms – no acknowledgement of the lapse or a reason to return.

Top recommendation: Develop a separate lapsed-member sequence with distinct messaging that acknowledges the gap and addresses the most common reasons for non-renewal, identified through exit survey data.

Note: This is an illustrative example. Your review will reflect your specific campaign and context.



Tips for success

Set the review date when you set the campaign launch date
The most common reason campaign reviews don’t happen is that there’s no agreed time to do them. Block the review session in diaries when you set the campaign calendar, before anyone moves on to the next thing.

Complete the review before you brief the next campaign
It’s tempting to start planning the next campaign immediately. Resist. The performance review should be the first input to the next brief – not an afterthought completed months later when context has faded.

Separate data collection from interpretation
Don’t try to gather metrics and draw conclusions simultaneously. Pull the numbers first, sit with them, then interpret. This reduces the risk of unconsciously cherry-picking data to support a narrative you’ve already decided on.

Ask “why” at least three times
The first answer is usually a symptom. “Engagement was low” → why? “The content wasn’t resonating” → why? “We used the same format as last time without testing whether the audience still responds to it” → now you have something actionable.

Protect the honesty of the review
Reviews presented to senior stakeholders have a natural pull towards good news. Be explicit with yourself and your team about which sections are for external sharing and which are internal learning. The learning sections need to be honest to be useful – even if the executive summary is more curated.


Common pitfalls

Measuring what’s easy to measure, not what matters
Reach and impressions are easy to pull. Behaviour change, perception shift, and decision influence are harder to measure but are often what the campaign was actually trying to achieve. Be honest about what you can and can’t measure, and don’t substitute vanity metrics for meaningful ones.

The survivorship bias trap
Reviewing a campaign by focusing only on what worked is like a pilot who only studies successful landings. The most important lessons are often in the underperformance. Give equal time and rigour to the “what didn’t work” section.

Attributing too much to the campaign
Campaigns don’t operate in a vacuum. A spike in website traffic might be driven by a media story, a competitor’s PR crisis, or a seasonal pattern – not your LinkedIn posts. Always check for confounding factors before attributing results to campaign activity.

Generic recommendations
“Do more video” or “improve targeting” are not recommendations – they’re placeholders. A useful recommendation names the specific change, explains why it’s expected to help, and identifies who will act on it. If you can’t write it specifically, you haven’t finished the analysis yet.

Burying the headline
Senior stakeholders will not read a 15-page report. They’ll read the executive summary. Write the summary last, but present it first. Make sure it accurately represents the detail – not just the highlights.


Need help building a measurement framework that makes campaign reviews easier from the start? Faur works with organisations to develop communications measurement approaches that connect activity to outcomes.
