Sentiment Deep Dive
A structured framework for going beyond surface-level sentiment scores to understand why sentiment is moving, who is driving it, and what it means for how you communicate — used when a shift or pattern demands deeper analysis.
What it is
Sentiment scores are useful as a signal. They’re not useful as an explanation. When your weekly brief shows sentiment declining, or when positive sentiment appears to be building around something specific, the score tells you something changed — but not what changed, why, or whether it requires action.
The Sentiment Deep Dive is the analytical tool you reach for when a sentiment signal requires investigation. It moves you through four questions in sequence: What is the sentiment picture? Who is driving it? Why is it happening? What should we do about it?
It draws heavily on qualitative analysis — the actual words and phrases people are using, the narratives being constructed, the specific grievances or endorsements being expressed — rather than treating sentiment as a purely quantitative measure. A 7% decline in sentiment is not inherently more important than three influential voices shifting how they talk about you. Context, source quality, and narrative content matter as much as the number.
This template is deliberately triggered rather than scheduled. You don’t need to complete a sentiment deep dive every month. You need it when your monitoring surfaces a pattern that the weekly brief can’t adequately explain.
When to use it
Use this template when:
- Your weekly monitoring brief flags a sentiment shift that isn’t explained by an obvious single event
- Sentiment has been declining consistently over 2–4+ weeks rather than showing a one-off spike
- A previously positive narrative appears to be weakening without clear cause
- Different audience segments are showing divergent sentiment and you need to understand why
- Leadership asks you to explain why public perception seems to be shifting
- You’re considering a messaging change and want to ground it in evidence rather than intuition
Don’t use this template when:
- You’re in active crisis and need rapid response (use your crisis playbook — you don’t have time for deep analysis)
- You’ve had a single bad day of coverage with obvious cause (a one-off event, a negative review)
- Your monitoring data is too thin to draw meaningful conclusions (too few mentions, too short a period)
- You need the sentiment score for a routine report (the weekly brief handles that)
Inputs needed
- At least 2–4 weeks of sentiment data, ideally longer for trend analysis
- Platform-level breakdown if available (LinkedIn vs. Twitter/X vs. media vs. other)
- A meaningful sample of actual content — don’t rely on the score alone; read what people are saying
- Context log: what communications activity, external events, or business announcements occurred in this period
- Previous period or baseline for comparison
- Audience segmentation data if available (customer vs. employee vs. investor vs. general public)
The template
Sentiment Deep Dive
Analysis period: [Start date] to [End date]
Topic/focus: [What sentiment are we analysing — organisation overall, a specific issue, a campaign, a spokesperson?]
Completed by: [Name]
Triggered by: [What prompted this analysis — monitoring alert, leadership question, weekly brief flag]
1. The sentiment picture
Overall sentiment in the analysis period:
| Metric | Analysis period | Comparison baseline | Change |
|---|---|---|---|
| Positive % | | | ↑/→/↓ |
| Neutral % | | | ↑/→/↓ |
| Negative % | | | ↑/→/↓ |
| Total volume | | | ↑/→/↓ |
| Net sentiment score (positive minus negative) | | | ↑/→/↓ |
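If you build these figures from raw mention counts rather than reading them off a tool's dashboard, the arithmetic is straightforward. A minimal sketch in Python (the counts are illustrative placeholders, not real data):

```python
def sentiment_breakdown(positive: int, neutral: int, negative: int) -> dict:
    """Percentage shares plus net sentiment score (positive % minus negative %)."""
    total = positive + neutral + negative
    if total == 0:
        raise ValueError("no mentions in the period")
    pos_pct = 100 * positive / total
    neg_pct = 100 * negative / total
    return {
        "positive_pct": round(pos_pct, 1),
        "neutral_pct": round(100 * neutral / total, 1),
        "negative_pct": round(neg_pct, 1),
        "net_sentiment": round(pos_pct - neg_pct, 1),
        "volume": total,
    }

# Illustrative counts only
period = sentiment_breakdown(positive=480, neutral=310, negative=210)
baseline = sentiment_breakdown(positive=610, neutral=270, negative=120)
print(period["net_sentiment"], baseline["net_sentiment"])             # → 27.0 49.0
print(round(period["net_sentiment"] - baseline["net_sentiment"], 1))  # → -22.0
```

The change column in the table is simply the period figure minus the baseline figure for each row.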
Sentiment trend within the analysis period: [Describe the arc: did it shift at a specific point? Is it a gradual decline or sudden drop? Is it recovering or still moving?]
Platform/channel breakdown:
| Channel | Sentiment | Volume | Notable patterns |
|---|---|---|---|
| Media (earned) | Pos / Neu / Neg | | |
| LinkedIn | Pos / Neu / Neg | | |
| Twitter/X | Pos / Neu / Neg | | |
| Other social | Pos / Neu / Neg | | |
| Reviews/forums | Pos / Neu / Neg | | |
Is sentiment uniform or concentrated? [Is this a broad-based shift, or is it isolated to one channel, one audience segment, one geography, or one specific topic?]
2. Who is driving the sentiment
Sentiment by audience type (if data allows):
| Audience segment | Sentiment direction | Volume share | Assessment |
|---|---|---|---|
| Customers/consumers | | | |
| Employees/internal | | | |
| Industry/sector voices | | | |
| Media/journalists | | | |
| Investors/analysts | | | |
| Activists/advocacy groups | | | |
| General public | | | |
Key voices driving negative sentiment:
| Voice/account/outlet | Platform | Reach/influence | What they’re saying | Motivation (if known) |
|---|---|---|---|---|
| | | High / Medium / Low | | |
Key voices driving positive sentiment:
| Voice/account/outlet | Platform | Reach/influence | What they’re saying | Motivation (if known) |
|---|---|---|---|---|
| | | High / Medium / Low | | |
Is this sentiment organic or organised? [Is negative sentiment developing naturally, or does it show signs of coordination — same language, timing patterns, linked accounts? Is positive sentiment organic or driven by amplification from allies?]
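One rough way to test the "same language" signal mentioned above is to measure how similar posts are to one another: an unusual number of near-duplicate posts can indicate templated or coordinated messaging, though organic pile-ons also reuse phrases, so treat this only as a prompt for closer reading. A sketch using word-set (Jaccard) similarity, with hypothetical posts about a fictional "Acme":

```python
import string

def jaccard(a: str, b: str) -> float:
    """Word-set overlap of two posts, ignoring case and punctuation."""
    strip = str.maketrans("", "", string.punctuation)
    wa = set(a.lower().translate(strip).split())
    wb = set(b.lower().translate(strip).split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def near_duplicate_pairs(posts: list[str], threshold: float = 0.8) -> list[tuple[int, int]]:
    """Index pairs of posts whose word sets overlap above the threshold."""
    return [
        (i, j)
        for i in range(len(posts))
        for j in range(i + 1, len(posts))
        if jaccard(posts[i], posts[j]) >= threshold
    ]

posts = [  # illustrative posts, not real monitoring data
    "Acme has abandoned its original mission and its customers.",
    "acme has abandoned its original mission and its customers!",
    "Disappointed by the new pricing, but the product is still solid",
]
print(near_duplicate_pairs(posts))  # → [(0, 1)]
```

Pair it with the timing and account-linkage checks described above; similarity alone proves nothing.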
3. Why it is happening — the narrative layer
Dominant negative narratives: (What stories or frames are people using to express negative sentiment? Quote directly where possible)
- Narrative 1: [What is being said, in what language, by which voices]
  - Representative quotes: “[direct quote]” — [source/context]
  - Core grievance: [What is the underlying concern or criticism]
- Narrative 2: [Repeat format]
Dominant positive narratives: (What stories or frames are generating positive sentiment?)
- Narrative 1: [What is being said, in what language]
  - Representative quotes: “[direct quote]” — [source/context]
  - Core driver: [What is generating this positive view]
Gap between what we say and what people hear: [Compare your current key messages with the language appearing in sentiment content. Where is there a gap? Where are your messages landing and where are they failing to appear?]
Trigger mapping:
| Date | Event / activity | Sentiment effect | Magnitude |
|---|---|---|---|
| | | Positive / Negative / Neutral impact | Large / Medium / Small |
Root cause assessment: [Given all the above: what is the most likely underlying cause of the sentiment pattern? Push past surface description. “People are unhappy about our pricing” is a description; “People feel the pricing change was made without explanation and feel disrespected” is a root cause.]
4. What it means — implications and response
Signal strength assessment:
| Question | Assessment |
|---|---|
| Is this sentiment pattern based on a wide enough sample to be meaningful? | Yes / Borderline / No |
| Is this a trend or a temporary spike? | Trend (2+ weeks) / Spike / Unclear |
| Are the voices driving it influential in our key stakeholder groups? | Yes / Partially / No |
| Is it spreading (growing in volume and reach)? | Growing / Stable / Fading |
| Does it reflect a genuine underlying issue or a perception gap? | Genuine issue / Perception gap / Both / Unclear |
Overall signal strength: Strong / Moderate / Weak
Should we respond?
- Yes — the pattern is strong, spreading, and driven by legitimate concern. Inaction has escalation risk.
- Possibly — the signal is moderate; we should monitor closely and prepare response options.
- No — the signal is weak or fading; response would amplify rather than resolve.
If responding — what kind of response?
- Public messaging adjustment (change what we say and where)
- Direct stakeholder engagement (reach specific voices or groups)
- Narrative correction (address a specific false or misleading claim)
- Internal change first, then communicate (the sentiment reflects a genuine issue that needs fixing, not just explaining)
- Listen and absorb (no communication response; use this data to inform future messaging)
Recommended actions
| Action | Priority | Owner | Timeline |
|---|---|---|---|
| | High / Medium / Low | | |
If messaging needs to change — draft direction: [Describe the messaging shift needed. Don’t draft the copy here, but capture the intent: what should we say differently, to whom, and why?]
What to monitor next: [What specific signals would tell us whether the situation is improving, worsening, or stabilising? Set a review date.]
Review date: [When will you assess whether the situation has changed]
AI prompt
Base prompt
I need to go deeper on a sentiment pattern I've noticed in our monitoring. This isn't just about the score — I want to understand what's driving it and what we should do about it.
Organisation: [NAME AND BRIEF SECTOR CONTEXT]
Analysis period: [DATES]
Sentiment pattern observed: [DESCRIBE: what the sentiment data shows — direction, magnitude, timing]
Sample content driving the sentiment:
[PASTE: 10–20 representative examples — quotes, posts, article excerpts, comments — that represent the sentiment pattern]
Events and context during this period:
[DESCRIBE: what communications activity, announcements, or external events occurred that might be relevant]
Our current key messages on this topic:
[LIST: what we're currently saying]
Please analyse:
1. What are the dominant narratives or frames people are using — not just "negative" but specifically what story are they telling?
2. Which voices or segments are driving this, and how influential are they?
3. What is the most likely root cause — not the surface description, but the underlying driver?
4. Is the gap between what we say and what people hear a messaging problem, a substance problem, or both?
5. What should we do — and equally importantly, what should we not do?
Be direct. If the data suggests a genuine problem we need to fix, say so. If it suggests a perception gap we can address through communications, say that. If it suggests we should do nothing, make that case.
Prompt variations
Variation 1: Audience segment comparison
I have sentiment data broken down by different audience segments and I need to understand why they're diverging.
Segment A [e.g., Customers]: [SENTIMENT SUMMARY AND SAMPLE QUOTES]
Segment B [e.g., Employees]: [SENTIMENT SUMMARY AND SAMPLE QUOTES]
Segment C [e.g., Industry media]: [SENTIMENT SUMMARY AND SAMPLE QUOTES]
My current messaging is consistent across all three groups: [DESCRIBE CURRENT APPROACH]
Please help me understand:
1. Why might these segments be perceiving us so differently?
2. Are the concerns or endorsements specific to each segment, or are there common underlying themes?
3. Should we differentiate our messaging by segment, or is there a unified framing that addresses all three?
4. Which segment's sentiment should we prioritise addressing, and why?
Be specific about which segments are most likely to affect our strategic position.
Variation 2: Narrative forensics
I want to understand the exact language and frames people are using to talk about us — not just whether it's positive or negative, but the specific stories they're telling.
Here's a sample of content from our monitoring:
[PASTE: 15–25 posts, comments, articles with direct quotes where possible]
Please:
1. Identify the 3–5 distinct narratives appearing in this content (e.g., "company that doesn't listen", "reliable but expensive", "doing good things but not communicating them well")
2. For each narrative, estimate how widely it appears across the sample
3. Identify which narratives are most potentially damaging to our positioning and why
4. Identify which narratives we could strengthen or amplify because they're positive and accurate
5. Suggest specific language we could use in our own communications to engage with the narratives people are actually constructing, rather than the ones we wish they'd construct
Variation 3: Sentiment trigger analysis
I've noticed a specific point in time when sentiment shifted. I want to understand what triggered it.
Sentiment before [DATE]: [SUMMARY]
Sentiment after [DATE]: [SUMMARY]
What happened around that date: [LIST EVENTS, ANNOUNCEMENTS, EXTERNAL STORIES]
Content from before the shift:
[PASTE: sample quotes/posts]
Content from after the shift:
[PASTE: sample quotes/posts]
Please:
1. What most likely triggered the shift — is it one specific event, or a combination of factors?
2. What is the evidence that points to this trigger rather than others?
3. Is the shift likely to be temporary (tied to a specific event that will fade) or structural (something has changed in how people perceive us)?
4. What response, if any, would address the root cause rather than just the symptom?
Variation 4: Positive sentiment analysis
Our sentiment has improved significantly in the past [PERIOD]. I want to understand why so we can sustain and build on it.
Sentiment data:
[PASTE: sentiment scores, volume trends]
Sample of content generating positive sentiment:
[PASTE: representative examples]
Our communications activity in this period:
[DESCRIBE: what we've been doing differently, if anything]
Please help me understand:
1. What specifically is driving the positive shift — is it our messaging, our actions, external context, or something else?
2. Which audiences are most positively engaged and what is resonating with them specifically?
3. Is this positive sentiment likely to be sustainable, or is it tied to a specific context that will change?
4. What are the 2–3 things we should do more of to sustain and extend this positive pattern?
5. Are there any risks in the positive sentiment data — narratives being attributed to us that aren't accurate or that create expectations we can't meet?
Human review checklist
- Sample is representative: The content reviewed includes a range of voices and sources, not just the most obvious or extreme examples
- Trends distinguished from spikes: The analysis correctly identifies sustained patterns rather than treating a one-day surge as a trend
- Quantitative and qualitative aligned: The sentiment score and the narrative analysis tell a consistent story; if they conflict, the conflict is acknowledged
- Root cause pushed beyond surface: The “why” section has gone deeper than describing what people said — it has attempted to explain the underlying driver
- Source credibility assessed: High-reach negative voices are noted but so is their credibility — a conspiracy-adjacent account with 50k followers isn’t the same as a respected sector journalist
- Gap analysis completed: We’ve compared our messages to the language actually appearing in sentiment content
- Signal strength honestly assessed: The template prompts an honest assessment of whether this sentiment matters — and we’ve answered truthfully rather than defaulting to concern
- Response recommendation justified: If we’re recommending action, the case for it is stronger than “there is negative sentiment”; if we’re recommending no action, we’ve explained why
- Specific monitoring specified: We’ve identified what to watch and set a review date, not just said “continue monitoring”
- Leadership-ready: The analysis is clear enough to present to a CEO or board without the full data appendix
Example output
Sentiment Deep Dive
Analysis period: 1–28 February 2026
Topic: Organisation-wide sentiment following pricing announcement
Completed by: Communications team
Triggered by: Weekly brief flagged 12pp drop in positive sentiment over two consecutive weeks
The sentiment picture
| Metric | February | January baseline | Change |
|---|---|---|---|
| Positive % | 48% | 61% | ↓ -13pp |
| Neutral % | 31% | 27% | ↑ +4pp |
| Negative % | 21% | 12% | ↑ +9pp |
| Net sentiment | +27 | +49 | ↓ -22 |
Sentiment dropped sharply on 7 February (announcement date) and has not recovered. The decline is not a spike — it has been sustained across four weeks, suggesting a structural shift rather than a passing reaction.
Who is driving it
The negative sentiment is primarily driven by existing customers (particularly the SME segment) and is most concentrated on Twitter/X and LinkedIn. Enterprise customers and analysts are largely neutral — their concerns appear to be “wait and see” rather than actively negative. The SME segment matters for future acquisition, since prospective customers read their reviews.
Key negative voices: Three customer accounts with followings of 5k–22k who have posted repeated critical threads. Two industry bloggers who have framed the story as “X abandons its original mission.” These five voices account for an estimated 40% of negative mentions.
Why it is happening
Dominant narrative: “We used to be priced for organisations like ours. Now we’re priced for enterprises. They’ve changed who they care about.”
This is not primarily about price itself — it’s about identity and belonging. SME customers aren’t just saying the price is too high; they’re saying the company has moved away from them. The core grievance is feeling deprioritised, not overcharged.
Gap between our message and what people heard: We communicated the new pricing as an “evolution that reflects the value we deliver.” Customers heard “we think we’re worth more, and if you can’t afford it, we’re not for you.” The substance-to-perception gap is significant.
Root cause: The pricing announcement didn’t explain the rationale in terms that connected to customer value. It was presented as a product/commercial decision, not a customer-focused communication. The “what’s in it for the customer” framing was absent.
Recommended actions
- Publish a frank customer communication explaining the pricing rationale with emphasis on what has been enhanced for customers — not just a defensive “here’s why prices went up” but “here’s what we’ve invested to deserve this price”. Priority: High. Owner: CMO and Customer Communications lead. Deadline: 5 March.
- Brief the five high-reach negative voices individually. Acknowledge their frustration directly; offer a product team conversation. Priority: High. Owner: Community manager. Deadline: 3 March.
- Review whether the SME pricing tier needs adjustment, and if so, make communications part of the fix — not a way to manage perception of an unchanged problem. Priority: High. Owner: CMO and CFO. Deadline: 10 March.
Related templates
- Weekly Monitoring Brief — Where sentiment patterns that trigger this deep dive are first surfaced
- Issue Log Tracker — If sentiment analysis reveals an escalating issue, move it into the issue tracker
- Insights to Actions Template — For converting the conclusions of this analysis into specific communications decisions
- Key Messages Grid — Use when this analysis reveals a gap between what you say and what lands
- Real-Time Crisis Response Playbook — If sentiment deep dive identifies escalation risk
Tips for success
Read the content, not just the score
Sentiment classification tools are imperfect. The most important work in a sentiment deep dive is reading what people actually wrote — the language, the specific complaints or endorsements, the emotional register. AI-classified “negative” content can range from mild disappointment to an active campaign. You can’t tell from the score.
Look for the underlying emotion, not just the stated grievance
People rarely say exactly what they mean in public posts. “Your pricing is outrageous” might mean “I feel disrespected.” “This company has sold out” might mean “I feel abandoned.” The underlying emotional state is often where the real communications need lives. Address that, not just the surface complaint.
Distinguish between volume and influence
High-volume negative sentiment from low-influence accounts has a different risk profile to low-volume negative sentiment from high-influence accounts. A single sharp-tongued industry analyst or a respected journalist consistently taking a critical view is more strategically significant than 200 frustrated comments on a Reddit thread.
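One way to make this distinction concrete is to weight each mention by a rough influence score instead of counting every mention equally. The tiers and weights below are illustrative assumptions, not a standard:

```python
def weighted_net_sentiment(mentions: list[dict]) -> float:
    """Net sentiment where each mention counts in proportion to its influence weight."""
    weights = {"high": 5.0, "medium": 2.0, "low": 1.0}  # illustrative weights; tune to your context
    score = {"positive": 1, "neutral": 0, "negative": -1}
    total = sum(weights[m["influence"]] for m in mentions)
    if total == 0:
        return 0.0
    weighted = sum(weights[m["influence"]] * score[m["sentiment"]] for m in mentions)
    return round(100 * weighted / total, 1)

mentions = [  # illustrative data
    {"sentiment": "negative", "influence": "high"},  # e.g. a respected sector journalist
    {"sentiment": "positive", "influence": "low"},
    {"sentiment": "positive", "influence": "low"},
    {"sentiment": "positive", "influence": "low"},
]
# Raw counts are 3:1 positive, yet the influence-weighted view is clearly negative
print(weighted_net_sentiment(mentions))  # → -25.0
```

Comparing the raw and weighted figures side by side quickly shows whether a shift is broad-based or driven by a handful of influential voices.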
Be honest about what you don’t know
Sentiment analysis is imperfect. Some patterns have clear explanations; others don’t. It’s more useful to say “the data is ambiguous here” than to construct a plausible-sounding explanation that might be wrong. If you’re uncertain about root cause, say so, and identify what additional information would help.
The goal is insight, not justification
The most common misuse of sentiment analysis is using it to justify a predetermined conclusion — gathering evidence that supports the action already decided on, and ignoring data that contradicts it. The value of this template is in discovering what’s actually true. That sometimes means the data tells you something uncomfortable.
Common pitfalls
Treating sentiment as the objective
Sentiment is a signal about how communications and substance are landing with audiences. It’s not an objective in itself. Chasing sentiment score improvements through positive amplification campaigns while ignoring underlying issues is managing the metric, not the reality.
Confirmation bias in sample selection
When reviewing content to illustrate sentiment, it’s natural to reach for examples that confirm what you already suspect. Actively look for content that contradicts your working hypothesis. If your sample doesn’t include counter-examples, your analysis is probably incomplete.
Over-attributing to communications
Sometimes sentiment shifts for reasons that have little to do with communications: a competitor makes a move that benefits them, a sector narrative shifts, economic conditions change. Not every sentiment movement is something communications caused or can fix. Diagnose accurately before prescribing.
The false precision trap
A 3-point shift in net sentiment score is not necessarily statistically significant, particularly at low volume. Don’t over-interpret small movements or present marginal data as decisive evidence. The qualitative narrative layer is often more informative than precision sentiment percentages at typical monitoring volumes.
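A quick sanity screen before treating a small shift as decisive is a two-proportion z-test on the positive-mention share. It is a rough check rather than a substitute for judgment (mentions are rarely independent samples), and the counts below are illustrative:

```python
import math

def two_proportion_z(pos_a: int, n_a: int, pos_b: int, n_b: int) -> float:
    """z statistic for the difference between two positive-share proportions."""
    p_pool = (pos_a + pos_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return ((pos_a / n_a) - (pos_b / n_b)) / se

# A ~3pp drop in positive share on ~150 mentions a week: |z| is well inside 1.96,
# so the movement is indistinguishable from noise at the 5% level
print(round(two_proportion_z(63, 150, 68, 150), 2))        # → -0.58

# The same ~3pp drop on 5,000 mentions a week clears the bar comfortably
print(round(two_proportion_z(2100, 5000, 2265, 5000), 2))  # → -3.33
```

The point is the volume sensitivity: identical percentage movements mean very different things at different mention volumes.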
Forgetting the positive
Most sentiment deep dives are triggered by negative signals. But positive sentiment patterns are equally worth understanding — particularly when sentiment improves unexpectedly. What’s working is as important as what isn’t. Build the discipline of investigating positive patterns with the same rigour.