AI Readiness Assessment
A structured diagnostic for assessing how ready a communications team or organisation is to adopt AI-powered workflows — covering technology, skills, processes, governance, and culture.
What it is
The AI Readiness Assessment is the diagnostic that should precede any serious AI adoption in a communications function. It evaluates readiness across five dimensions: technology infrastructure, team skills and knowledge, workflow structure, governance and risk management, and organisational culture.
Most AI adoption fails not because the technology doesn’t work, but because the organisation adopting it wasn’t ready — workflows weren’t defined clearly enough to improve, teams weren’t skilled or motivated enough to use tools well, governance wasn’t clear enough to manage risk, or leadership commitment wasn’t strong enough to sustain change through the difficult early phases.
This assessment produces a readiness profile: a clear picture of where you’re strong, where you’re weak, and where the smartest starting points for AI adoption are. It is designed to be honest rather than reassuring. An inflated readiness assessment leads to overambitious adoption plans that fail in implementation. An accurate one leads to sequenced, manageable progress.
Used at the start of an AI programme and repeated 6–12 months later, it also functions as a benchmark — showing what has changed as capability has been built.
When to use it
Use this template when:
- You’re beginning to explore AI adoption in your communications function and want a clear baseline
- Leadership is asking “are we ready for AI?” and you want a structured answer rather than an opinion
- You’re conducting a Faur consulting engagement that includes an AI capability diagnostic
- You’re preparing a business case for AI investment and need to articulate the current state
- You’ve been through an initial AI adoption phase and want to assess how much has changed
Don’t use this template when:
- You’re evaluating a specific AI tool (use the AI Tool Evaluation Framework instead)
- You’re mapping specific workflows for AI integration (use the Comms Workflow Audit)
- You’re assessing individual team member skills (use the Capability Gap Analysis)
- You have no intention of acting on the assessment — don’t run a diagnostic purely for appearance
Inputs needed
- Honest input from people who know the technology, processes, and culture — not just leadership’s view
- Time to explore each dimension thoughtfully rather than tick-boxing
- If conducting for a client, pre-assessment interviews with 3–5 team members across seniority levels will significantly improve accuracy
- Access to or description of: current tool stack, team structure, workflows, governance policies
The template
AI Readiness Assessment
Organisation assessed: [Name]
Communications function: [Team/department]
Assessment completed by: [Name/organisation]
Date: [Date]
Assessment approach: [Self-assessment / Facilitated workshop / Interviews + review]
How to use this assessment
Score each item on a 1–4 scale:
- 1 — Not in place: This doesn’t exist or applies to almost none of the team
- 2 — Early stage: Something exists but it’s patchy, informal, or limited in scope
- 3 — Developing: Reasonably established but room for significant improvement
- 4 — Strong: Well-established, consistent, and functioning effectively
Scores are a guide to conversation, not a definitive verdict. The notes and observations that accompany each score are more valuable than the number itself.
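If you record the scores in a spreadsheet or script rather than on paper, the per-dimension arithmetic is easy to automate. Below is a minimal sketch in Python; the function name and data layout are illustrative assumptions, not part of the template itself.

```python
# Illustrative helper for scoring one dimension of the assessment.
# Names and data layout are assumptions; adapt to however you record scores.

ITEMS_PER_DIMENSION = 7  # each dimension table has seven items
SCALE_MIN, SCALE_MAX = 1, 4

def score_dimension(item_scores: list[int]) -> tuple[int, float]:
    """Return (total out of 28, average out of 4) for one dimension."""
    if len(item_scores) != ITEMS_PER_DIMENSION:
        raise ValueError(f"expected {ITEMS_PER_DIMENSION} scores, got {len(item_scores)}")
    if any(not SCALE_MIN <= s <= SCALE_MAX for s in item_scores):
        raise ValueError("each item is scored on the 1-4 scale")
    total = sum(item_scores)
    return total, round(total / ITEMS_PER_DIMENSION, 1)

# e.g. seven technology items scored 4, 3, 3, 2, 4, 3, 3
print(score_dimension([4, 3, 3, 2, 4, 3, 3]))  # -> (22, 3.1)
```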
Dimension 1: Technology infrastructure
How well-equipped is the function from a technology perspective to add AI tools and integrate them into existing systems?
| Item | Score (1–4) | Notes |
|---|---|---|
| Clear picture of current technology stack (what tools are in use, by whom, for what) | | |
| Existing tools are consistently used across the team (not fragmented adoption) | | |
| Content management and storage is organised and accessible (not siloed on individuals’ hard drives) | | |
| Data and analytics infrastructure in place (you can measure what you do) | | |
| IT / security policies around new software adoption are clear and manageable | | |
| Budget available for tool investment | | |
| Procurement and vendor management processes can support SaaS tool adoption | | |

Technology dimension total: [Sum] / 28
Technology dimension average: [Avg] / 4
Key technology strengths:
Key technology gaps:
Technology readiness assessment:
- Strong foundation — AI tools can be integrated with minimal infrastructure work
- Workable — some gaps but manageable alongside AI adoption
- Significant gaps — technology infrastructure needs attention before or alongside AI adoption
- Not ready — fundamental technology foundations need building first
Dimension 2: Skills and knowledge
Does the team have the skills to use AI tools effectively — including prompt literacy, critical evaluation of AI outputs, and understanding of AI capabilities and limits?
| Item | Score (1–4) | Notes |
|---|---|---|
| Team has practical experience using at least one AI tool for work tasks | | |
| Team can write effective prompts that produce useful outputs (not just basic instructions) | | |
| Team critically evaluates AI outputs rather than accepting them at face value | | |
| Team understands what AI is good at vs. where it produces unreliable outputs | | |
| At least one person in the team has taken AI-specific training or professional development | | |
| Team is able to articulate when AI should and shouldn’t be used for a given task | | |
| Team can recognise AI-generated content risks (hallucination, bias, voice inconsistency) | | |

Skills dimension total: [Sum] / 28
Skills dimension average: [Avg] / 4
Key skills strengths:
Key skills gaps:
Skills readiness assessment:
- Strong — team has solid foundations; advanced capability building can begin
- Developing — basics are in place; structured training will accelerate readiness
- Early stage — significant skills development needed before broad AI adoption
- Low — foundational AI literacy is absent; start here before anything else
Dimension 3: Workflow and process clarity
Are workflows clear and documented enough that AI can meaningfully improve them? AI improves well-defined processes; applied to unclear ones, it amplifies the confusion.
| Item | Score (1–4) | Notes |
|---|---|---|
| Core communications workflows (brief to delivery, approval process, content creation) are defined, not ad hoc | | |
| The team knows where its biggest time bottlenecks are | | |
| Repetitive tasks that consume significant time have been identified | | |
| Content briefs and templates are used consistently (not starting from scratch every time) | | |
| Roles and responsibilities in the content production workflow are clear | | |
| There is a consistent review and approval process for content | | |
| The team can describe their content production process step-by-step | | |

Workflow dimension total: [Sum] / 28
Workflow dimension average: [Avg] / 4
Key workflow strengths:
Key workflow gaps:
Workflow readiness assessment:
- Strong — workflows are clear enough that AI can be mapped onto specific tasks
- Developing — core workflows exist but need documentation before AI integration
- Early stage — workflows are too ad hoc; process definition needed first
- Low — the function operates largely on individual judgement; process work must precede AI
Dimension 4: Governance and risk management
Does the function have appropriate guardrails for AI use — covering quality, accuracy, compliance, brand consistency, and ethical considerations?
| Item | Score (1–4) | Notes |
|---|---|---|
| Clear brand voice guidelines exist that AI outputs can be checked against | | |
| Content approval processes include review of factual accuracy | | |
| The function has considered (even informally) what AI should and shouldn’t be used for | | |
| There is awareness of the legal and compliance implications of AI-generated content (copyright, data, disclaimers) | | |
| AI use policies exist or are in development (either for this team or organisation-wide) | | |
| There is a process for checking AI outputs before publication | | |
| The function has thought about how to disclose AI use where appropriate | | |

Governance dimension total: [Sum] / 28
Governance dimension average: [Avg] / 4
Key governance strengths:
Key governance gaps:
Governance readiness assessment:
- Strong — guardrails are in place; AI can be adopted with confidence in risk management
- Developing — some governance exists; gaps can be addressed alongside adoption
- Early stage — governance framework needs building as a priority
- Low — significant governance risk; adoption without guardrails could damage reputation or compliance
Dimension 5: Culture and leadership
Is the environment conducive to AI adoption — with leadership support, team willingness to change, and a culture that can sustain new ways of working?
| Item | Score (1–4) | Notes |
|---|---|---|
| Leadership explicitly supports and champions AI adoption in communications | | |
| The team is motivated to try new approaches rather than defending current ways of working | | |
| Previous change or technology adoption in this function has generally succeeded | | |
| There is psychological safety to experiment, make mistakes, and learn | | |
| AI adoption has visible senior sponsorship with allocated time and resource | | |
| The team sees AI as an opportunity rather than a threat to jobs or expertise | | |
| There is tolerance for the early inefficiency that comes with learning new tools | | |

Culture dimension total: [Sum] / 28
Culture dimension average: [Avg] / 4
Key culture strengths:
Key culture gaps:
Culture readiness assessment:
- Strong — culture will accelerate adoption; this is a genuine advantage
- Developing — culture is broadly supportive but will need active management
- Mixed — some pockets of enthusiasm, some resistance; change management is critical
- Resistant — cultural barriers are the primary obstacle; address before technical adoption
Overall readiness profile
| Dimension | Score /28 | Average /4 | Readiness level |
|---|---|---|---|
| Technology | | | Strong / Developing / Early / Not ready |
| Skills | | | Strong / Developing / Early / Low |
| Workflow | | | Strong / Developing / Early / Low |
| Governance | | | Strong / Developing / Early / Low |
| Culture | | | Strong / Developing / Mixed / Resistant |
| Total | /140 | /4 | |
Overall readiness level:
- Ready to accelerate (average 3.5+): Strong foundations across most dimensions. Move into structured adoption with confidence.
- Ready to begin (average 2.5–3.4): Most dimensions are workable. Start with high-readiness areas; address gaps in parallel.
- Needs preparation (average 1.5–2.4): Significant gaps in multiple dimensions. Build foundations before broad adoption.
- Foundational work required (average below 1.5): AI adoption is premature. Focus on the identified foundations first.
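For anyone tracking assessments programmatically, the overall banding follows directly from the cutoffs above. A minimal sketch under the same assumptions as the earlier snippet; the thresholds come from this template, everything else is illustrative.

```python
# Illustrative overall-readiness calculation. Thresholds are taken from the
# template above; structure and names are assumptions.

def overall_readiness(dimension_totals: dict[str, int]) -> tuple[int, float, str]:
    """Return (total /140, average /4, readiness band) from the five /28 totals."""
    if len(dimension_totals) != 5:
        raise ValueError("expected totals for all five dimensions")
    total = sum(dimension_totals.values())  # out of 140 (5 dimensions x 28)
    average = total / 35                    # 35 items, each scored 1-4
    if average >= 3.5:
        band = "Ready to accelerate"
    elif average >= 2.5:
        band = "Ready to begin"
    elif average >= 1.5:
        band = "Needs preparation"
    else:
        band = "Foundational work required"
    return total, round(average, 1), band

# Worked example: the Meridian scores from the example output later in this page
print(overall_readiness({
    "Technology": 22, "Skills": 14, "Workflow": 18,
    "Governance": 12, "Culture": 24,
}))  # -> (90, 2.6, 'Ready to begin')
```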
Highest-readiness starting points
Based on the assessment, identify the 2–3 areas where AI adoption could begin with least friction and highest likelihood of early success.
| Starting point | Rationale | Suggested first AI use case |
|---|---|---|
Priority gaps to address
Based on the assessment, identify the 2–3 most important gaps to close before or alongside AI adoption.
| Gap | Dimension | Recommended action | Owner | Timeframe |
|---|---|---|---|---|
Recommended adoption approach
Based on the overall profile, which adoption approach is most appropriate?
- Immersive — Broad adoption across multiple tools and workflows simultaneously. Appropriate only for high-readiness (3.5+) organisations with strong culture and skills.
- Sequenced — Phase adoption in by workflow or team, starting with highest-readiness areas. Most appropriate for 2.5–3.4 range.
- Pilot-first — Run a small-scale pilot with one team, one workflow, or one tool before broader rollout. Appropriate for 1.5–2.4 range.
- Build-then-adopt — Prioritise foundation-building (skills, process, governance) before significant tool adoption. Appropriate below 1.5.
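Because the approach bands mirror the readiness bands, the recommendation can be expressed as a simple lookup. A short sketch under the same illustrative assumptions as the snippets above:

```python
# Illustrative mapping from the overall average (out of 4) to the four
# adoption approaches, using the cutoffs stated above.

def adoption_approach(average: float) -> str:
    if average >= 3.5:
        return "Immersive"
    if average >= 2.5:
        return "Sequenced"
    if average >= 1.5:
        return "Pilot-first"
    return "Build-then-adopt"

print(adoption_approach(90 / 35))  # -> 'Sequenced' for the Meridian example
```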
Recommended first 90 days: [Specific, sequenced actions for the first three months based on this assessment]
AI prompt
Base prompt
I'm conducting an AI readiness assessment for a communications function and want to analyse what I've found and turn it into a clear set of recommendations.
Organisation: [NAME AND SECTOR]
Team size: [NUMBER]
Current AI usage: [DESCRIBE: what tools if any are already in use]
Assessment findings:
Technology: [SCORE AND KEY NOTES]
Skills: [SCORE AND KEY NOTES]
Workflow: [SCORE AND KEY NOTES]
Governance: [SCORE AND KEY NOTES]
Culture: [SCORE AND KEY NOTES]
Key context:
[DESCRIBE: any important context about the organisation — past change history, leadership priorities, constraints]
Please:
1. Interpret the readiness profile — what does this combination of scores typically indicate about AI adoption potential?
2. Identify the 2–3 highest-priority starting points for AI adoption given this profile
3. Identify the 2–3 gaps that present the most risk if unaddressed
4. Recommend a 90-day adoption plan that is realistic given what the assessment reveals
5. Draft a 200-word summary of findings I can present to leadership
Be direct. If the organisation isn't ready for ambitious AI adoption, say so. Gradual, realistic progress is better than overpromising and failing.
Prompt variations
Variation 1: Client diagnostic summary
I've just completed an AI readiness assessment for a client in [SECTOR]. Here are the findings:
[PASTE ASSESSMENT SCORES AND KEY NOTES ACROSS ALL FIVE DIMENSIONS]
The client's leadership is expecting ambitious AI adoption; the assessment reveals a more cautious picture, particularly around [key gaps].
Please help me:
1. Draft an executive summary that is honest about the gaps without being unnecessarily deflating
2. Frame the recommended approach as "building for sustainable success" rather than "you're not ready"
3. Identify the 2–3 things that, if addressed, would most significantly change the readiness picture
4. Draft a narrative for the recommended 6-month plan that is credible and genuinely achievable
I need to manage expectations while maintaining confidence in the programme.
Variation 2: Strengths-based starting point analysis
Based on this AI readiness profile:
[PASTE SCORES]
Which specific AI use cases are most likely to succeed given the organisation's particular strengths? I want to start where we're most likely to generate quick wins that build momentum and confidence — not where the theory says we should start.
For each recommended use case:
1. Why does this organisation's readiness profile make it a good fit?
2. What specific tool or approach would you suggest?
3. What would success look like in 30 days?
4. What's the most likely obstacle and how to manage it?
Variation 3: Pre-assessment interview questions
I'm about to conduct AI readiness assessment interviews with 5 people at a communications organisation: the Communications Director, a senior manager, two content producers, and an in-house lawyer who handles comms compliance.
The assessment covers technology, skills, workflow, governance, and culture.
Please draft:
1. 5 questions suitable for all interviewees (general readiness picture)
2. 3 additional questions specifically for the Communications Director (leadership and strategy)
3. 3 additional questions specifically for the content producers (practical day-to-day AI usage)
4. 3 additional questions specifically for the lawyer (governance and risk)
Questions should be open-ended and exploratory, not yes/no. I want to hear what people actually think, not confirm assumptions.
Human review checklist
- Scores are honest: The assessment reflects genuine current state, not aspirations or how people want to be seen
- Multiple perspectives included: Scores aren’t based solely on leadership’s view — practitioner input is reflected
- Gaps aren’t minimised: Areas scoring 1–2 are flagged prominently, not buried in the notes
- Starting points are realistic: The recommended starting points genuinely match the readiness profile — not just the most exciting use cases
- 90-day plan is achievable: The first-phase recommendations can actually be done with the resource available
- Culture section is candid: Culture assessments are the most frequently inflated dimension; check that the scores reflect genuine team attitudes, not hopes
- Governance gaps addressed: Any governance gaps are treated as a prerequisite or parallel workstream, not an afterthought
- No dimension ignored: All five dimensions are assessed, even the uncomfortable ones
- Comparison context: If this is a repeat assessment, comparison to previous scores is included
- Clear next steps: The assessment ends with specific actions and owners, not just observations
Example output
AI Readiness Assessment
Organisation: Meridian Communications Ltd | Assessed by: Faur Consulting | Date: April 2026
Readiness profile
| Dimension | Score /28 | Average /4 |
|---|---|---|
| Technology | 22 | 3.1 |
| Skills | 14 | 2.0 |
| Workflow | 18 | 2.6 |
| Governance | 12 | 1.7 |
| Culture | 24 | 3.4 |
| Total | 90/140 | 2.6 |
Overall: Ready to Begin
Summary
Meridian has a strong cultural and technology foundation for AI adoption — leadership is engaged, the team is motivated, and the tech stack is reasonably organised. The primary obstacles are skills and governance: most team members have used AI tools casually but lack the structured prompt literacy and critical evaluation skills to use them reliably in professional contexts, and there are no governance guardrails for AI use in client-facing work.
Recommended approach: Sequenced adoption. Begin with internal productivity tasks (research synthesis, first-draft briefings) where governance risk is lower, while running parallel workstreams on prompt training and AI use policy development. Delay client-deliverable AI use until governance is in place.
First 90 days:
- Commission structured prompt literacy training for all content staff (weeks 1–4)
- Draft AI use policy with legal and leadership (weeks 2–6)
- Pilot AI-assisted research and monitoring workflows in the intelligence team (weeks 4–12)
- Review pilot; extend to content team (weeks 10–12)
Related templates
- Comms Workflow Audit — Drill into specific workflows identified as AI integration opportunities from this assessment
- Capability Gap Analysis — Assess individual and team skills gaps in detail (expands the skills dimension of this assessment)
- AI Tool Evaluation Framework — Evaluate specific tools once you know which workflows to prioritise
- Objectives & Measurement Framework — Set measurable objectives for the AI adoption programme
- Quarterly Comms Review — Use to track capability improvements over time
Tips for success
Don’t let the assessment become aspirational
The most common failure mode is scoring how you want to be, not how you are. Every inflated score creates a false foundation for the adoption plan. If in doubt, score lower and let progress prove otherwise.
Use external input to validate self-assessments
People and organisations are poor judges of their own readiness, particularly on culture (we tend to think our culture is more change-ready than it is) and skills (we tend to overestimate our own AI literacy). If conducting a self-assessment, try to validate at least two dimensions with external input.
The culture dimension is the hardest to fix
Technology, skills, and process can all be addressed with investment and time. Culture is slower and harder to shift. If the culture dimension is low — particularly if there is active resistance — be honest about the timeline implications. Forcing adoption into a resistant culture accelerates both failure and trust erosion.
Treat this as the start of a conversation, not a final verdict
The assessment scores aren’t a judgement on the organisation’s competence. They’re a map of the current state. Used well, they create a shared, honest starting point for an improvement conversation. Used poorly, they become a point-scoring exercise that generates defensiveness.
Repeat it
The full value of this assessment comes from comparison over time. Run it at the start of an AI programme, and run it again 6 months later. The change in scores is more informative than any single assessment.
Common pitfalls
Treating all dimensions as equal
Different organisations are limited by different dimensions. A technology-poor organisation is more constrained by its tech infrastructure than its culture. A culturally resistant organisation is more constrained by its people than its tools. Understand which dimension is the actual limiting factor and focus there.
AI readiness as a one-off project
Readiness isn’t binary — you’re never “fully ready.” It’s a continuing journey of capability building. Framing AI readiness as a project with a completion date misses the point. The assessment should be a recurring diagnostic, not a box to tick.
Underestimating the governance dimension
Governance is the most commonly underweighted dimension in self-assessments, perhaps because it feels bureaucratic. But AI-generated content published without appropriate oversight creates real reputational, legal, and accuracy risk. Communications teams that adopt AI without governance frameworks are making a gamble that the tool will work perfectly every time. It won’t.
Forgetting that readiness varies by task
A team may be ready to use AI for internal research synthesis but not ready to use it for external-facing client communications. Readiness isn’t uniform across all use cases. The assessment should help identify where readiness is highest, and adoption should start there.
Need this implemented in your organisation?
Faur helps communications teams build frameworks, train teams, and embed consistent practices across channels.
Get in touch ↗