Build a board-ready AI plan in 15 minutes
Assess your maturity, choose from 11 best practices across 5 categories, add your context, and export a strategic roadmap.
Already started? Resume your plan
AI Adoption Framework
Define your AI vision, build executive alignment, and create a roadmap.
AI Vision, Strategy & Business Alignment with Executive Sponsorship
I am a big fan of Lee Bolman's quote, "A vision without a strategy remains an illusion". It is absolutely crucial for an organisation to define what AI means for your organisation – not as technology, but as a business capability that accelerates your top priorities and aligns to your objectives and then start to understand your guardrails, tolerance limits and operational models. It is key to have an executive sponsor that ensures the strategy is driven into the business.
📝 Why This Matters
Organisations that treat AI as a business strategy rather than a technology initiative and have an executive sponsor ensuring it gets driven into the business will achieve much wider adoption and thus a stronger RoI. Without alignment, departments pursue conflicting goals, budgets are wasted on low-impact experiments, and leadership loses confidence. In addition, guardrails, operational models, and general support do not always get implemented proactively, usually more reactively. The vision anchors every subsequent decision – what to buy, what to build, where to invest, and critically, what to say no to. A one-page AI vision document that any board member can understand in five minutes is more valuable than a hundred-page strategy document that nobody reads.
📝 Detailed Guidance
Start by mapping your top 3β5 business priorities: revenue growth, cost reduction, customer experience, operational efficiency, risk reduction. For each, identify where AI could accelerate outcomes – not where AI is interesting, but where it solves a real constraint. Pressure-test each opportunity against four questions: Is the data available and accessible? Is the process repeatable and well-understood? Is the impact measurable with a clear KPI? Would a senior leader sponsor and champion this initiative? The output is a prioritised pipeline of AI opportunities scored on impact versus urgency, with clear ownership and timelines.
✅ Key Actions
- ☐ Define a one-page AI vision statement linked to your top 3β5 business priorities
- ☐ Map AI opportunities to measurable KPIs (revenue, cost, CX, risk reduction)
- ☐ Create a prioritised AI initiative pipeline with owners and timelines
- ☐ Add a section on Data Governance and foresight into improvements possibly needed
- ☐ Add a section on Operational models and what will likely be required
- ☐ Present the AI business case to the board or executive committee
- ☐ Communicate the vision to all staff β explain the 'why' before the 'what'
- ☐ Review and refresh the vision quarterly as capabilities and priorities evolved
💡 Common Pitfalls
Starting with 'what AI can do' instead of 'what problems we need to solve.' Building a 50-page AI strategy document nobody reads. Letting IT own the vision without business unit input. Failing to define what success looks like before starting. Trying to boil the ocean – focus on 3β5 high-impact use cases, not 30. Forgetting guardrails and operational models – adopt AI responsibly!
AI Governance Board
Establish a cross-functional governance body with real authority over AI adoption decisions, risk acceptance, and budget allocation. Without governance, AI becomes ungovernable shadow IT at scale and in the world of GDPR, where many operate many AI products that process data away from the UK and EU, which can prove troublesome for many, as being able to assess for data processing is key, along with bias assessments, security and ethical risks all being considered, mapped, managed and measured.
📝 Why This Matters
AI governance isn't bureaucracy, it's the mechanism that enables fast, confident decisions. Without it, every AI request becomes an ad-hoc debate between IT, legal, and the business – and more often the people having to use it! With it, you have clear criteria, delegated authority, and a consistent risk approach. The governance board owns the AI policy, approves high-risk deployments, handles escalations, reviews incidents, and maintains the organisation's overall AI risk posture. It's also the body that says 'yes' quickly to low-risk initiatives, governance should accelerate adoption, not slow it down. It's all about scaling "Yes" safely!
📝 Detailed Guidance
Form a board with representatives from IT/Engineering, Legal/Compliance, HR, Data/Privacy (DPO), Finance, and key business units. Now many organisations will already have Technical Design Authorities, Business Design Authorities, Security Authorities and much more – it is common to try and weave the AI Governance board into once of these but for now my advice would be to keep it independent as confidence in AI builds and as skills within the business increases and it becomes more of a business as usual engagement. The CTO, CDO, or a dedicated AI lead should chair. Meet monthly minimum, although we are seeing many boards meeting weekly(!), with emergency sessions for high-risk decisions. It is important that certain decisions feedback to either an approved list of usage in your AI Policy or approved patterns within a knowledgebase.
✅ Key Actions
- ☐ Form cross-functional AI governance board (IT, Legal, HR, Data, Finance, Business Units)
- ☐ Appoint a chair (CTO, CDO, or dedicated AI lead)
- ☐ Define board charter with decision authority, meeting cadence (weekly/monthly), and escalation triggers
- ☐ Ensure a process of pushing back to AI Policy or Knowledgebase where appropriate
- ☐ Draft and publish an AI Acceptable Use Policy for all staff
- ☐ Establish AI incident escalation and response procedures
- ☐ Set up quarterly AI risk posture reviews
- ☐ Publish governance decisions and rationale to build organisational trust
💡 Common Pitfalls
Making the board too large (8+ members slows every decision). Not giving it actual authority (advisory-only boards get ignored). Applying the same heavyweight process to a simple Copilot use case as to a customer-facing AI agent. Not including business users, governance becomes disconnected from reality. Not communicating it widely with the business. Not having appropriate tooling to help guide individuals/groups bringing items (even an Excel template will do!) Meeting too infrequently, monthly is the absolute minimum.
AI Tool Evaluation Framework – part of the AI Governance Board
A standardised process for evaluating any AI tool before it enters the organisation, covering; application, security, compliance, build, data, bias, Red Teaming, cost, vendor lock-in, and business value. Every AI request goes through this framework. This should feed the AI Governance Board.
📝 Why This Matters
Without a framework, AI tool selection is driven by whoever shouts loudest or demos best. You end up with overlapping tools, unassessed risks, vendor lock-in, and no consistent view of what AI is deployed across the organisation. A consistent evaluation framework ensures every AI tool is assessed against the same criteria before money is spent or data is shared providing the AI Governance Board with a consistent approach to assessment.
📝 Detailed Guidance
Build a an approach to assessment. For example a scoring matrix covering 9 dimensions, each scored: (1) Security and data handling – where does data go, who can access it, is it used for training? (2) Compliance and regulatory; GDPR, EU AI Act classification, sector-specific regulations. (3) Bias and fairness – has or how will the tool been evaluated for bias, are evaluation results published? (4)Red Teaming – has or how will the solution have Red Teaming conducted. (5) Cost and commercial terms; total cost of ownership including licences, API usage, training, and support. (6) Integration complexity; how does it fit with your existing stack? (7) Vendor lock-in risk – can you migrate away, is the data portable? (8) Business value and use case fit, does it solve a real, prioritised problem? (9) Sustainability and environmental impact,4 energy usage, carbon footprint. Set minimum thresholds: Security must score 4+, Compliance must score 3+. Any tool scoring below thresholds is rejected regardless of other scores. Include a mandatory week pilot phase with predefined success criteria before enterprise rollout.
✅ Key Actions
- ☐ Create an AI tool evaluation scoring matrix covering 8 dimensions
- ☐ Set minimum threshold scores for security (4+) and compliance (3+)
- ☐ Define a mandatory pilot phase (e.g. 4 weeks) with written success criteria
- ☐ Build a procurement checklist for AI vendors (data handling, SLAs, exit terms)
- ☐ Establish a central AI tool registry to track everything deployed
- ☐ Review and update the framework annually as the landscape evolves
- ☐ Revisit solutions to ensure no scope creep or graduation of the original solution
💡 Common Pitfalls
Making the evaluation so heavy that teams bypass it entirely and use shadow AI. Not including a 'fast track' for low-risk, pre-approved tool categories that may have a slightly different pattern. Evaluating only technical capabilities while ignoring data governance, bias, and exit terms. Letting vendor sales teams drive the evaluation timeline and criteria. Not providing value in the process i.e. providing feedback for improvement areas.
AI Ethics Policy & Principles
Define your organisation's AI ethics principles; fairness, transparency, accountability, safety, and human oversight. These principles guide every AI decision and should be public-facing where appropriate so your customers/consumers know you are adopting AI Responsibly.
📝 Why This Matters
Ethics principles aren't just a compliance exercise; they're the foundation that builds trust with employees, customers/consumers, and regulators. Without clear principles, teams make inconsistent ethical judgments, creating reputational and legal risk. Published principles also demonstrate maturity to customers and partners evaluating your AI practices. I have found in practice, ethical priorities and risk tolerances differ across sectors. For example, charities, gambling, and defence organisations each operate within distinct societal expectations and regulatory environments. Your ethical framework should therefore be applied in a way that reflects your organisationβs context and risk profile, while maintaining a consistent commitment to core principles such as fairness, accountability, and transparency.
📝 Detailed Guidance
Start with five core principles: (1) Fairness, AI should not discriminate or create unjust outcomes. (2) Transparency, users should know when they're interacting with AI, and AI decisions should be explainable. (3) Accountability – there must always be a human accountable for AI outcomes. (4) Safety, AI should not cause harm, and there should be mechanisms to stop it if it does. (5) Privacy, AI should respect data protection rights and minimise data usage. For each principle, define what it means in practice with specific examples. Publish the principles internally and externally. Train all AI users on them. Reference them in the AI Acceptable Use Policy.
✅ Key Actions
- ☐ Define 5 core AI ethics principles (fairness, transparency, accountability, safety, privacy)
- ☐ Write practical guidance for each principle with real examples
- ☐ Publish principles internally to all staff
- ☐ Consider publishing externally on your website if appropriate
- ☐ Reference principles in the AI Acceptable Use Policy
- ☐ Include ethics principles in all AI training programmes
- ☐ Review principles annually against emerging AI ethics frameworks
Entra ID: Identity & Access
Copilot and all AI tools inherit user permissions via Microsoft Graph. If identity controls are weak, AI will surface content to the wrong people. MFA, Conditional Access, and privileged role management are non-negotiable prerequisites.
📝 Why This Matters
Every AI interaction is bound by the signed-in user's permissions. Copilot and other tools within the Microsoft sphere can only access what the user can access, but most organisations have accumulated years of permission sprawl. A user in Marketing might have lingering access to an HR SharePoint site from a cross-functional project two years ago. Without MFA, a compromised account means an attacker has AI-powered search across everything that user can see. Identity is a critical foundational control for AI safety.
📝 Detailed Guidance
Phase 1- MFA: Enforce MFA for all users via Conditional Access. If you don't have Entra ID P1/P2, enable Security Defaults (free, enables MFA for all users). Create a break-glass emergency access account excluded from all Conditional Access policies. Phase 2 – Conditional Access: Create policies targeting the Office 365 cloud app. Require compliant or Entra-joined devices. Block access from untrusted locations. Consider requiring device compliance specifically for Copilot. Start all new policies in Report-Only mode for 2 weeks before enforcing. Phase 3 – Privileged Roles: Audit all Global Admin, SharePoint Admin, Exchange Admin, and Teams Admin assignments. Use Privileged Identity Management (PIM) for just-in-time elevation (requires Entra ID P2). Remove all standing admin role assignments. Phase 4 – Access Reviews: Set up quarterly recurring Access Reviews for all Microsoft 365 Groups that control access to sensitive SharePoint sites, Teams, and applications. Configure auto-removal of unconfirmed access after the review period. Important to note Agent ID will be released soon in Entra ID and will form part of the guidance in this framework.
📁 Microsoft Admin Paths
MFA/CA : Entra Admin Centre - Protection - Conditional Access - New Policy
Roles : Entra Admin Centre - Identity - Roles and administrators - Review assignments
PIM : Entra Admin Centre - Identity Governance - Privileged Identity Management
Access Reviews : Entra Admin Centre - Identity Governance - Access Reviews - New Review
✅ Key Actions
- ☐ Enforce MFA for all users via Conditional Access (or Security Defaults)
- ☐ Create a break-glass emergency access account excluded from all policies
- ☐ Create Copilot-specific Conditional Access policies (device compliance, trusted locations)
- ☐ Start all Conditional Access policies in Report-Only mode for 2 weeks before enforcing (avoid single catch all policies)
- ☐ Audit all privileged role assignments β target zero standing Global Admins
- ☐ Enable Privileged Identity Management (PIM) for just-in-time admin access
- ☐ Set up quarterly Entra ID Access Reviews for all sensitive M365 Groups
- ☐ Configure auto-removal for unconfirmed access after review deadline
💡 Common Pitfalls
Enabling MFA without communicating to users first (helpdesk flood on day one). Not creating a break-glass account (getting locked out of your own tenant). Applying Conditional Access without Report-Only testing (blocking legitimate users). Forgetting that admin accounts have broader Microsoft Graph access than standard users, a compromised admin account is catastrophic with Copilot enabled.
Purview & SharePoint: Oversharing Discovery & Remediation
Before enabling M365 Copilot, identify and fix data exposure risks across SharePoint, OneDrive, and Teams. AI solutions like M365 Copilot is a megaphone for crap data governance, it proactively surfaces overshared content. It is one of those beautiful things about AI – it highlights and makes companies do the work they should have done ages ago! I have seen a huge increase in internal data leaks due to this.
📝 Why This Matters
M365 Copilot doesn't just let users stumble onto overshared content, it proactively surfaces it in response to queries back to my megaphone comment for bad data practices…. A finance report shared with 'People in your organisation' two years ago is now one M365 Copilot prompt away from anyone in the company. A draft press release on a public SharePoint site becomes accessible to every M365 Copilot user. Oversharing remediation is the single highest-impact M365 Copilot readiness activity and you should NOT roll out M365 Copilot until you have remediated these issues!
📝 Discovery Phase
Step 1: Use SharePoint data access governance and sharing reports to identify sites with βAnyoneβ links, organisation-wide access, external sharing, and large permission footprints. Step 2: In Microsoft Purview, use DSPM for AI to identify overshared files, external access, broad internal access, and missing sensitivity labels. Expand coverage with custom assessments where needed. Step 3: Use SharePoint Advanced Management to identify inactive sites, missing ownership, excessive sharing, and complex permission structures. Step 4: Audit public Microsoft 365 Groups and Teams. Public groups increase exposure because any user can discover and join them. M365 Copilot surfaces content based on user access, so overly broad permissions increase data exposure.
📝 Remediation Phase
Step 1: Initiate site access reviews for overshared sites. Site owners validate or revoke access within a defined timeframe (for example 14 days), with escalation for non-response. Step 2: Apply Restricted Access Control (RAC) to high-sensitivity sites. This enforces access so only a defined security group can access the site. Copilot surfaces content only to users with access. Step 3: Enable Restricted Content Discovery (RCD) where access should remain but discoverability should be reduced. This limits visibility in search and M365 Copilot and is useful as a temporary control. Step 4: Restrict broad access mechanisms such as βEveryone except external usersβ and organisation-wide sharing links at the tenant level to prevent new oversharing. Step 5: Archive or delete inactive sites and enforce ownership policies (minimum two owners). Archiving reduces exposure and participation in discovery experiences.
✅ Key Actions
- ☐ Run SharePoint sharing links and permission state reports
- ☐ Run Purview DSPM data risk assessment for full tenant
- ☐ Run SAM permissions state report across all sites
- ☐ Audit all public Teams and M365 Groups u2014 switch sensitive ones to private
- ☐ Initiate Site Access Reviews for all overshared sites with 14-day deadline
- ☐ Apply Restricted Access Control (RAC) on HR, Finance, Legal, Executive sites
- ☐ Enable Restricted Content Discovery (RCD) for sensitive project sites
- ☐ Disable 'Everyone except external users' (EEEU) at tenant level
- ☐ Clean up inactive sites (90+ days) via lifecycle management
- ☐ Set up Site Ownership policy requiring minimum 2 owners per site
- ☐ Re-run sharing reports after 30 days to measure remediation progress
💡 Common Pitfalls
Enabling M365 Copilot before running oversharing discovery (the damage is immediate and visible). Only checking the top 100 sites, oversharing often lives in smaller, forgotten sites. Not setting deadlines for Site Access Reviews (they never get completed without a deadline). Forgetting OneDrive, users share OneDrive folders with 'Anyone' links too. Not re-running reports after remediation to verify progress.
Purview: Classifications, Sensitivity Labels & DLP for Copilot
Once data has been discovered in Purview, the next step is to classify it using sensitive information types and classifiers, then apply sensitivity labels to define its level of protection. These labels, combined with DLP policies, govern how data can be accessed, shared, and used across Microsoft 365, ensuring that M365 Copilot only surfaces content users are permitted to see. Labels protect the future; oversharing remediation fixes the past reducing those disastrous internal data leaks
📝 Why This Matters
Oversharing remediation fixes historical permission issues, where sensitivity labels protect ongoing content creation, ensuring you are not continuously having to run governance reviews of historical data (although it is still best practice to check!). However, if data is correctly labelled but still broadly accessible, it can be surfaced by M365 Copilot because it operates within the userβs existing permissions. DLP policies and label-based protections provide an additional control layer by restricting how sensitive data can be accessed, shared or processed, reducing the risk of inappropriate exposure through AI. This forms your ongoing, automated protection layer.
📝 Detailed Guidance
Step 1: Create Labels Build a sensitivity label taxonomy aligned to business risk. A simple traffic-light model works well: Public (green) No restrictions Internal (yellow) Organisation-only, default for most content Confidential (amber) Restricted access, encryption applied Highly Confidential (red) Encrypted, access tightly controlled, no external sharing, strongest restrictions applied Step 2: Publish and Default Publish labels to all users via a label policy. Set Internal as the default label for new content Require labelling where possible (e.g. Office apps, containers) This ensures content is consistently classified from creation. Step 3: Library Defaults Apply default sensitivity labels at the container level for critical locations. Examples: HR libraries = Confidential Finance libraries = Confidential Legal libraries = Highly Confidential This enforces protection even when users do not manually label content. Step 4: DLP for AI and M365 Copilot Create DLP policies in Purview covering Microsoft 365 locations and AI interactions. Rule 1: Restrict access and sharing of content labelled Highly Confidential Rule 2: Detect sensitive information types (e.g. national insurance numbers, financial data, health data) and block or warn on inappropriate use These policies act as guardrails, reducing the risk of sensitive data being surfaced or misused, including within Copilot experiences. Step 5: Auto-labelling Implement auto-labelling to scale classification. Use: Sensitive information types Trainable classifiers Approach: Start in simulation mode for 2 weeks Review matches and false positives Move to enforcement This ensures consistent labelling without relying solely on user behaviour.
✅ Key Actions
- ☐ Create sensitivity label taxonomy (e.g. Public, Internal, Confidential, Highly Confidential)
- ☐ Configure Highly Confidential with encryption, access restrictions, and strongest protection controls
- ☐ Publish labels to all users with Internal as the default label
- ☐ Set default sensitivity labels on HR, Finance, Legal document and other libraries
- ☐ Create DLP policies restricting access, sharing and use of Highly Confidential content
- ☐ Create DLP rules detecting sensitive information types (e.g. NI numbers, financial data, health data) and blocking or warning on inappropriate use
- ☐ Create auto-labelling policies for sensitive information types and trainable classifiers
- ☐ Run auto-labelling in simulation mode for 2 weeks, review results, then enforce
- ☐ Enable Activity Explorer and DSPM to monitor sensitive data usage and exposure
- ☐ Train all users on the labelling model as part of the AI literacy programme
Purview: Audit & Compliance Monitoring
Once data is classified and protected, Microsoft Purview provides monitoring, audit and compliance capabilities to track how data is accessed, used and shared. This enables organisations to observe user behaviour, detect risk, and validate that protection controls are operating effectively so they can be adjusted if needed. In AI-enabled environments, it ensures interactions with tools such as Microsoft 365 Copilot remain governed and accountable.
📝 Why This Matters
Monitoring and audit provide assurance that controls are working in practice, not just by design. While M365 Copilot operates within user permissions, its value depends on access to organisational data, meaning any oversharing or misuse can still be surfaced through AI interactions. Purview audit and monitoring capabilities provide visibility into user activity and underlying data access, allowing organisations to detect inappropriate behaviour, investigate incidents, and demonstrate compliance enabling highlights of gaps quickly. This creates a continuous, evidence-based protection layer for AI adoption.
✅ Key Actions
- ☐ Enable Unified Audit Log across Microsoft 365
- ☐ Enable Activity Explorer for sensitivity labels and DLP visibility
- ☐ Configure DSPM dashboards to identify oversharing and sensitive data exposure
- ☐ Configure DLP alerting for sensitive data access and policy violations
- ☐ Define alert thresholds for high-risk activities (e.g. mass downloads, unusual access patterns)
- ☐ Implement Insider Risk Management policies for data exfiltration and misuse scenarios
- ☐ Correlate M365 Copilot usage events with underlying data access logs for investigation
- ☐ Integrate Purview alerts with SOC tooling (e.g. Sentinel)
- ☐ Establish monitoring and review cadence (e.g. weekly operational review, fold into your monthly governance board)
- ☐ Define investigation, escalation and incident response procedures
- ☐ Align monitoring outputs to compliance frameworks (e.g. GDPR, ISO 27001, ISO 42001)
- ☐ Train security, compliance and governance teams on interpreting audit and monitoring signals
AI Red Teaming
Proactively test AI systems for vulnerabilities β prompt injection, data leakage, bias in outputs, hallucination on your organisation's data, and jailbreak attempts. Quarterly internal testing, annual external assessment.
📝 Why This Matters
AI systems have unique attack surfaces that traditional penetration testing doesn't cover. Prompt injection can trick AI into revealing content from other users' files. Creative prompting can extract sensitive data that DLP policies might miss. Bias testing on your specific organisational data reveals issues that generic benchmarks won't catch. Red teaming finds these vulnerabilities before users, attackers, or regulators do.
📝 Detailed Guidance
Establish a quarterly internal red teaming programme covering five areas: (1) Prompt injection, can you trick Copilot into revealing content from other users' files, bypassing permissions, or ignoring system instructions? (2) Data leakage, can you extract sensitive data (salary information, personal data, confidential strategy documents) through creative, indirect prompting? (3) Bias testing, do Copilot outputs show gender, racial, cultural, or other biases when operating on your organisation's specific data? (4) Hallucination testing, does Copilot fabricate facts about your organisation, make up policies that don't exist, or cite non-existent documents? (5) Jailbreak, can safety controls be bypassed to produce harmful, inappropriate, or policy-violating outputs? Document all findings with severity ratings (Critical, High, Medium, Low). Feed results into governance policies, DLP rules, and user training. Consider an annual third-party red team assessment for independent, expert-level validation.
✅ Key Actions
- ☐ Establish quarterly AI red teaming cadence
- ☐ Test prompt injection and permission bypass attempts
- ☐ Test sensitive data extraction via indirect prompting
- ☐ Assess output bias on organisation-specific data and scenarios
- ☐ Test hallucination rates on internal content (policies, procedures, facts)
- ☐ Attempt jailbreak and safety control bypass
- ☐ Document all findings with severity ratings
- ☐ Feed findings into DLP policies, governance rules, and training content
- ☐ Share anonymised findings with governance board
- ☐ Commission annual third-party red team assessment
AI Champions Network
Recruit AI champions per department with early access to new tools, advanced training, and a direct feedback channel to the governance board. Champions drive grassroots adoption through peer credibility.
📝 Why This Matters
Top-down mandates drive compliance. Peer advocacy drives genuine adoption. Champions are credible because they do the same job as their colleagues β when a champion in Finance demonstrates 2 hours saved per week on reporting, it's more persuasive than any executive presentation. Champions also serve as an early warning system for issues and a rich source of real-world use case ideas.
✅ Key Actions
- ☐ Recruit 1β2 AI champions per department (look for curious, respected, influential people)
- ☐ Give champions early access to new AI tools and features
- ☐ Provide Level 2+ training as a minimum, Level 3 for technical champions
- ☐ Establish a monthly AI champions community of practice
- ☐ Create a direct feedback channel from champions to governance board
- ☐ Have champions present use cases and tips at departmental team meetings
- ☐ Recognise and reward champion contributions visibly
- ☐ Give champions a role in new-joiner AI onboarding
Tailored Training & Collateral
Test text
Your plan is ready
Calculating your summaryβ¦
Here’s your summary β review gaps, then export.
MATURITY BY CATEGORY
SEE HOW YOU COMPARE TO OTHERS
Your scores vs. typical scores from organisations at a similar stage of AI adoption.
ⓘ Benchmarks based on aggregated self-assessments from organisations across sectors.
⚠ COVERAGE GAPS
Export Your Plan
Download a board-ready document with your maturity assessment, action plan, gap analysis, and 90-day quickstart β including your notes.
Fill in your details above to unlock the download. We won’t spam you.
✓ Your plan has been saved. Your reference code:
SHARE OR REVISIT YOUR PLAN
Bookmark this link to revisit your plan and re-assess maturity quarterly. Share it with colleagues to align your leadership team.
Come back in [month] to re-assess your maturity and track progress. We’ll email you a reminder with your current scores.