Buying AI in government without buying regret

Public agencies are under pressure to “do AI.” The risk is buying tools that demo well, then stall in delivery, governance, and adoption. The answer is outcomes-first procurement with portability, evidence trails, and release-by-release controls built in.
The question behind this piece
AI vendors can show compelling demos in days. Government value is realized over months through delivery, process change, and operational adoption. Traditional procurement often rewards polish, not outcomes, and it struggles to manage data rights, model change, and accountability once the contract is signed. How should public-sector teams procure AI so value is measurable, lock-in is minimized, and delivery governance is strong enough to scale?
Why this matters now
Over the last couple of years, capability became accessible enough that almost every software vendor can claim to be "AI-powered." Selection got harder because the noise floor rose. Many public institutions are responding with clearer internal guidance on responsible GenAI use, but procurement still needs a practical way to distinguish real workflow impact from marketing.
Scrutiny has also increased. AI decisions attract attention from privacy offices, internal audit, labor groups, media, and citizens. When projects stall, the narrative becomes waste, not experimentation. Governments in multiple jurisdictions are now pushing for more transparency and governance around AI use, including in procurement.
Finally, AI behaves differently from a fixed software purchase. Prompts, policies, and models change. Data pipelines and monitoring matter more than the interface. If your contract assumes the system is static, you pay twice: once for the tool, and again for the rework needed to make it safe and adoptable.
In AI procurement, the contract becomes the operating model.
Our perspective
Government should procure AI the way it should procure any high-uncertainty capability: define outcomes, prove feasibility quickly on real workflows, contract for delivery, and embed governance and evidence trails from day one. The goal is to buy results, not a platform.

Start with an outcome brief, not a requirements list. A strong AI outcome brief includes:
- The workflow and user group (case workers, contact center agents, inspectors).
- The baseline (cycle time, rework rate, backlog, cost per case, error rate).
- The target improvement range and constraints (privacy, accessibility, bilingual service).
- The adoption requirement (what changes in the operating process, not just the UI).
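To make the brief concrete, it can be captured as structured data so baselines, targets, and constraints are explicit before any vendor conversation. This is an illustrative sketch, not a standard: every field name and example value below is an assumption.

```python
from dataclasses import dataclass

@dataclass
class OutcomeBrief:
    """Illustrative AI outcome brief (field names and values are assumptions)."""
    workflow: str              # the process being improved
    user_group: str            # who operates it day to day
    baseline: dict             # measured starting point per KPI
    target: dict               # acceptable improvement range per KPI
    constraints: list          # e.g. privacy, accessibility, bilingual service
    adoption_requirement: str  # what changes in the operating process, not just the UI

brief = OutcomeBrief(
    workflow="benefits case triage",
    user_group="case workers",
    baseline={"cycle_time_days": 12.0, "rework_rate": 0.18},
    target={"cycle_time_days": (7.0, 9.0), "rework_rate": (0.08, 0.12)},
    constraints=["privacy", "accessibility", "bilingual service"],
    adoption_requirement="triage decisions logged with reason codes",
)
print(brief.baseline["cycle_time_days"])
```

Writing the brief this way forces the team to measure the baseline first, which is exactly the discipline the feasibility sprint depends on.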
Then select based on proof, not promises. The most reliable pattern is a two-stage model:
I. Discovery and feasibility sprint: demonstrate value on real or representative data, with clear evaluation criteria and security constraints.
II. Scaled delivery phase: only a vendor that proves feasibility moves into implementation, with benefits realization and change management built into scope.
This matches how many governments are already thinking about AI: risk assessment, documented mitigations, and controls that scale with impact, not one-time signoff.
Contracting needs to reflect AI’s realities. The clauses that matter most are not exotic. They are the ones teams regret not having:
- Data rights and portability: explicit rights to use, retain, and migrate data, including derived artifacts created for the solution.
- Model and prompt change governance: how changes are approved, tested, documented, and monitored release by release.
- Performance tied to outcomes: workflow KPIs with measurement methodology defined, not uptime alone.
- Exit-ready obligations: modular integration, documented interfaces, and clear offboarding requirements.
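"Performance tied to outcomes" only works if the measurement arithmetic is agreed up front. As a minimal sketch (the function name and defaults are assumptions; the contract must still define the sampling window and exclusions):

```python
def relative_improvement(baseline: float, measured: float,
                         lower_is_better: bool = True) -> float:
    """Fractional improvement of a workflow KPI against its contracted baseline.

    Shows only the arithmetic; the measurement methodology (sampling window,
    exclusions, data source) belongs in the contract schedule.
    """
    if baseline == 0:
        raise ValueError("baseline must be non-zero")
    delta = (baseline - measured) if lower_is_better else (measured - baseline)
    return delta / baseline

# Cycle time dropped from 12.0 to 9.0 days: a 25% improvement.
print(relative_improvement(12.0, 9.0))  # 0.25
```

Defining the formula this plainly avoids the common dispute where vendor and buyer each compute "improvement" against a different baseline.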
To reduce lock-in, avoid bundling everything into one opaque platform decision before value is proven. Buy in modules aligned to the value chain: data access, workflow layer, model services, evaluation, and monitoring. If you need a prime vendor, require transparency on subcontracting and portability obligations.
Delivery governance is the final control. A practical AI delivery model includes:
- A named product owner with authority over workflow decisions.
- A joint cadence with weekly metrics review and issue resolution.
- A privacy and model-risk review gate tied to each release, not a one-time event.
- A benefits realization plan with baseline, targets, measurement, and evidence storage.
If you want a simple starting playbook, use three gates:
Gate 1: Feasibility (does it work on real workflows and data?)
Gate 2: Operability (can we run it safely, with monitoring and controls?)
Gate 3: Adoption (will staff use it, and will metrics move sustainably?)
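The three gates are strictly sequential: a later gate does not count until the earlier ones are cleared. A hypothetical checklist sketch (gate names come from the list above; the function and data shape are illustrative assumptions):

```python
# Ordered proof gates: (name, pass criterion). Strictly sequential.
GATES = [
    ("Feasibility", "works on real workflows and data"),
    ("Operability", "runs safely with monitoring and controls"),
    ("Adoption", "staff use it and metrics move sustainably"),
]

def next_gate(results):
    """Return the first gate not yet passed, or None if all are cleared.

    `results` maps gate name -> bool. Because gates are sequential, passing
    a later gate does not advance the project past an earlier failure.
    """
    for name, _criterion in GATES:
        if not results.get(name, False):
            return name
    return None

print(next_gate({"Feasibility": True}))  # -> Operability
print(next_gate({"Feasibility": True, "Operability": True, "Adoption": True}))  # -> None
```

In practice each gate decision would be backed by the evidence trail (metrics, review signoffs) rather than a bare boolean, but the control flow is the same.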
Buy outcomes with proof gates, and you avoid both pilot purgatory and vendor dependency.
Strathen Group offers an Outcomes-First AI Procurement Blueprint: a timeboxed engagement that produces an outcome brief, vendor evaluation rubric, model-risk and privacy gates, and contract-ready clauses for data rights, change control, and exit readiness. If you are planning an AI buy in Alberta or Canada, contact Strathen Group to scope a feasibility sprint and a delivery pilot for one high-value workflow.
