Perspective
5 min read

Leverage AI agents as operator co-pilots, not autopilots

Industry

Energy & Utilities

Capabilities

Operational Excellence
GenAI & Agents
Data & Analytics

Signals of impact

  • Faster triage of critical alarms and incidents in plants and remote facilities.

  • More consistent first-response steps grounded in SOPs and past events.

  • Lower mean time to repair with fewer unnecessary escalations to senior engineers.

How we help
We design supervised operator co-pilots, including workflow redesign, guardrails, and the evidence trail required to scale safely.

A supervised operator co-pilot can turn alarms, SOPs, and incident history into clear next steps, without bypassing safety systems or operator judgment.

The question behind this piece

Field operations often run on a mix of alarms, habits, and local knowledge. A compressor trips, a pump vibrates, screens light up, and operators reach for manuals or memory. Safety is protected, but response quality can still depend on who is on shift and what they have seen before.

A different pattern is now possible: a supervised agent that sits beside the operator, watches alarms and key readings, retrieves the relevant playbook, checks related history, and drafts the next best steps for a human to confirm. The question is not whether the technology exists. It is how to design the co-pilot so it is safe, auditable, and useful in the moments that matter.

Why this matters now

Experienced operators are retiring, and newer hires often have less plant-specific pattern recognition. At the same time, control systems, historians, maintenance tools, and safety logs generate more signals than any individual can synthesize under pressure.

Scrutiny has also increased. After an incident, leaders need to explain what the operator saw, what guidance was available, what actions were taken, and why. “We followed procedure” is not enough without a clear evidence trail.

Finally, modern language models and retrieval methods can now connect live conditions to SOPs and event history in near real time. The constraint is no longer capability. The constraint is disciplined workflow design, control room fit, and governance that does not undermine safety culture.

In high-risk operations, the goal is not autonomy. It is raising the floor of first response.

Our perspective

The highest-value agent in oil and gas field operations is not an autonomous controller. It is an operator co-pilot that observes, retrieves, and proposes, while humans still decide and execute. “Agentic” should mean the system can monitor conditions, compare them to known patterns, and recommend actions, not take them.

Two layers must be designed together.

  1. The operational knowledge layer

    This layer encodes how the plant is supposed to behave and how teams respond when it does not. It brings together equipment hierarchy, alarm philosophy and prioritization rules, SOPs and emergency playbooks, and incident and maintenance history for critical assets. The goal is not a perfect digital twin. It is a reliable, versioned source of operational guidance the co-pilot can cite.

    The most important design choice is traceability. Recommendations must be tied back to the exact SOP section, alarm rule, or historical precedent that informed them. If the co-pilot cannot show its sources, it will not survive audit, and operators will not trust it on a bad day.
  2. The co-pilot layer

    This is the advisory agent that orchestrates retrieval and drafts the operator-facing output. The pattern should be simple: observe the current alarms and readings, retrieve the relevant guidance and similar past events, and propose a ranked set of next steps with explicit confidence and escalation flags.

    Outputs that work in practice are concise and operational: what to focus on first, what might be happening, the first-response checklist pulled from SOPs, and what requires supervisor approval or cross-team coordination. The co-pilot should never “invent procedure.” It should assemble guidance from approved sources into a context-specific response.

    Governance and onboarding matter as much as prompts: keep the co-pilot read-only to operational data, start with narrow scope (one asset class or alarm family), keep every recommendation advisory, and log what the system saw and proposed, plus the operator’s action and override. Fit matters too. If the tool lives outside the console and uses unfamiliar language, it will be ignored or used informally.
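The observe-retrieve-propose pattern, with traceable citations and an advisory audit log, can be sketched in a few dozen lines. This is a minimal illustration, not a real API: every name here (Citation, Recommendation, propose_next_steps, the guidance fields, the confidence floor) is a hypothetical placeholder for whatever the retrieval and logging stack actually provides.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Citation:
    source_id: str  # e.g. an SOP section or incident report ID
    excerpt: str    # the passage that informed the recommendation

@dataclass
class Recommendation:
    action: str
    confidence: float            # 0..1, drives escalation flags
    citations: list[Citation]    # every step traces back to approved sources
    needs_supervisor: bool = False

@dataclass
class AuditRecord:
    observed: dict               # alarms and readings the co-pilot saw
    proposed: list[Recommendation]
    operator_action: str = ""    # filled in after the human decides
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Illustrative threshold: below this match score, flag for supervisor review.
CONFIDENCE_FLOOR = 0.6

def propose_next_steps(observed: dict, retrieved_guidance: list[dict]):
    """Assemble advisory steps only from retrieved, citable guidance.

    The co-pilot never invents procedure: each recommendation wraps a
    retrieved guidance item and carries its citation. Output is ranked
    but advisory; the operator still decides and executes.
    """
    recs = [
        Recommendation(
            action=g["step"],
            confidence=g["match_score"],
            citations=[Citation(g["source_id"], g["excerpt"])],
            needs_supervisor=g["match_score"] < CONFIDENCE_FLOOR,
        )
        for g in retrieved_guidance
    ]
    recs.sort(key=lambda r: r.confidence, reverse=True)
    # Log what was seen and proposed; the operator's action is recorded later.
    return recs, AuditRecord(observed=observed, proposed=recs)
```

The key design choice the sketch encodes is that a recommendation cannot exist without a citation, and low-confidence matches escalate rather than advise.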

A practical starting path

A pragmatic path is to prove value in one high-frequency, high-consequence scenario, then expand.

  • Pick one use case with clear value. Compressor trips, pump vibration alarms, and instrument air issues are common starting points. Map the current workflow, the decision points, and where delays and rework occur.

  • Curate the minimum viable knowledge set. Collect the relevant SOPs, alarm matrices, incident reports, and maintenance logs. Clean them enough that steps and conditions are explicit, and version them.

  • Pilot in advisory mode and measure usefulness. Track time to first correct action, missed procedural steps, escalation frequency, and operator feedback.

  • Use incident reviews to refine retrieval, wording, and escalation logic. Expand only after the co-pilot is consistently helpful and predictable.
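The pilot metrics named above (time to first correct action, missed procedural steps, escalation frequency, operator feedback) lend themselves to a simple per-incident scorecard. The sketch below is illustrative only: the field names and the scorecard function are assumptions, and in practice these numbers would be derived from console and audit logs rather than entered by hand.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class IncidentOutcome:
    seconds_to_first_correct_action: float
    missed_procedural_steps: int
    escalated: bool
    operator_rating: int  # e.g. 1-5 usefulness rating from the operator

def scorecard(outcomes: list[IncidentOutcome]) -> dict:
    """Aggregate per-incident outcomes into the pilot's headline metrics."""
    n = len(outcomes)
    return {
        "mean_time_to_first_correct_action_s": mean(
            o.seconds_to_first_correct_action for o in outcomes
        ),
        "missed_steps_per_incident": sum(
            o.missed_procedural_steps for o in outcomes
        ) / n,
        "escalation_rate": sum(o.escalated for o in outcomes) / n,
        "mean_operator_rating": mean(o.operator_rating for o in outcomes),
    }
```

Tracked over time, a scorecard like this gives the "consistently helpful and predictable" threshold a concrete, reviewable form before any expansion decision.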

The aim is not a self-directed plant. It is a supervised co-pilot that makes best practice easier to follow, especially on the days when experience is scarce and pressure is high.

The safest agent is the one that improves human decisions without hiding how it reached its advice.

Strathen Group can run a structured working session with operations, reliability, OT, safety, and risk leaders to pressure-test the concept before any deployment.

Typical output from an engagement can include:

  • A use-case shortlist and selection logic tied to risk, frequency, and value.
  • A reference workflow showing where the co-pilot advises and where humans decide.
  • A knowledge and evidence design: sources, versioning, citation rules, and logging requirements.
  • A guardrails and escalation model: prohibited actions, confidence thresholds, and stop conditions.
  • A pilot plan and measurement scorecard to prove value and safety before scaling.

We can then provide light ongoing support through the pilot and early scale to help refine the operating cadence, improve adoption, and keep the evidence trail audit-ready as usage grows.

Bhuvan Maingi

Managing Partner, Strathen Group
