Every operations leader has a version of this spreadsheet: a list of internal tools the team has been requesting from engineering for nine months, with the last update being "we'll get to it after Q2". And every engineering lead has a version of this spreadsheet too: a list of internal tools they know would help ops, ranked below shipping the customer-facing roadmap.
Both lists are correct. Engineering can't afford to spend senior cycles on one-off ops tooling. Ops can't afford to wait three quarters for a refund console. The traditional answer was "hire a junior platform engineer", which mostly didn't happen. The 2026 answer is "ops ships it themselves with an AI builder, and engineering owns the parts that need to be owned".
The trick is knowing which parts go in which bucket. Here's the working framework.
The two failure modes
Most teams fail at internal tooling in one of two predictable ways:
- Underbuilding: ops makes do with Google Sheets, Retool patches, and SQL queries forwarded as screenshots. Volume grows, errors compound, audit becomes impossible. Six months later, an avoidable incident happens because access wasn't logged or someone hand-edited the wrong row.
- Overbuilding: ops gets engineering to build a "proper" internal tool. It takes ten weeks and behaves like a half-baked SaaS. By month three, ops has stopped using half the features and stuck a Google Sheet next to it for the part the tool got wrong.
The right answer is neither — it's fitting the tool to the lifecycle of the workflow. Some workflows live for a quarter and die. Some live for years. Build the right tool for each.
The four-question checklist
Before you decide whether to build with an AI builder or hand to engineering, answer these four questions for the specific workflow:
1. Will this workflow exist in six months?
If the workflow is responding to a specific market moment — fraud spike during the holidays, manual onboarding for the first 50 enterprise customers, refund processing during a payment-provider migration — it dies in a quarter. Build it as a Marcus tool. Throw it away when the moment passes.
If the workflow will be the same in two years — your billing reconciliation, your customer support back-office, your monthly close — engineering should own it. The lifetime maintenance cost will exceed the build cost; you want institutional ownership and a real codebase.
2. Does this touch production data with write access?
Read-only views, dashboards, search interfaces, audit reports, lookups for support — all of these are safe to build with an AI builder. The cost of getting them slightly wrong is low.
Write-access tools — refund console, fraud approval queue, account-merge tool, anything that mutates customer state — need real care. The right pattern in 2026: build the UI with an AI builder, but route every write through a thin engineering-owned API that validates, logs, and authorizes properly. Ops gets the tool same week; engineering owns the dangerous part.
The wrong pattern is letting an AI builder write directly to your production database. The cost of one bad ops user clicking the wrong button is a customer-impacting incident. Don't.
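The engineering-owned write path can be as small as one function. Here's a minimal sketch in Python — the names (`RefundRequest`, the `ops-refunds` role, `issue_refund`) are hypothetical stand-ins for your own identity and payment layers, not a prescribed API:

```python
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

log = logging.getLogger("refund-api")

ALLOWED_ROLES = {"ops-refunds"}  # hypothetical role name from your SSO provider

@dataclass
class RefundRequest:
    customer_id: str
    amount_cents: int
    reason: str
    requested_by: str  # ops user identity, taken from SSO, never from the UI

def handle_refund(req: RefundRequest, roles: set[str]) -> dict:
    # Authorize here: the AI-built UI never decides who may refund.
    if not (roles & ALLOWED_ROLES):
        return {"ok": False, "error": "forbidden"}
    # Validate here: business rules live in the endpoint, not the UI.
    if req.amount_cents <= 0 or req.amount_cents > 50_000_00:
        return {"ok": False, "error": "amount out of bounds"}
    # Log before mutating, so every attempt leaves a trace.
    log.info("refund customer=%s amount=%d by=%s at=%s",
             req.customer_id, req.amount_cents, req.requested_by,
             datetime.now(timezone.utc).isoformat())
    issue_refund(req.customer_id, req.amount_cents)
    return {"ok": True}

def issue_refund(customer_id: str, amount_cents: int) -> None:
    pass  # stub: in production this calls your payment provider
```

The point isn't the specifics; it's that authorization, validation, and logging sit behind one endpoint engineering owns, and the ops-built UI is just a caller.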
3. How many people will use it, and what's their authority level?
If the answer is "two ops people, both of whom report to the head of CX", the access control is socially enforced. A Marcus tool with a shared password and an audit log is fine.
If the answer is "everyone in support, including the new contractor who started Monday", you need real role-based access, deactivation tied to your HR system, and audit at a level that survives an external auditor's review. That's a tool with proper SSO, SCIM, and a backing service. Engineering should own it, even if the UI started as a Marcus prototype.
4. Is the audit log a "nice to have" or a regulatory artefact?
For a content-moderation tool that's run on editorial judgement, "we have a log" is enough. For a financial reconciliation tool that an auditor will read in six months, the log needs a schema, a retention policy, immutability, and tamper detection. The first is a Marcus tool. The second is an engineering-owned service.
Most ops teams don't know which category they're in until the auditor shows up. The honest move is to assume regulatory if any part of the data could end up in a court filing or a SOC 2 report; the cheap shortcuts stop being cheap the day it does.
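Tamper detection doesn't require exotic infrastructure. One common approach — not necessarily what your auditor will ask for — is a hash chain, where each log entry commits to the one before it, so any edit to history breaks every hash after it. A minimal sketch:

```python
import hashlib
import json

def append_entry(chain: list[dict], action: str, actor: str) -> list[dict]:
    # Each entry records the previous entry's hash, forming a chain.
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = {"action": action, "actor": actor, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return chain + [{**body, "hash": digest}]

def verify(chain: list[dict]) -> bool:
    # Recompute every hash; any edited field or reordered entry fails.
    prev = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev"] != prev or digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Retention and immutable storage still have to come from wherever you persist these rows; the chain only proves nobody quietly edited them.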
The right starting pattern
For 80% of internal-tool needs, the right pattern in 2026 looks like this:
- Ops describes the tool to Marcus in plain English: "search by customer email, see this month's transactions, click 'send reminder' to draft an email and stage it for my approval".
- Marcus builds the UI, hosts it, generates a working tool with seeded data the same morning.
- Engineering reviews the data access pattern. If the tool needs to read your real database, engineering exposes a read-only view through an internal API, not direct DB access. Twenty minutes of engineering time, not twenty days.
- For any write actions, engineering provides a small, single-purpose API endpoint with proper logging and authorization. The Marcus tool calls it. The endpoint is the engineering-owned interface; the UI is the ops-owned interface.
- Ops iterates on the UI by talking to Marcus. Engineering rarely touches it after the API is wired.
This pattern has the right division of labour: ops gets the velocity of an AI builder, engineering gets a thin, well-defined surface they actually want to maintain, and the parts of the system that need real engineering rigour (auth, audit, write paths) get it.
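The read-only view in that third step can be genuinely thin. A sketch of what the engineering-owned lookup might look like, using SQLite's read-only mode as a stand-in for whatever database you actually run (the table and column names here are hypothetical):

```python
import sqlite3

def recent_transactions(db_path: str, customer_email: str, limit: int = 50) -> list[tuple]:
    """Read-only lookup the ops-built UI calls; ops never holds DB credentials."""
    # mode=ro opens the database read-only, so this surface cannot mutate state.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        cur = conn.execute(
            "SELECT id, amount_cents, created_at FROM transactions "
            "WHERE customer_email = ? ORDER BY created_at DESC LIMIT ?",
            (customer_email, min(limit, 200)),  # cap result size server-side
        )
        return cur.fetchall()
    finally:
        conn.close()
```

Wrap a function like this in whatever internal HTTP framework you already use and the "twenty minutes of engineering time" estimate is realistic.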
The graduation moment
Three signals that an ops-owned Marcus tool is ready to graduate to engineering ownership:
- The tool is being used by more than 10 people, especially across multiple teams.
- An external regulator or auditor will read its output. Once SOC 2 or GDPR scope touches the tool, it needs the institutional backing of an engineering team.
- The tool has more than three custom integrations with other internal services. At that point, it's a small system, not a tool, and engineering should own it.
The graduation should be a planned hand-off, not an emergency. Marcus exports cleanly — the engineering team takes the static + Git export and rebuilds the parts that need to be rebuilt, keeping the parts that work. Most graduations cost less than the original "build it from scratch in engineering" path would have.
What ops gets wrong, what engineering gets wrong
Common ops mistakes when building Marcus tools:
- Using direct database credentials in the tool. Always go through an engineering-owned API.
- Not asking for an audit log. Every action the tool takes should write a row somewhere, even if it's a Google Sheet.
- Treating "the tool works" as "we're done". Tools rot — schedule a 30-minute review every six weeks.
Common engineering mistakes when supporting ops:
- Building the whole thing themselves "because Marcus isn't enterprise-ready". Two weeks of senior engineering on a refund console is a misuse of senior engineering.
- Refusing to expose any internal API to ops. Read-only API endpoints with rate limits are not a security risk; they're a velocity unlock.
- Not investing in the API layer at all. The pattern depends on engineering owning a clean, small, well-documented internal-API surface that ops tools can call.
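A rate limit on those read-only endpoints is a few lines of code, not a project. One way to sketch it — a classic token bucket, with illustrative names and parameters — looks like this:

```python
import time

class TokenBucket:
    """Minimal token bucket: refills `rate` tokens/sec, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill based on elapsed time, then spend one token if available.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

One bucket per ops tool (or per API key) is enough to make "what if the AI-built UI loops on our API" a non-event.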
The org chart implication
The team that's getting the most leverage out of AI builders for internal tooling has a small, dedicated platform engineering function (1–3 people in a 50-person company, 5–10 in a 500-person company) whose job is owning the internal-API surface that ops, sales, and support teams build against. Those engineers don't build tools — they build the substrate that lets non-engineers build tools safely.
That role is the highest-leverage hire most growing companies aren't making. It pays for itself in the second quarter.