AI that actually ships for small businesses
Most SMB "AI strategy" is a deck. Here's the three-layer approach we use to pick AI projects that deliver measured business outcomes — not science experiments.
If you run a small or mid-size business and you're getting sold "AI strategy" right now, you're not alone — and most of what's being sold is vapor.
Here's what we actually see working. It's not glamorous, but it ships, gets measured, and pays for itself.
Layer 1: Embedded AI in tools you already own
The highest-ROI AI work in most SMBs is turning on features that are already in the software you're paying for. Your CRM probably has AI-powered deal scoring. Your email tool probably has AI subject-line testing. Your support platform probably has AI summarization.
You don't need an AI strategy to use these. You need 30 minutes with the feature tour and someone to set up the tracking.
When to start here: always. This is the floor, not the ceiling.
Layer 2: Task-specific agents for bounded workflows
Once the embedded stuff is running, the next layer is task-specific AI agents that automate bounded, repetitive knowledge work. Not "do my job," but "do this specific step that I do 50 times a week":
- Extract structured data from inbound PDFs and route to the right system.
- Summarize yesterday's support tickets into a morning briefing.
- Classify inbound leads by fit score and draft a first-touch response.
- Generate first-draft proposals from a structured intake form.
These are tractable. They usually take days or weeks to build, not quarters. They have clear measurement (time saved, error rate, reply quality). And they can be deployed with guardrails — human review on the first N outputs, automatic escalation on low confidence.
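The guardrail pattern above is simple enough to sketch. Here's a minimal illustration (all names and thresholds are hypothetical, not from any specific framework): every output is routed to human review until the agent has a track record, and low-confidence outputs escalate to a human regardless.

```python
from dataclasses import dataclass

REVIEW_FIRST_N = 50       # a human reviews the first N outputs
CONFIDENCE_FLOOR = 0.8    # below this, escalate no matter what

@dataclass
class GuardedAgent:
    """Wraps an agent step so outputs are reviewed or escalated,
    never auto-shipped before the system has earned trust."""
    handled: int = 0

    def route(self, confidence: float) -> str:
        self.handled += 1
        if confidence < CONFIDENCE_FLOOR:
            return "escalate"        # low confidence: a human decides
        if self.handled <= REVIEW_FIRST_N:
            return "human_review"    # early outputs: human spot-checks
        return "auto"                # confident and proven: ship it

agent = GuardedAgent()
print(agent.route(0.95))  # first output goes to review even at high confidence
```

The point of the sketch: "auto" is the last state the system reaches, not the first, and the thresholds are explicit numbers you can tune as the error rate becomes visible.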
When to start here: after Layer 1 is exhausted, and when you can name the specific step, the current cost in hours, and the quality bar.
Layer 3: End-to-end multi-step agents
This is the layer getting all the press. It's also where most SMB projects go to die — because it's where the quality bar rises sharply and the measurement gets fuzzy.
A multi-step agent that plans, retrieves, decides, and acts is a real engineering system. You need evaluation datasets, guardrails, fallback paths, logging, and ongoing tuning. It's doable. But you shouldn't start here.
When to start here: after Layer 2 is running and generating confidence, and only when the workflow it targets is measurably more valuable than what Layer 2 can handle.
The pattern that doesn't work
Starting at Layer 3 because it's exciting: commissioning a proof-of-concept "AI agent" that impresses in the demo, then quietly gets unplugged three months later because nobody on the team can tell whether it's actually helping.
The pattern that works is the opposite: ship Layer 1 in weeks, Layer 2 in months, Layer 3 when you've earned the complexity.
How to pick the first project
Three questions:
- What's a step my team does 50+ times a week that involves reading, summarizing, or routing? That's a Layer 2 candidate.
- Can I name a quality threshold? ("90% of generated drafts don't need edits" is a threshold. "It should be good" is not.)
- Can I afford for the first version to be wrong 20% of the time as long as a human catches it? If yes, you can ship. If no, this isn't a starting project.
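A named threshold only matters if someone records it. A minimal sketch of tracking the "90% of drafts don't need edits" bar (the sample data and function name are illustrative, not a real tool):

```python
def edit_free_rate(outcomes: list[bool]) -> float:
    """Fraction of drafts a human shipped without edits.
    outcomes: True = no edits needed, False = human had to fix it."""
    if not outcomes:
        return 0.0
    return sum(outcomes) / len(outcomes)

THRESHOLD = 0.90  # "90% of generated drafts don't need edits"

# Week one: 43 of 50 drafts went out untouched.
week_one = [True] * 43 + [False] * 7
rate = edit_free_rate(week_one)
print(f"{rate:.0%} edit-free; bar {'met' if rate >= THRESHOLD else 'missed'}")
```

A spreadsheet does the same job; what matters is that "needed edits" is logged per output, so the threshold is a measurement rather than an impression.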
If you answered yes to all three, you have your first AI project. Scope it to three weeks. Measure it. Ship it. Then go find the next one.
That's how AI actually arrives in a business — incrementally, measurably, and without the drama.