Every software vendor has suddenly become an "AI company." Every product demo features a chatbot. Every LinkedIn post promises transformation.
But here's what nobody's talking about: most of what businesses actually need isn't artificial intelligence at all. It's automation—the kind we've had for decades, just applied thoughtfully to the right problems.
The confusion between automation and AI is costing companies real money. Some are over-investing in sophisticated technology for problems that need simple solutions. Others are hesitating on straightforward automation because they're overwhelmed by AI hype.
And when generative AI is the right tool, it comes with serious concerns—security, data governance, access control—that most vendors gloss over entirely.
This post is an attempt to cut through the noise. I'll explain the actual differences between automation technologies, where each makes sense, and the real risks you should be thinking about.
"AI" has become a meaningless marketing term. Everything from a simple email filter to a large language model gets the label. To have a useful conversation, we need better categories.
Here's how I think about automation technologies:
Rule-based automation is the oldest and most reliable form of automation: if X happens, do Y. No learning, no intelligence—just consistent execution of defined rules.
Examples: moving data between systems on a schedule, routing approvals by amount or department, generating notifications when a form is submitted, populating document templates.
Characteristics: deterministic, fully auditable, and cheap to run; the same input always produces the same output, and when something breaks you can trace exactly which rule fired.
Best for: High-volume, repetitive tasks with clear logic. Most business process automation falls here.
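To make the category concrete, here is a minimal sketch of rule-based routing in Python. The threshold, department name, and approval roles are hypothetical, not a recommended policy:

```python
def route_invoice(amount: float, department: str) -> str:
    """Route an invoice for approval using fixed, auditable rules.
    The threshold and roles here are illustrative only."""
    if amount >= 10_000:
        return "director_approval"
    if department == "capital_projects":
        return "project_manager_approval"
    return "auto_approve"

# Every decision is traceable: the same input always yields the same output,
# and fixing a bad decision means editing one visible rule.
```

Nothing here learns or adapts, which is exactly the point: the behavior is fully specified and fully inspectable.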
Pattern recognition covers machine learning models that identify patterns in historical data to classify, predict, or detect anomalies. These systems learn from examples rather than explicit rules.
Examples: document classification, predictive maintenance, fraud or anomaly detection in transactions and sensor data.
Characteristics: probabilistic rather than deterministic; accuracy depends on the quality and volume of the training data, and some error rate is unavoidable.
Best for: Classification, prediction, and anomaly detection where you have good historical data and can tolerate some error rate.
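To contrast with hand-written rules, here is a toy pattern-recognition sketch: a nearest-centroid classifier that learns from labeled historical examples. It uses only the standard library; a real system would use a library such as scikit-learn, and the feature values below are made up:

```python
from collections import defaultdict

def train_centroids(examples):
    """Learn one centroid (mean feature vector) per label from labeled data."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for features, label in examples:
        if sums[label] is None:
            sums[label] = list(features)
        else:
            sums[label] = [a + b for a, b in zip(sums[label], features)]
        counts[label] += 1
    return {lbl: [v / counts[lbl] for v in vec] for lbl, vec in sums.items()}

def classify(centroids, features):
    """Predict the label whose centroid is closest (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))
```

The key difference from the rule-based sketch: nobody wrote down the decision boundary. It came from the data, which is also why it can be wrong in ways no rule ever is.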
Generative AI means large language models and similar systems that can generate new content—text, images, code—based on patterns learned from massive datasets. This is what everyone's talking about when they say "AI" today.
Examples: drafting and summarizing documents, answering questions over a document collection, generating first-pass text or code for human review.
Characteristics: handles unstructured, highly variable input well, but output is probabilistic and can be confidently wrong; behavior is hard to trace and audit.
Best for: Tasks requiring language understanding, content generation, or handling unstructured information—but only with appropriate oversight and controls.
Here's what I've observed after years of building these systems: the vast majority of operational improvements come from straightforward automation, not artificial intelligence.
When I audit a company's processes, I typically find:
60-70% of opportunities are rule-based automation. Moving data between systems. Routing approvals. Generating notifications. Populating templates. These are solved problems with mature, reliable tools.
20-30% of opportunities might benefit from pattern recognition—document classification, predictive maintenance, anomaly detection. But many of these can also be solved with good rules if you're willing to define them.
5-15% of opportunities are genuinely well-suited for generative AI. Usually involving unstructured text, content generation, or tasks that would require human judgment at scale.
The problem is that vendors are selling the 15% solution for the 70% problem. It's more expensive, harder to maintain, and introduces risks you don't need.
Rule-based automation isn't exciting. It doesn't make for good demos or conference talks. You can't raise venture capital to build "if-then" logic.
But it works. It's reliable. It's auditable. When something goes wrong, you can trace exactly what happened and fix it.
I've seen companies spend six figures on "AI-powered" document processing when they could have solved 80% of the problem with a well-designed form and some conditional routing. The remaining 20% might justify AI—but start with the 80% first.
Generative AI shines in specific situations:
Unstructured input that varies significantly. If every document or message you process is different, rules become impossible to maintain. Language models handle variation naturally.
Tasks requiring synthesis or summarization. Combining information from multiple sources, condensing long documents, or explaining complex material in simpler terms.
First-draft generation. When you need a starting point that a human will review and refine—not when you need finished output.
Semantic search and question-answering. Finding information based on meaning rather than keywords, especially across large document collections.
Handling edge cases at scale. When you have a mostly-automated process but too many exceptions for humans to handle manually.
The common thread in all of these is human oversight. Generative AI is a powerful assistant; it's a risky replacement.
Vendors selling generative AI tools have strong incentives to downplay the risks. But if you're responsible for your company's operations, security, or compliance, you need to understand them.
When you use a generative AI system, your data typically goes somewhere—to an API, a cloud service, or a third-party model.
Questions you should be asking: Where is your data sent and stored? How long is it retained? Is it used to train or improve the vendor's models? Who at the provider can access it?
Many popular AI tools explicitly state that user inputs may be used for model improvement. That might be fine for drafting a blog post; it's not fine for processing customer contracts or financial data.
The open-source alternative: Running models locally or on your own infrastructure avoids many of these concerns—but requires technical capability to deploy and maintain.
This is where I see the most dangerous gaps. Generative AI systems that can access your documents, databases, or internal systems need robust access controls—but these are often bolted on as an afterthought.
The problem: A language model doesn't understand permissions. If it can read a document, it will use that document to answer questions—regardless of whether the person asking should have access.
Scenarios that go wrong: an internal chatbot with access to HR files answers a salary question for anyone who asks; an assistant surfaces a confidential deal memo to an employee outside the deal team; a summarization tool pulls restricted financials into a report shared company-wide.
Building proper access controls for AI systems is genuinely hard. The technology is new enough that best practices are still emerging. If a vendor tells you it's simple, be skeptical.
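One workable pattern, sketched below with hypothetical names and a naive keyword search standing in for real retrieval, is to enforce permissions in the retrieval layer, so documents the requesting user cannot read never reach the model's context at all:

```python
def retrieve_context(user: str, query: str, documents: list, acl: dict) -> list:
    """Return candidate context passages for a query, filtered by permission.
    `acl` maps document ids to the set of users with read access.
    The permission check happens BEFORE anything is sent to the model."""
    allowed = [d for d in documents if user in acl.get(d["id"], set())]
    # Naive keyword matching stands in for real semantic search.
    terms = query.lower().split()
    return [d["text"] for d in allowed
            if any(t in d["text"].lower() for t in terms)]
```

The design choice matters: filtering the model's *output* is unreliable, because the model has already seen the restricted content. Filtering its *input* means there is nothing to leak.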
When an automated rule makes a bad decision, you can trace exactly what happened: this input triggered this rule, which produced this output. You can fix the rule and move on.
When a generative AI system makes a bad decision, tracing causation is much harder. Why did it say that? Because of training data? The prompt? Some interaction between the two? A hallucination?
Governance questions to consider: Can you explain why the system produced a given output? Can you reproduce that output on demand? Who is accountable when it's wrong? Can you demonstrate compliance to an auditor?
In regulated industries—healthcare, finance, government contracting—these aren't abstract concerns. They're compliance requirements that many AI implementations don't adequately address.
Generative AI models produce confident-sounding output that may be completely wrong. They don't know what they don't know. They can't distinguish between facts they've learned and plausible-sounding fabrications.
For internal brainstorming or first drafts that will be reviewed, this is manageable. For customer-facing applications, automated decision-making, or anything with compliance implications, it's a serious problem.
Mitigation approaches: grounding outputs in retrieved source documents, requiring citations that can be checked, keeping a human in the review loop, and restricting generative AI to low-stakes tasks.
None of these are perfect. Hallucination is a fundamental characteristic of how these models work, not a bug that will be fixed in the next version.
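One of the mitigations above, checkable citations, can be approximated mechanically: verify that each claim the model attributes to a source actually appears in that source. This is a crude sketch with exact substring matching; a real implementation would need fuzzy or semantic matching:

```python
def unsupported_claims(claims: list, sources: dict) -> list:
    """Flag claims whose quoted text does not appear in the cited source.
    `claims` is a list of (quoted_text, source_id) pairs extracted from
    model output; `sources` maps source ids to their full text."""
    flagged = []
    for quote, source_id in claims:
        source_text = sources.get(source_id, "")
        if quote.lower() not in source_text.lower():
            flagged.append(quote)
    return flagged
```

A check like this catches fabricated quotes, not subtler errors of paraphrase or omission, which is why it supplements human review rather than replacing it.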
When evaluating automation opportunities, I use a simple decision tree:
If you can write down the logic—even if it's complex—rule-based automation is probably the right answer. It's cheaper, more reliable, and easier to maintain.
Signs rules will work: the logic is stable over time, inputs are structured and predictable, exceptions are rare, and subject-matter experts can actually write the rules down.
If you're trying to predict outcomes or classify inputs based on historical patterns, machine learning might help—but only if you have good training data.
Signs ML might work: you have a large body of labeled historical data, the pattern is too complex or subtle to express as rules, and the process can tolerate some error rate.
If you need to understand, generate, or manipulate natural language at scale, generative AI capabilities may be appropriate—with proper controls.
Signs generative AI might work: inputs are unstructured and vary significantly, the task involves language understanding, synthesis, or content generation, and a human will review the output before it matters.
Before implementing any automation—but especially AI—ask: What does success look like, and how will you measure it? What happens when the system gets it wrong? What data does it touch, and who can access that data? Who maintains it, and at what ongoing cost?
If you can't answer these questions, you're not ready to implement.
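The decision tree above can be sketched as straight-line code. The question names and recommendation strings are my own shorthand, not a formal framework:

```python
def recommend_approach(can_write_rules: bool,
                       has_labeled_history: bool,
                       unstructured_language: bool) -> str:
    """Map the three questions from the decision tree to a recommendation,
    checked in order of increasing complexity and risk."""
    if can_write_rules:
        return "rule-based automation"
    if has_labeled_history and not unstructured_language:
        return "pattern recognition (ML)"
    if unstructured_language:
        return "generative AI, with human oversight"
    return "not ready to automate; clarify the process first"
```

The ordering is deliberate: the cheapest, most auditable option wins whenever it applies, and generative AI is the answer only after the simpler options are ruled out.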
If you're an operations leader trying to figure out where automation fits, here's my advice:
Map your workflows. Identify bottlenecks. Quantify the pain. The best opportunities become obvious when you understand the current state clearly.
The vendor promising AI transformation is selling you something. The consultant recommending their platform has incentives you should understand. Get independent assessment before committing.
Simple automation for simple problems. Complex technology only when complexity is required. You don't need a language model to route invoices.
If you're implementing anything that touches sensitive data—customer information, financial records, proprietary processes—security and access control aren't optional extras. Build them in from the start.
Every automated system needs ongoing care. Rules need updating. Models need retraining. Prompts need refinement. Who will do that work? What's the ongoing cost?
Automation should deliver measurable improvement. If you can't define success criteria upfront, you won't know if you've achieved them.
The opportunity is real. Most businesses have significant inefficiencies that automation can address. Done well, automation frees people from repetitive work, reduces errors, and lets you scale without proportionally scaling headcount.
But "done well" matters. The difference between successful automation and expensive failure usually isn't the technology—it's the assessment, planning, and implementation around it.
The companies I see succeeding: they map their processes first, match the simplest tool that solves each problem, and build in security, governance, and maintenance from the start.
The companies that struggle: they buy the technology first and go looking for problems second, skip the honest assessment, and treat security and governance as afterthoughts.
Knowing which approach you're taking—and being honest about it—is half the battle.
At Rational Boxes, we help companies cut through this noise. Our AI Audit is a structured assessment that maps your actual operations, identifies automation opportunities, and evaluates them honestly—including when the answer is "this isn't worth pursuing" or "simple automation beats AI here."
We're not selling software. We don't take commissions from vendors. Our only incentive is giving you accurate information.
If you're wondering where automation fits in your business—and whether generative AI is worth the complexity—we should talk.
James Hickman is the founder of Rational Boxes, a digital agency serving construction, manufacturing, and engineering companies. He builds AI and automation systems that actually work.