Automation carries a reputation for transforming manufacturing operations — and it does, when it's done well. But the honest story is that a significant share of automation projects either fail outright or dramatically underperform their original business case. The technology rarely deserves the blame. Robots work. PLCs work. Vision systems work. What breaks down is everything surrounding the technology: the process it's meant to support, the systems it needs to connect to, and the people who are supposed to work alongside it.
In the previous post in this series, we walked through the Case Study: Our Digital Print Shop Cobot Cell — a real project where several of these challenges showed up in practice. This post generalizes those lessons into a framework you can apply before your next automation project begins.
How Often Does Automation Underperform?
Industry research consistently finds that 30–50% of automation projects fail to achieve their stated ROI within the original timeline. The most cited causes are not hardware failures — they are scope creep, poor process readiness, integration surprises, and operator resistance. Getting these four factors right matters more than picking the right robot brand.
We'll walk through the four failure patterns we see most often — over-automation, skipping process standardization, underestimating integration complexity, and poor change management — and give you a concrete prevention framework to work through before you commit to a project.
Over-Automation: Trying to Automate Too Much at Once

The most seductive failure in automation is trying to automate too much at once. A manufacturer sees a messy, labour-intensive process and decides automation is the answer — but instead of simplifying the process first, they attempt to automate every step of it, including all the variation, exceptions, and manual judgement calls baked into the existing workflow.
The result is a system with a combinatorial explosion of states. Every product variant needs its own program. Every upstream inconsistency — a part that's slightly off-spec, a tray that's loaded in the wrong orientation — becomes an edge case the automation must handle. What started as a straightforward pick-and-place cell ends up requiring custom vision algorithms, multiple end-of-arm tools, and a control system that no one on the team fully understands.
Over-automation tends to emerge from the requirements phase. The initial scope is reasonable, but then stakeholders start adding: "Can it also handle the 200mm version?" "What about rejected parts — can the robot sort those too?" Each addition sounds small, but the complexity compounds. A system that handles two product variants is roughly twice as complex as one. A system that handles six variants and three reject modes might be ten times as complex, and nowhere near ten times as valuable.
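A rough way to see how the complexity compounds is to treat each combination of product variant and exception mode as a distinct program path the cell must be programmed, tested, and maintained for. The variant and mode names below are invented for illustration; a minimal sketch:

```python
from itertools import product

def program_paths(variants, exception_modes):
    """Count the distinct (variant, exception mode) combinations the
    cell must handle -- a crude proxy for programming and test effort."""
    return len(list(product(variants, exception_modes)))

# Two variants, happy path only: 2 paths to program and test.
baseline = program_paths(["150mm", "200mm"], ["nominal"])

# Six variants, each also needing three reject modes on top of the
# nominal flow: 6 x 4 = 24 paths -- twelve times the baseline.
expanded = program_paths(
    ["150mm", "200mm", "250mm", "300mm", "350mm", "400mm"],
    ["nominal", "off-spec part", "wrong orientation", "reject sort"],
)
print(baseline, expanded)  # 2 24
```

The model is deliberately crude, but it makes the scoping conversation concrete: each "small" addition multiplies, rather than adds to, the number of paths someone has to write and test.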
| Automation Level | Best Suited For | Risk |
| --- | --- | --- |
| Manual | High-mix, low-volume, high judgement tasks | None — but labour-dependent |
| Semi-automated | Moderate volume, some variation, operator in loop | Low — operator handles exceptions |
| Fully automated | High-volume, low-mix, well-defined process | High if process has unresolved variation |
The discipline to resist scope expansion during project design is one of the most valuable skills in automation engineering. The right question to ask at every scope addition is: "Does this variant justify the added complexity, or should it remain a manual operation?"
Skipping Process Standardization

Before you automate a process, you need to understand it well enough to define it completely. A robot can't tolerate ambiguity. A skilled operator working manually can adjust for a part that's slightly off-centre, a bin that's not quite level, or a pallet that arrived in a non-standard orientation. A robot will either fail, fault, or — worse — continue operating incorrectly without anyone noticing.
The underlying principle is straightforward: automation locks in your process. If your process has variance and exceptions baked into it, your automation will inherit those problems and make them harder to manage, not easier. The right sequence is always: standardize first, then automate.
What does standardization look like in practice? It means defining exactly what "correct" looks like for every input the system will receive. Incoming part geometry — tolerances, orientation, cleanliness. Fixture and pallet configurations — are they loaded consistently, every time? Upstream process outputs — is the part the robot will pick always in the same condition when it arrives?
Automating a Chaotic Process Makes It Worse
If your manual process relies on operator judgement to compensate for upstream inconsistency — parts arriving in random orientations, variable material quality, inconsistent fixture loading — automation will expose every one of those inconsistencies simultaneously and at production speed. Standardize the upstream process before the robot arrives, or budget for the rework when the robot stops.
A useful test before committing to an automation project is to document the process at the level of detail a robot program would require. If you can't write down exactly what the robot needs to do in every situation — including every exception — the process isn't ready to automate. That documentation exercise often reveals that the process needs a month of process engineering work before a single line of robot code gets written.
Underestimating Integration Complexity

Automation vendors are very good at demonstrating their products in ideal conditions: clean demo floors, simple I/O handshakes, one robot, one conveyor, one PLC. Real manufacturing environments are messier. You have equipment from multiple vendors, communication protocols that predate modern networking, legacy PLCs with custom ladder logic that no one on the current team wrote, and ERP systems that the IT department controls and the automation team is not allowed to touch.
Integration is where projects go over time and over budget more than anywhere else. The robot itself might take four weeks to commission. The integration — getting the robot to talk to the upstream conveyor controller, getting the vision system to feed part data to the MES, getting the safety system to interlock correctly with three different machines — might take four more.
Common integration failure points include:
- **Protocol mismatches:** equipment using Modbus, PROFINET, DeviceNet, and EtherNet/IP all on the same line, requiring gateway hardware and translation layers that introduce latency and failure modes
- **Timing and synchronization:** a robot that assumes a conveyor has stopped when it hasn't, or a vision system that captures an image before the part has fully settled
- **Data format inconsistencies:** a PLC sending a part number as a 16-bit integer that the MES expects as a string, requiring middleware that nobody planned for
- **IT/OT boundary friction:** automation engineers who need network access to commission equipment hitting security policies that take weeks to resolve
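The part-number mismatch above is typical of the small translation layers that end up living in middleware. A minimal sketch, assuming the PLC packs two ASCII characters per 16-bit register, high byte first — a common convention, but one you must verify against the actual PLC program before relying on it:

```python
def registers_to_part_number(registers):
    """Decode 16-bit holding registers into an ASCII part-number string.

    Assumes two ASCII characters per register, high byte first, with
    NUL bytes as padding. Byte order varies by PLC vendor and program:
    confirm against the real device before trusting this convention.
    """
    chars = []
    for reg in registers:
        for byte in ((reg >> 8) & 0xFF, reg & 0xFF):
            if byte:  # skip NUL padding
                chars.append(chr(byte))
    return "".join(chars)

# Hypothetical register dump from the PLC:
print(registers_to_part_number([0x5041, 0x2D37, 0x3832]))  # PA-782
```

Ten lines of translation code is cheap; discovering during commissioning that nobody owns it, tests it, or planned for it is not. Auditing data formats up front is what puts this on the project plan instead of the punch list.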
Budget Double the Integration Time You Think You Need
Integration time is almost universally underestimated during project scoping. A good rule of thumb: if your equipment commissioning estimate is four weeks, budget six to eight weeks for full integration, testing, and operator handover. Surprises at the integration layer are not a sign of poor engineering — they're a predictable feature of connecting real systems. Plan for them.
The mitigation is early discovery. Before finalizing a project scope, audit the communication protocols, data formats, and access permissions for every system the automation will touch. Build that discovery into the project timeline as a formal phase, not an assumption.
Poor Change Management: The Human Failure Mode

Technical failures get most of the attention in automation post-mortems, but the failure mode that's hardest to recover from is human: operators who don't trust the system, supervisors who route work around it, and a shop floor culture that treats the new automation as a threat rather than a tool.
Operator resistance rarely comes from malice. It comes from exclusion. When automation is designed by engineers and presented to operators as a finished fact, operators have no stake in making it succeed. They've had no input into the design, no opportunity to flag the process exceptions that only they know about, and no reason to believe the system will handle their job correctly. When the automation stumbles — and it will stumble during the first weeks of operation — operators who were never bought in will default to "I told you so" rather than "let me help troubleshoot this."
The operators who run a manual process every day are your most valuable source of process knowledge during the automation design phase. They know about the Thursday afternoon parts that arrive from the supplier slightly warped. They know that the first cycle after a shift change has a higher reject rate. They know which product variants cause problems. Excluding them from the design process means you'll discover all of that knowledge the hard way, during commissioning.
Involve Operators Before the Robot Arrives
Bring operators into the automation design process early — not as approvers of a finished design, but as contributors to the requirements. Ask them what the edge cases are. Ask them what they're worried about. Ask them what would make the cell easier to run. You'll get better requirements, fewer commissioning surprises, and operators who feel ownership over the system rather than resentment toward it.
Change management also means clear communication about what the automation is and isn't replacing. Ambiguity about job security creates resistance even among operators who would otherwise be supportive. Be direct about what changes, what doesn't, and what new roles the automation creates.
The Pre-Project Prevention Checklist

The four failures above share a common theme: they're all discoverable before the project starts, if you ask the right questions. Here is a pre-project checklist to work through before committing to an automation scope.
1. **Scope the minimum viable automation.** Define the smallest version of the automation that delivers meaningful value. Resist adding variants and edge cases until the core process is running reliably. You can always expand scope in a Phase 2 — but you can't easily subtract complexity from a system that's already been built.
2. **Audit the process for standardization gaps.** Walk the current process and document every place where operator judgement compensates for upstream variation. Each of those gaps is a risk to your automation. Decide which ones you'll standardize before automating and which ones you'll explicitly leave as manual operations.
3. **Inventory every integration point.** List every system the automation will communicate with. For each one, identify the communication protocol, the data formats, the access permissions required, and who controls that system. Flag any gaps as project risks before signing a scope of work.
4. **Engage operators in requirements.** Schedule working sessions with the operators who currently run the process before design begins. Capture the edge cases, exceptions, and informal workarounds they rely on. Feed that knowledge directly into your automation requirements.
5. **Define go/no-go criteria before kickoff.** Agree on what "success" looks like at each project milestone. What cycle time, uptime, and quality metrics does the cell need to hit before it goes into production? Having these defined in advance prevents the goalposts from moving during commissioning, and gives everyone a shared definition of done.
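The go/no-go idea in that last step can be encoded directly, so the gate is a shared artifact rather than a conversation nobody wrote down. The metric names and thresholds below are placeholders, not recommendations; agree on your own numbers before kickoff:

```python
# Illustrative acceptance thresholds -- replace with the numbers your
# team actually agrees on before the project starts.
ACCEPTANCE = {
    "cycle_time_s":         ("max", 12.0),   # seconds per part, at most
    "uptime_pct":           ("min", 92.0),   # percent, at least
    "first_pass_yield_pct": ("min", 98.5),   # percent, at least
}

def milestone_gate(measured):
    """Return (go, failed_metrics) for a dict of measured metrics."""
    failed = []
    for metric, (kind, limit) in ACCEPTANCE.items():
        value = measured[metric]
        ok = value <= limit if kind == "max" else value >= limit
        if not ok:
            failed.append(metric)
    return (not failed, failed)

go, failed = milestone_gate(
    {"cycle_time_s": 11.4, "uptime_pct": 90.2, "first_pass_yield_pct": 99.1}
)
print(go, failed)  # False ['uptime_pct']
```

Writing the gate down this way keeps the goalposts fixed: either the measured numbers pass the agreed thresholds or they don't, and everyone can see which metric is holding up production release.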
This checklist won't prevent every problem — integration surprises and process exceptions will still show up. But working through it systematically reduces the probability of the most costly failures and surfaces risk early enough to manage it.
Red Flags to Watch For

Some warning signs appear during the project proposal or early design phase that should prompt a harder look at the scope before money is committed.
| Red Flag | What It Often Means |
| --- | --- |
| "We'll handle exceptions manually for now" | The process isn't standardized; exceptions will become the norm |
| "The vendor says integration is straightforward" | Nobody has audited the actual systems yet |
| "We don't need to involve the operators yet" | Buy-in will be a problem at launch |
| "We can add that variant in Phase 1" | Scope is already growing before design is finished |
| "IT will sort out the network access" | Integration dependency with no defined owner or timeline |
| "The ROI looks great at 95% uptime" | Uptime assumptions have not been stress-tested against real process variation |
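That last red flag is easy to quantify. With hypothetical numbers, a $400k cell that would save $250k per year at 100% uptime, a simple payback model shows how sensitive the business case is to the uptime assumption:

```python
def payback_years(investment, annual_savings_at_full_uptime, uptime):
    """Simple payback model: savings assumed to scale linearly with uptime.
    Illustrative only -- a real model would also include maintenance,
    scrap, changeover losses, and labour redeployment."""
    return investment / (annual_savings_at_full_uptime * uptime)

# Hypothetical numbers: $400k invested, $250k/yr savings at full uptime.
for uptime in (0.95, 0.85, 0.75):
    print(f"{uptime:.0%}: {payback_years(400_000, 250_000, uptime):.2f} years")
# 95%: 1.68 years
# 85%: 1.88 years
# 75%: 2.13 years
```

A cell that spends its first months at 75% uptime, which is common while process variation is being worked out, pushes payback out by roughly half a year in this toy model. Stress-testing the ROI at realistic uptime levels before committing is cheap insurance.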
The counterpart to the red flags is a short list of positive signals that suggest a project is well-positioned:
- The process runs consistently with low variation today, even manually
- Operators have been part of the requirements conversation
- Integration protocols have been confirmed with the systems teams
- The scope is constrained to one product family or one operation
- Success metrics are agreed upon and written down
A project that hits most of those positive signals is not guaranteed to succeed, but it has a much higher probability of delivering on its business case.
This post has focused on what can go wrong and how to prevent it. The next post takes a forward-looking perspective: once you have automation running reliably, how do you design it to stay useful as your products and production volumes evolve? In Future-Proofing: Building Flexible Automation Systems, we'll cover modular design principles, standard communication protocols, and the architectural decisions that determine whether your automation ages gracefully or becomes a liability.
If you're still in the assessment phase of your automation journey, the Is Your Operation Ready for Automation? Self-Assessment from earlier in this series is a structured way to evaluate your process readiness before committing to a project.
Key Takeaways

- **Over-automation is a design choice, not an accident.** Scope creep during requirements turns manageable projects into unmaintainable ones. Define the minimum viable automation first.
- **Standardize before you automate.** A process that relies on operator judgement to compensate for variation will break an automation system. The upstream process needs to be stable before the robot arrives.
- **Integration takes longer than commissioning.** Audit every communication protocol, data format, and access permission before the project starts. Budget integration time accordingly.
- **Operators are a design resource, not an audience.** Their process knowledge is essential to a complete requirements set. Involve them before design begins, not after it's finished.
- **Prevention starts with the right questions.** Most automation failures are discoverable during scoping. A pre-project checklist — scope, process readiness, integration audit, operator engagement, success metrics — surfaces risk before it becomes cost.
You Now Know How to Avoid the Most Common Automation Pitfalls
With the four failure patterns and the prevention framework from this post, you can approach your next automation project with a structured risk assessment rather than optimistic assumptions. The goal isn't to find reasons not to automate — it's to identify the gaps that need closing before the project starts, so your investment delivers what the business case promised.