At the latest Made in Group monthly industry meetup, a breakout roundtable on predictive maintenance brought together a dozen manufacturing leaders to confront one of the sector’s most pressing – and often hidden – problems: the cost of downtime.
The session was kicked off by Andy Cheadle, UK Managing Director of Heinrich Georg, who set a striking tone with his reference to the £180 billion “black hole” that unplanned downtime represents across UK manufacturing. The figure prompted a round of raised eyebrows and uncomfortable nods – not just at the sheer scale, but at how many businesses still fail to quantify their own exposure.
Dave Chappell of Crompton Controls admitted that quantifying the cost of downtime is anything but straightforward. “There are so many incidental factors – opportunity cost, reputational damage, diverted labour – it’s never just the lost minutes on the machine,” he said. This view was echoed by Charlie Owen of OE Electrics, who shared findings from an internal study tracking “disturbances” to production over an 18-month period. “We discovered we’d potentially lost £2 million in revenue,” he revealed, “just by calculating hourly productivity per head not spent making parts.”
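Owen’s calculation lends itself to a back-of-envelope sketch. The Python below is purely illustrative – the headcount, rate, and hours are invented, not OE Electrics’ figures – but it shows the shape of the sum:

```python
# Illustrative back-of-envelope downtime cost, in the spirit of the
# "hourly productivity per head" calculation described above.
# All figures below are hypothetical, not OE Electrics' actual numbers.

def downtime_cost(lost_hours: float, heads: int, revenue_per_head_hour: float) -> float:
    """Revenue foregone while staff are present but not making parts."""
    return lost_hours * heads * revenue_per_head_hour

# e.g. 30 minutes of disturbance per shift, 40 operators, £55/head/hour,
# tallied across roughly 390 shifts in an 18-month, five-day-week period
per_shift = downtime_cost(lost_hours=0.5, heads=40, revenue_per_head_hour=55.0)
total = per_shift * 390
print(f"Per shift: £{per_shift:,.0f}  |  Over 18 months: £{total:,.0f}")
```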
Smarter machines and safer margins formed the next theme, as participants discussed how digitalisation and intelligent systems can help pre-empt the causes of downtime. Steve Walker from Frederick Cooper talked about deploying overflow booths so that production doesn’t halt when maintenance is underway – a tactic mirrored at Quantamatic, where Lauren Wheeley explained how using spare machine capacity allows engineers to tackle issues without shutting everything down.
Yet, despite smarter hardware, Vlad Cazan of KFactory highlighted the persistent challenge: “To measure downtime accurately is the hardest thing. The first step is visibility – understanding not just when it happened, but why, and what actions brought it back online.” He pointed to the value of IoT and AI in bridging the gap between data and decision-making. “Tech accelerates insight – but you still need the data,” he stressed.
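Cazan’s point about visibility lends itself to a concrete illustration. The sketch below – in Python, with hypothetical field names and reason codes, not KFactory’s actual schema – shows the kind of event record that turns raw stoppages into answerable questions:

```python
# A minimal sketch of downtime visibility: logging not just when a stoppage
# happened, but why, and what brought the line back online.
# Field names and reason codes are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class DowntimeEvent:
    machine_id: str
    started: datetime
    restored: datetime
    reason_code: str                                  # e.g. "BEARING_OVERHEAT"
    recovery_actions: list[str] = field(default_factory=list)

    @property
    def duration(self) -> timedelta:
        return self.restored - self.started

log = [
    DowntimeEvent("press-07", datetime(2024, 3, 4, 9, 12),
                  datetime(2024, 3, 4, 10, 43), "BEARING_OVERHEAT",
                  ["swapped spindle bearing", "re-torqued mounts"]),
    DowntimeEvent("press-07", datetime(2024, 3, 18, 14, 2),
                  datetime(2024, 3, 18, 14, 31), "MATERIAL_STARVED"),
]

# Aggregating by cause is the first step from raw timestamps to insight
print(Counter(e.reason_code for e in log).most_common())
print(sum((e.duration for e in log), timedelta()))    # total lost time
```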
Culture change was a recurring theme, especially the shift from reactive to predictive mindsets. Mariam Taha of Fluere observed that many factory teams are “trapped in firefighting mode,” with little time or space for long-term fixes. Tommy Fisher of Lean Controls added that any transition must be grounded in strategy. “You’ve got to get the data upstream into the process,” he said. “Too often, companies stall because they don’t have a clear implementation plan.”
He noted how even basic data like energy usage can mean different things to different stakeholders. “Engineers might want kilowatts and volts; finance wants pounds and pence. Bridging that language gap is part of the process.”
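That translation can be as simple as a tariff multiplication. The snippet below is a hedged illustration – the unit rate and usage figure are assumptions – of presenting one meter reading in both languages:

```python
# Bridging the "language gap": the same meter reading expressed for
# engineers (kWh) and for finance (£). The tariff is a hypothetical unit rate.
TARIFF_GBP_PER_KWH = 0.28

def as_engineering(kwh: float) -> str:
    return f"{kwh:,.1f} kWh"

def as_finance(kwh: float) -> str:
    return f"£{kwh * TARIFF_GBP_PER_KWH:,.2f}"

monthly_usage_kwh = 42_000.0  # illustrative figure
print(as_engineering(monthly_usage_kwh), "->", as_finance(monthly_usage_kwh))
```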
Lauren from Quantamatic recalled operational bottlenecks in previous roles where huge peaks and troughs in output – particularly at month-end – were driven by weeks of silent downtime beforehand. “It looked like we were efficient,” she said, “but it was chaos under the surface.”
The session closed with a pointed question from Andrew Neilson of BCU, who asked what finally triggers the shift from firefighting to foresight. For Andy Cheadle, it came down to making the business case. “You start with what you know,” he said, “and work from basic monitoring to prevention. But it’s expensive – most companies only change when the pain of doing nothing becomes too much.”
As the roundtable dispersed, one thing was clear: predictive maintenance isn’t just a technical challenge – it’s a cultural one. And for those willing to take the leap, the cost of acting early may soon look far smaller than the pain of doing nothing.