
PMMI Podcast

How AI Can Keep Your Throughput Up and Downtime Down

November 26, 2025

In this episode, we sit down with Nick Haase, Co-Founder of MaintainX, to discuss how AI is shifting from buzzword to real, practical value on the manufacturing floor. Nick breaks down where it makes a difference, why good data and frontline involvement are essential, and how keeping people informed helps teams roll out AI safely and effectively.

Speaker

Nick Haase

Co-Founder, MaintainX

Nick Haase is the co-founder of MaintainX, a platform built to help frontline industrial teams run safer and more efficient operations. With years spent on the shop floor and advising companies on smart factories, IoT, AI, and robotics, he is passionate about modernizing work for the 80% of the global workforce underserved by software. He also invests in 50+ frontier-tech startups and hosts The Wrench Factor, a series exploring the future of maintenance, reliability, and AI-readiness.

Transcription

Sean Riley: So with all the fancy introductions out of the way, welcome to the podcast, Nick.

Nick Haase: Thanks, Sean, for having me. Excited to be here.

Sean Riley: Well, the pleasure is all ours. AI is such a hot-button topic, obviously. It's everywhere across all industries. Say I'm a manufacturer—help me decide where AI is going to add value and where it might not necessarily be worth the hassle.

Nick Haase: What we're seeing across packaging and CPG plants right now is that AI is kind of moving from this buzzword to being boringly useful. At MaintainX, we're seeing the biggest wins when teams go fix the fundamentals: clean data, clear ownership, workflows that fit real frontline operators. Because the goal isn't that robots are going to take over maintenance and operations. It's about how we help get fewer unplanned stops, safer lines, and technology that quietly automates the busy work so that we can get more throughput, improve quality, and things like that.

Sean Riley: I think you guys had some data showing that 65% of companies plan to implement AI for maintenance in the next year. What are the mistakes that people are making when they start to implement it into maintenance?

Nick Haase: Yeah, for sure. There are a few barriers we're seeing to adoption, and reasons things are struggling to get out of the pilot phase. They're not thinking about scalability—how is this going to work on a line where we maybe have less information or data? We’re seeing data quality issues. The folks that are getting around that are working toward standardizing things like asset hierarchies and failure codes, and enforcing required fields. Again, it’s some of the boring stuff that’s required to give you that foundation that will let you leverage AI.

Start small with one line, five to ten critical failure modes. And if you don't know what those are, go ask the folks on the front line. I promise they will know where the issues are, because bad data is really a people-process issue in disguise. These are foundational steps that have to be fixed before you can jump to the fun AI.

Other issues are around change management. Adoption beats accuracy—if people aren’t using it, it doesn’t really matter. You can’t have an “AI wizard” that sits in a back office and does all this for you. This has to be something that is inclusive of the folks who are actually on the tools, helping them understand how they’re going to be involved in that data loop.
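The "boring fundamentals" Nick describes—required fields plus a small, standardized set of failure codes—can be sketched as a simple validation step on incoming work-order records. This is an illustrative sketch only; the field names and failure codes below are hypothetical, not MaintainX's actual schema:

```python
# Hypothetical sketch: enforce required fields and a standard failure-code
# list on work-order records before they feed any AI or analytics layer.
# Field names and codes are illustrative, not from any real product.

REQUIRED_FIELDS = {"asset_id", "failure_code", "downtime_minutes", "description"}

# Start small: five to ten critical failure modes for one line.
STANDARD_FAILURE_CODES = {
    "JAM",     # product jam
    "SEAL",    # sealing fault
    "LABEL",   # label feed / mislabel
    "SENSOR",  # sensor fault
    "MOTOR",   # drive or motor fault
}

def validate_work_order(record: dict) -> list[str]:
    """Return a list of data-quality problems; an empty list means clean."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing required fields: {sorted(missing)}")
    code = record.get("failure_code")
    if code is not None and code not in STANDARD_FAILURE_CODES:
        problems.append(f"non-standard failure code: {code!r}")
    return problems

print(validate_work_order({"asset_id": "LINE1-CAPPER", "failure_code": "JAM",
                           "downtime_minutes": 12, "description": "cap jam"}))
# → [] (record is clean)
print(validate_work_order({"asset_id": "LINE1-CAPPER", "failure_code": "misc"}))
```

Rejecting or flagging free-text failure codes at entry time is what makes the data usable later; retrofitting structure onto years of unvalidated records is far harder.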

Sean Riley: I don't know if a lot of people consider that you need the buy-in from the people who are actually going to be utilizing this. It's not just a… And you painted a great picture that there’s not a wizard behind the curtain that’s just going to be plugged in and fix everything. There are situations where people have been recording this data for years. A thing I hear all the time now is that there’s so much data, but so much of it is not necessarily useful.

Nick Haase: Part of it is thinking about co-designing it with shift leads and making them feel like they're engaged. It’s not to say that your line manager or your maintenance manager is going to be an expert in AI—that's not the point. It’s about asking: what are some of the challenges that create frustrations or issues for you on a routine basis that we might fold into this process to see if there’s an opportunity to solve them? If you can make their life easier, a lot of frontline folks aren’t going to care if your uptime is greater. They don’t care if the company makes more money or saves—

Sean Riley: [inaudible 00:03:11].

Nick Haase: —more money. They’re like, “I get paid this much an hour, and there’s no labor shortage. You’re not going to fire me.” So you’ve got to sell them on the things they do care about, which is: “Hey, do you hate doing these things every day? Do you hate not having access to these resources and not feeling supported in these workflows?” Help them understand how their lives are going to get better from this technology. That usually gets them really excited.

It’s not as big of a lift as it sounds. It’s about making them feel like they’re part of the process and then celebrating those early wins on the floor with them. Make sure they feel like, “Because you all were able to help us with these things, it’s allowed us to collectively see these results or get some value out of this.”

And if you’re in an organization that has multiple lines or multiple plants, you can celebrate them in front of their peers and give them some recognition. Say, “Hey, these folks are helping drive and lead innovation in the business, and here’s how you can benefit from some of their lessons and efforts.” Again, it goes back to those original leadership principles before AI was as prominent.

Sean Riley: Mm-hmm.

Nick Haase: And I think people want to believe, “Hey, people are the hard part of running a business. If we could just automate and AI everything…” But at the end of the day, there's not going to be a light-switch moment anytime soon where we wake up and the people are gone and it’s all robots. You’ve got to continue to find a way to involve people in those processes to make sure they’re successful. I see a lot of folks try to skip that.

Sean Riley: Yeah, and you say that kind of tongue in cheek, but that’s another thing to make them aware of: “This is trying to make your job easier, not replace you.”

Nick Haase: Yeah. And if it’s making their job harder, reconsider what you’re trying to do, because I can tell you you’re pushing a string uphill, and it’s just not going to work.

Sean Riley: Okay, safety and reliability are crucial in modern manufacturing. So how do we design AI to make sure it's deployed in a way that strengthens safety?

Nick Haase: Yeah, it’s an important question. It’s something we think about really deeply. We’ve got an AI tool that will tell people it doesn’t know if it doesn’t have an answer—it won’t try to guess. What I think is most important as a first principle is that humans have to be in the loop for any safety-critical actions. When you’re designing any of these systems, it’s really important that you have somebody with a safety-focused mindset thinking about all the touchpoints in that loop. It’s not to say that they should block things, but they should be able to say, “Hey, maybe for this workflow there should be a human who says, ‘Yes, that’s a good assessment.’” AI can give you tremendous advantages, but there are areas where you need to think of it as decision support until you validate it over time.

Any automated action that touches anything related to safety or quality should be going through FMEAs (failure mode and effects analyses), and you should be making sure you’re doing things the right way. Put guardrails on those systems so there are role-based permissions. You don’t want someone who isn’t really familiar with the technology to be able to go in and tweak things.

If whatever your automations are doing sets off your Spidey sense, give people an outlet and opportunity to voice those concerns—and then actually do something about addressing them in a way that helps them feel heard. Maybe they’re wrong, and you can help explain why it’s actually okay. But if they’re right, make sure you celebrate that too. That helps—again, going back to that change management piece—helps them feel like they can trust these systems, and they’ll want to support them. These extra components are a bit more work up front, but the time savings and efficiency gains you should be getting from these systems more than cover it. You’re not adding busywork; you’re validating systems that are still in various experimental stages.

I can’t tell you how many customers we have with packaging lines that are “twin” systems—they bought them at the same time—and they still won’t run the same way. Treat AI similarly: you can’t just plug it in and copy-paste. Keep a little skepticism and keep that human in the loop until you validate these systems and prove them out over time. That will help build trust and validate safety.

One thing I like to ask folks is, “How long will it take us to become AI-ready?” That could be years in some scenarios. You do need to watch out for those anomalies and fluctuations and not be too laissez-faire, thinking, “Oh, it’s going to work and run on its own,” because the liability is unclear sometimes. Who’s at fault there? Probably the company that’s using it, not as much the vendor.

Sean Riley: We’ve been hyping up the positive things that could happen with this cautious approach, and I’m wondering—is there such a thing as being too cautious? Are we cheating ourselves by not taking advantage of innovation because we’re so concerned with the risk?

Nick Haase: I can’t tell you how many companies and customers I work with where their IT team doesn’t allow them to use ChatGPT. It’s blocked on the website. They have no AI or GPT policies—none of that. And I won’t rat people out, but I walk the plant and see people with their personal phones, taking pictures of stuff and asking their personal ChatGPT.

Sean Riley: Right.

Nick Haase: And people are going to start adopting these tools whether they’re endorsed by their companies or not. That’s actually a lot scarier in a number of ways. You want to have enterprise-grade security plans for your team, a place where they can put company data and know it’s not going into a free version that feeds a worldwide training model. We’re almost certainly going to start seeing scandals where sensitive information is exposed through a training model because some employee at some company is putting in data they’re not supposed to—and it’s going to spit back out somewhere it shouldn’t, if it hasn’t already.

So you want to make sure that you’re not being so conservative and restrictive that you just say, “You can’t do this.” Instead, give them the tools and encourage them to be experimental. When I have folks across different departments and divisions play with ChatGPT, they ask it to solve very different problems than I would have thought of. A lot of companies think they have a good overview of the challenges in the organization, but the questions that the frontline operator asks, or the safety person asks, or your materials-handling folks ask are all going to be different. They each provide a unique point of view that AI can help surface and respond to.

So I would say definitely encourage experimentation, and part of that responsibility comes with a little bit of training. Help them understand the dangers and precautions—beyond just “don’t use the free versions because you might leak sensitive information.” Also help them understand that AI is not always right. Sometimes it makes stuff up. It’s not gospel. They should check sources and validate. Use it as a tool to support what you’re trying to do, not as a crutch where you take everything, copy, paste, and go.

Sean Riley: This was great. This was a very fair and balanced talk for our audience about some of the things to be cautious about and some of the things to take advantage of. So I want to thank you again for taking time out of your day to come on here with us.

Nick Haase: Yeah, thank you so much for having me.