Kahveh Saramout • March 24, 2023
How to leverage foundation models in manufacturing
For several years, I worked in robotics at a large pharmaceutical manufacturing company, bringing in emerging technology to automate testing processes and manage incoming and outgoing materials. While that work ended up saving the company $50M over five years, I realized that the real problem I was solving wasn't the application of the automation assets themselves, but the data architecture that enabled them to communicate, share information with other systems, and execute tasks based on events.
Building a process like this, which takes significant capital investment, just isn't sustainable on a site-by-site basis — it needs to be a standardized architecture that gets deployed globally. This realization is no secret to those in the field — the process of applying emerging technologies, integrating them at the data level, standardizing their design, and deploying them quickly and globally is known as the fourth industrial revolution.
Today, we're standing on the precipice of the fifth: AI.
When I first used tools like Copilot, ChatGPT, Stable Diffusion, and DALL-E, I was blown away. It was immediately clear that I was witnessing a paradigm shift in technology. For the first time, I experienced a computer program that was able to infer context, follow instructions in natural language, use knowledge from other sources, and make decisions in a human-like way. These programs, known as foundation models, can be fine-tuned for a particular application and perform exceptionally well at that application with significantly less development and training than typical algorithms.
Think about all the situations in your factory where you need humans to watch a process, identify an event, and then make a decision based on SOPs or other protocols. The bulk of this work can be offloaded to large language models (LLMs). In situations where a human is still needed, LLMs can significantly improve people's access to information and guide them on how to solve a problem within the framework of their organization's SOPs, technical specifications, and other process-defining documents.
Save millions by leveraging large language models
Extracting value from LLMs is easier than you may think. In these examples, I used GPT-3.5 and GPT-4 to solve problems and make decisions in a simulated factory environment. I shared specific SOPs, tech specs, details of how to communicate with the Manufacturing Execution System (MES), the status of individual manufacturing assets, and the production schedule of multiple manufacturing sites with the LLM.
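To make this concrete, here is a minimal sketch of how that kind of plant context can be packaged into a chat prompt before it is sent to a model. The document names, alarm codes, asset states, and schedule entries below are hypothetical stand-ins, not the actual SOPs or MES data from my simulation.

```python
# Sketch: bundle SOPs, tech specs, asset status, and the production
# schedule into a single system message, followed by the user's question.
# All document contents below are illustrative placeholders.

def build_factory_prompt(sop_text, tech_spec_text, asset_status, schedule, question):
    """Assemble a chat-style message list carrying plant context plus a question."""
    context = (
        "You are an assistant for a manufacturing site. "
        "Answer using only the documents and data provided.\n\n"
        f"--- SOP ---\n{sop_text}\n\n"
        f"--- Tech spec ---\n{tech_spec_text}\n\n"
        f"--- Asset status ---\n{asset_status}\n\n"
        f"--- Production schedule ---\n{schedule}"
    )
    return [
        {"role": "system", "content": context},
        {"role": "user", "content": question},
    ]

messages = build_factory_prompt(
    sop_text="SOP-102: Clear palletizer jams before restarting the line.",
    tech_spec_text="Palletizer alarm E-14: infeed photo-eye blocked.",
    asset_status="Palletizer PAL-01: ALARM E-14, line stopped.",
    schedule="Site A: Line 1 at 100% capacity; Site B: Line 3 idle.",
    question="Palletizer PAL-01 is showing alarm E-14. What should I do?",
)
```

The resulting `messages` list is the standard chat format most LLM APIs accept, so the same context bundle can be reused across models.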
Using GPT-3.5 to help troubleshoot an alarm with a palletizer and manage communications and scheduling
Expanding on this example, I checked to see whether the LLM could help make up for a production shortfall by utilizing capacity at other sites.
Using the supplied production schedule and status of equipment at other sites, the LLM was able to correctly identify capacity at another location, potentially reducing the impact of downtime.
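The decision the LLM made here can be summarized in a few lines of deterministic logic: given a shortfall at one site, find another site whose equipment is up and whose spare capacity covers it. The site names and numbers below are invented for illustration.

```python
# Sketch of the capacity-reallocation decision: pick the first site
# that is running and has enough unscheduled capacity to absorb the
# shortfall. Sites and figures are hypothetical.

def find_backup_site(shortfall_units, sites):
    """Return the first site that is up and has enough spare capacity, else None."""
    for name, info in sites.items():
        spare = info["capacity"] - info["scheduled"]
        if info["equipment_up"] and spare >= shortfall_units:
            return name
    return None

sites = {
    "Site A": {"capacity": 10000, "scheduled": 10000, "equipment_up": True},
    "Site B": {"capacity": 12000, "scheduled": 9000, "equipment_up": True},
    "Site C": {"capacity": 8000, "scheduled": 7500, "equipment_up": False},
}

backup = find_backup_site(2000, sites)
```

The point of the LLM is that it reached this same conclusion from unstructured schedule and status documents, without anyone writing or maintaining this rule explicitly.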
Assuming a conservative reduction of three line leads across all shifts, a 12-month implementation time, and a $500,000 capital investment (primarily contractor time spent building the required data interfaces for existing systems), the financial benefits are substantial. Leaving aside the savings due to reduced downtime and automated production rescheduling (which have the potential to be massive), over a five-year period, the net present value (NPV) is estimated at $1 million, with an internal rate of return (IRR) of 66% and a gross return of 260%.
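The NPV and IRR figures come from a standard discounted-cash-flow model. Here is a minimal sketch of the two calculations; the cash flows and 10% discount rate below are illustrative placeholders, not the actual project figures.

```python
# Sketch of the two metrics behind the business case. cashflows[0] is
# the time-zero outlay (negative); later entries are annual savings.

def npv(rate, cashflows):
    """Net present value: discount each cash flow back to time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return via bisection (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative example: $500k up front, then five years of annual savings.
flows = [-500_000] + [350_000] * 5
project_npv = npv(0.10, flows)  # NPV at an assumed 10% discount rate
project_irr = irr(flows)
```

A production spreadsheet or `numpy-financial` would do the same job; the sketch just makes the arithmetic behind the headline numbers explicit.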
To take it further, I replicated this system at seven other facilities. Taking advantage of the reduced capital cost of replicating an existing system, I invested $1.4M across the seven sites. In year one, only the first POC system is online; in year two, four more facilities come online; in year three, the last two. The value of this project now sits at a very cool $5.5M five-year NPV, an IRR of 77%, and a gross return of 539%. This is how you supercharge your career — not to mention your company.
Using GPT-4 to troubleshoot an alarm
In this example, I took on the persona of a maintenance technician. GPT-4 has access to the state of the machine and our tech specs. The LLM communicates with the technician and uses its knowledge of the tech specs to help troubleshoot the alarm code. Without the use of an LLM, the technician would have had to search the document repository for the relevant tech spec and spend time reading and understanding the document. Using the LLM to automate that process gets us to a decision much faster.
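The step being automated is essentially retrieval: matching an alarm code to the relevant tech-spec passage so the model (and the technician) see only what matters. A minimal sketch, with entirely made-up alarm codes and spec text:

```python
# Sketch of the retrieval step: look up the tech-spec excerpt for an
# alarm code and combine it with live machine state. In practice this
# lookup would query a document store; here it is a hypothetical dict.

TECH_SPEC_EXCERPTS = {
    "E-14": "Alarm E-14: infeed photo-eye blocked. Clear the jam, "
            "then reset from the HMI and restart in auto mode.",
    "E-22": "Alarm E-22: low gripper vacuum. Check the vacuum line and "
            "replace worn suction cups before restarting.",
}

def troubleshooting_context(alarm_code, machine_state):
    """Build the context string a chat model would receive for this alarm."""
    excerpt = TECH_SPEC_EXCERPTS.get(alarm_code)
    if excerpt is None:
        return f"No tech-spec entry found for alarm {alarm_code}."
    return f"Machine state: {machine_state}\nTech spec: {excerpt}"

ctx = troubleshooting_context("E-14", "PAL-01 stopped, infeed jam detected")
```

Feeding only the matched excerpt to the model keeps prompts short and grounds the LLM's troubleshooting advice in the organization's own documents rather than its general training data.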
Based on my assumptions, the system implementation would take twelve months and cost $500,000 in capital investment. With a 10% productivity increase across the maintenance staff and a 20% reduction in maintenance-related downtime, we will enjoy a five-year NPV of $700,000, an IRR of 53%, and a gross return of 200%.
Once again, I replicated this project across seven sites. The enterprise-wide value is a five-year NPV of $5M, an IRR of 75%, and a gross return of 479%.
Adopting AI and foundation models
The opportunities are endless when it comes to AI and foundation models, but adopting this technology at the usual pace won’t be enough to help you maximize its value — you will need to move quickly. We are in a winner-take-all economy, so lagging behind in the adoption of emerging technologies can have severe repercussions on competitive standing. By hesitating to implement these innovations, organizations inadvertently provide competitors with an opportunity to outpace them. The good news is that we are still early, as even the most advanced manufacturing organizations are just now beginning to develop POCs.
Stay tuned next week for part two of this blog series, where we’ll discuss best practices and tips for building an AI-friendly architecture that can help you turn your unstructured data into powerful, cost-saving solutions faster than your competitors.