It’s safe to say that right now AI is at the top of every major pharmaceutical company’s strategy list – whether they are focusing on using it for discovering new molecules, streamlining admin procedures, personalising sales and marketing approaches, or optimising manufacturing processes. According to the McKinsey Global Institute (MGI), generative AI alone could generate $60bn to $110bn a year in economic value for the pharma and medical product industries by boosting productivity, speeding up development and approval, and improving the way drugs are marketed.

But that’s not the type of AI that holds the greatest promise for enhancing and streamlining quality control processes to ultimately get high-quality drugs to patients faster. “A lot of quality control tests are basically about passing thresholds, so the most impactful AI for quality control is predictive models, which are based on either image or numerical-based classification,” explains Dave Latshaw, chief executive and scientific officer at BioPhy, which provides practical AI solutions to the industry – from early-stage biotech to large pharmaceutical companies – to identify and accelerate drug development.
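To make Latshaw’s point concrete, the sketch below shows what a numerical-based classifier for a pass/fail QC threshold might look like in miniature. Everything here is an assumption for illustration – the data is synthetic, the feature names (assay purity, moisture) are invented, and a simple nearest-centroid rule stands in for whatever model a manufacturer would actually validate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: [assay purity %, moisture %] per historical batch.
passing = rng.normal([99.0, 0.5], [0.3, 0.1], size=(50, 2))
failing = rng.normal([97.0, 1.5], [0.5, 0.3], size=(50, 2))

X = np.vstack([passing, failing])
y = np.array([1] * 50 + [0] * 50)  # 1 = meets spec, 0 = out of spec

# Nearest-centroid classifier: a new batch is assigned the class whose
# historical mean profile it most closely resembles.
centroids = {label: X[y == label].mean(axis=0) for label in (0, 1)}

def predict(batch):
    dists = {label: np.linalg.norm(batch - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)

print(predict(np.array([98.9, 0.6])))  # resembles the passing profile
print(predict(np.array([96.8, 1.6])))  # resembles the failing profile
```

In production, the prediction would feed a threshold decision rather than replace the release test outright – the staged, human-in-the-loop rollout discussed later in this article.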

Niranjan Kulkarni is senior director of consulting services at CRB, a full-service facility design, engineering, construction and consulting firm for the life sciences and food and beverage industries that works closely with pharmaceutical companies to help them implement Pharma 4.0 systems. He says it isn’t a stretch to assume that, with the integration of AI, most quality-control testing could be done on the production line itself, enabling real-time release testing. “This can improve lead times and can also reduce the need for lab resources such as people, equipment, utilities and space,” he notes.

When a batch of material is produced, it must undergo a series of QC tests to ensure it meets the required standards. These tests can take varying amounts of time, and they often need to be performed at different stages in the process, which can complicate the scheduling. By using advanced models and technologies, it’s possible to streamline the testing process, reducing the overall time needed to release the batch. This not only improves the accuracy of the tests but also speeds up the entire production process, allowing the material to reach the patient more quickly. “Companies are always looking to push that metric to be as fast as possible, and technology really is one of the ways to do it,” Latshaw says.

AI can also help predict the likelihood of a batch meeting quality specifications or any anomalies or deviations from an expected outcome, adds Kulkarni. “This approach can further reduce test turnaround times or even eliminate certain tests,” he says.
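The deviation prediction Kulkarni describes can be as simple as scoring new in-process readings against the historical distribution of good batches. The sketch below is a deliberately minimal stand-in – synthetic pH data, an assumed three-sigma rule – for what would in practice be a validated multivariate model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Historical in-process readings (e.g. pH) from batches that met spec.
history = rng.normal(7.0, 0.05, size=200)
mean, std = history.mean(), history.std()

def deviation_score(reading):
    """Standard score of a new reading against the historical baseline."""
    return abs(reading - mean) / std

def is_anomalous(reading, threshold=3.0):
    # A reading more than `threshold` standard deviations from the
    # historical mean is flagged for investigation before final QC.
    return deviation_score(reading) > threshold

print(is_anomalous(7.02))  # within normal variation
print(is_anomalous(7.40))  # far outside the historical range
```

Flagging drift mid-run, rather than discovering it at release testing, is what lets this approach shorten or eliminate downstream tests.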

AI for QC: two impactful applications

When manufacturing biologics, such as antibodies, one of the biggest concerns is contamination. If microbes enter the batch during production, the entire batch becomes unusable. This not only results in the loss of millions of dollars worth of material but also means that a potential therapy for a patient is wasted – all due to a tiny error. As Latshaw explains, AI models can help identify potential contamination issues before they become a problem. “These models have been shown to outperform humans by days,” he says. By catching contamination early, it’s possible to separate the affected material from the rest, ensuring that only uncontaminated batches continue through the production process. This early detection and segregation is crucial because even a few days of delay in identifying contamination could lead to a catastrophic loss of valuable material.
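The decision flow behind early contamination detection can be sketched as below. Real systems use trained deep-learning classifiers on microscopy or camera images; here a synthetic turbidity-like feature (mean pixel intensity of an invented frame) and an assumed limit stand in for the model, purely to show the quarantine-early logic.

```python
import numpy as np

rng = np.random.default_rng(2)

def capture_image(contaminated):
    """Synthetic 32x32 grayscale frame; microbial growth raises turbidity."""
    base = 0.6 if contaminated else 0.2
    return np.clip(rng.normal(base, 0.05, size=(32, 32)), 0.0, 1.0)

def turbidity(image):
    return float(image.mean())

# Limit calibrated (here, simply assumed) from historical clean batches.
TURBIDITY_LIMIT = 0.4

def screen_batch(image):
    # Quarantine the affected material as soon as the signal crosses the
    # limit, days before a plate-based sterility test would report.
    return "quarantine" if turbidity(image) > TURBIDITY_LIMIT else "continue"

print(screen_batch(capture_image(contaminated=False)))
print(screen_batch(capture_image(contaminated=True)))
```

The value is in the timing: a signal that trips days earlier than a human reviewer is what allows contaminated material to be segregated before it propagates through the process.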

Another QC test that could benefit from the integration of AI, according to Kulkarni, is dissolution, which measures the extent and rate of solution formation of tablets, capsules, ointments, and so on. “In a traditional setting, this test requires specialised equipment (and in some cases specialised lighting for photo-sensitive products),” he notes. “It can take anywhere from a few minutes to a few hours to complete this test and requires a trained analyst to monitor the test and document the results.

“AI techniques such as artificial neural networks (ANNs) or support vector machines (SVMs) take all the historical data, predict dissolution profiles and document results – all without even running the test.”
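The workflow Kulkarni describes – train on historical test records, then predict a dissolution result without running the physical test – can be illustrated in miniature. He cites ANNs and SVMs; to stay self-contained, this sketch substitutes ordinary least squares on entirely invented data (compression force and disintegrant level as features, % dissolved at 30 minutes as target), but the train-then-predict pattern is the same.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic historical dissolution tests. The relationship is invented:
# higher compression force slows dissolution, more disintegrant speeds it.
n = 100
force = rng.uniform(5, 15, n)          # compression force, kN
disintegrant = rng.uniform(1, 5, n)    # disintegrant level, % w/w
dissolved = 90 - 2.0 * force + 4.0 * disintegrant + rng.normal(0, 1.0, n)

# Fit: least-squares stand-in for the ANN/SVM an analyst would validate.
X = np.column_stack([force, disintegrant, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, dissolved, rcond=None)

def predict_dissolution(force_kn, disintegrant_pct):
    """Predicted % dissolved at 30 min for an untested formulation."""
    return float(np.array([force_kn, disintegrant_pct, 1.0]) @ coef)

print(round(predict_dissolution(10.0, 3.0), 1))
```

In a regulated setting the physical test would still be run whenever a prediction sits near a specification boundary – consistent with the staged rollout Latshaw describes below.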

Collaborative approach to AI implementation

One major challenge for pharmaceutical companies is that where quality measurements are concerned, it’s far from a case of, as Latshaw puts it, ‘slapping a model on it, getting rid of a person and relying on the model’. “Regulators are not particularly fond of anyone attempting to do that,” he laughs.

To successfully introduce AI models into any QC process, it’s crucial for companies to work closely with regulators early on, ensuring they understand and approve the approach, particularly if there is no industry precedent. “They will then help the company develop the approach in a way that makes it acceptable to regulators,” Latshaw explains. “That’s how you step up a process towards being able to actually use it in a production setting.”

When developing an AI-assisted QC process, the approach also needs to be staged, with the AI models becoming more involved over time. The best way to start, according to Latshaw, is to have the model sit side by side with the person who usually executes that test.

“You utilise the model as part of that process but the human still has final sign-off. That enables you to understand the model’s capabilities – maybe it meets your needs, maybe it doesn’t. If it doesn’t, you can scrap it or figure out a way to improve it, and if it does, you can try to move forward and integrate it more deeply into the process, with the ultimate goal of having a model replace a physical test or a human-based test completely.”

At present, this kind of iterative, collaborative approach with regulators is essential: the technology and its application in this setting are so new that there is still no official guidance on how it can be implemented in particular areas.

“These things usually start as a push from industry where everybody starts saying ‘we want to do this’, regulators work with industry and eventually when it becomes clear that this is a critical mass and it’s going to be the standard for the future, it falls to them to create policy guidance and standards as to how it should be done,” Latshaw explains.

In 2023, the FDA released a discussion paper on AI in drug manufacturing, and industry experts expect more clarity to materialise in the near future on how to carry out particular AI implementations and which models can be used for specific processes.

How good is your data?

Identifying use cases is critical. However, according to Ryan Thompson, a senior specialist in Industry 4.0 at CRB Group, that is only the first step. Just as important is ensuring you have good sources of data to train AI systems, and that those systems can be connected to IT infrastructure. “It’s the only way to ensure you will have good results,” he stresses.

For Kulkarni, the first question to ask about data is: do we have enough? “AI models need large volumes of data for training. So, if I am in the business of rare diseases, we may have limited data to train and test with, in which case AI may not be the best option,” he notes. Second, is it of high enough quality? “The effectiveness and correctness of AI models depend upon the quality of the data used to train them. When data are biased or incomplete, the results may also be biased, or the predictions imprecise – garbage in, garbage out,” Kulkarni stresses.

Companies also need to ask themselves whether they expect to change the product mix or its characteristics significantly and frequently. “If yes, it should be noted that once you train an AI model, it is often challenging to incorporate new data or update the model,” Kulkarni says. “And if new test methods are developed, would they deliver faster testing times than the AI?”

Finally, not only does a pharmaceutical manufacturer need to consider the data surrounding the specific process they are hoping to enhance; the wider data context is also critical. “My top piece of advice is having a data strategy to contextualise manufacturing data with laboratory data,” Thompson emphasises. “Linking process data with assay results needs to be considered before AI can be implemented or scaled. You must look to break down silos in your data.”
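Thompson’s advice – contextualising manufacturing data with laboratory data – boils down to joining records from separate silos on a shared batch identifier. The sketch below shows that join with invented field names and two hypothetical sources (a manufacturing historian and a LIMS); any resemblance to a real schema is an assumption.

```python
process_records = [  # from the manufacturing historian
    {"batch_id": "B001", "avg_temp_c": 37.1, "run_hours": 72},
    {"batch_id": "B002", "avg_temp_c": 36.4, "run_hours": 70},
]
lab_results = [  # from the LIMS, stored in a separate silo
    {"batch_id": "B001", "purity_pct": 99.2},
    {"batch_id": "B002", "purity_pct": 97.8},
]

# Index assay results by batch for the join.
assays = {row["batch_id"]: row for row in lab_results}

# One contextualised record per batch: process conditions + assay outcome.
# This joined view is what a QC model would actually train on.
linked = [
    {**proc, **assays[proc["batch_id"]]}
    for proc in process_records
    if proc["batch_id"] in assays
]

for row in linked:
    print(row["batch_id"], row["avg_temp_c"], row["purity_pct"])
```

In practice this join happens in a data platform rather than application code, but the principle – process data and assay results keyed together before any model is trained – is the silo-breaking Thompson describes.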

Latshaw agrees and says broader data integration across an organisation is the only way to get the true benefits of AI. “Solving specific problems is definitely going to be valuable, but what’s even more valuable, particularly for multinational companies, is figuring out how to make that data work with decision-making and other pieces of the organisation. Undoubtedly, the information that’s housed within QC is actually a part of decisions that are made everywhere else and most of the time you can’t get at it,” he explains. “So how do you actually solve that problem and how do you make it available at scale for the company and improve that decision making? That’s what we’re really interested in at BioPhy. And I think that hopefully, we’ll see a lot more of that because I think it’ll be really helpful to everybody in this space.”

Beyond implementation: the challenge of change management

The technical, data and regulatory aspects of integrating AI into the QC process may sound complex; however, for Latshaw, it is the change management piece that is actually likely to prove the most challenging, particularly for legacy companies.

“A lot of them have been operating for such a long time that if you’re going to introduce a new way of working or doing business, it requires a lot of education, socialisation and buy-in. It’s not just about implementing the technology; it’s about actually getting people to use it, and that is often much harder. This is actually a benefit for newer companies, which are able to set their processes up from scratch to take advantage of this technology,” he says.

There is also a fair bit of upskilling that needs to be done in the QC area to allow companies to take advantage of the latest developments in AI. Generally, companies will have subject matter experts – in this case, QC specialists – and also a team of more generalist data scientists with computer science or statistics backgrounds. “But the two don’t communicate in the same language so there are often things that get lost in implementation,” Latshaw says.

He always encourages subject matter experts to get exposure to data science and machine learning because by doing so, they can make themselves very valuable. “There’s probably no other discipline that is as extensively documented, shared and publicly developed as machine learning and AI at this time,” he notes. “The resources are there, it’s just whether these people have the interest and time to pursue that and upskill themselves.”

Companies that encourage this approach stand the greatest chance of reaping the benefits AI could have on quality control and, ultimately, accelerating speed to market. But they would also be wise to take heed of Kulkarni’s final word of warning. “Be patient,” he advises. “Rome was not built in a day – and neither will your AI models be.”