The Punctuated Evolution of Computer Software Validation

Pharmaceutical Technology, February 2023, Volume 47, Issue 2
Pages: 32-33

Shifting guidance and the growing prominence of computer software assurance exemplify the state of computer software validation.

The exact state of computer software validation (CSV) can be tricky to pin down. While industry insiders may describe it as slow to change, sorting through the field's complex procedures, approaches, and technologies to develop a truly validated process can be mystifying for those on the outside looking in.

To remedy this ambiguity, Pharmaceutical Technology spoke with G. Raymond Miller, PhD, course director for Computer System Validation at The Center for Professional Innovation and Education. Miller spoke on various topics, including the rise of computer software assurance (CSA), how to contextualize new draft guidances from regulatory authorities, and common CSV pain points.

Embracing alternative approaches

PharmTech: FDA recently put together a draft guidance on CSA that describes a new validation approach. How does this work, where does it differ from historical approaches, and why is FDA implementing it?

Miller: First, CSA is not replacing computer validation; it’s an alternative approach to the process that has evolved since the mid-1980s. For decades, CSV has been an effort in good documentation practice. The outcome has too often been perfect documentation that has little to do with correct system operation and ends up locked away in the event an investigator asks for it. The intent of CSA is to focus more on challenging system operations the way we intend to use them, not producing piles of beautiful but meaningless documentation.

As FDA’s General Principles of Software Validation points out (1), software doesn’t wear out; given the same inputs, it will supply the same outputs. The results of computerized operations vary only when the inputs change. We don’t need to run a test multiple times with the same inputs; we need to inject a variety of inputs intended to generate errors if we can. We want to break the system during testing, not in production, where errors might harm the public directly or indirectly through impacts such as business disruption. CSA represents current thinking about how to make the CSV process value-added.
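To make the idea concrete, here is a minimal sketch in Python using pytest. The parse_dose function and its 0-1000 mg limits are hypothetical stand-ins for any regulated calculation; the point is that one nominal check suffices for deterministic software, while the testing value comes from a battery of inputs chosen to provoke failures.

```python
# A minimal sketch of input-variation (negative) testing with pytest.
# parse_dose and its 0-1000 mg limits are hypothetical stand-ins for
# any regulated calculation.
import pytest

def parse_dose(raw: str) -> float:
    """Parse a dose entry in mg; reject values outside 0-1000 mg."""
    value = float(raw)  # raises ValueError on non-numeric input
    if not 0 < value <= 1000:
        raise ValueError(f"dose out of range: {value}")
    return value

# Deterministic software gives the same output for the same input,
# so a single nominal check is enough ...
def test_nominal_dose():
    assert parse_dose("250") == 250.0

# ... the value comes from injecting inputs intended to generate errors.
@pytest.mark.parametrize("raw", ["-5", "0", "1001", "abc", "", "NaN"])
def test_error_inducing_inputs(raw):
    with pytest.raises(ValueError):
        parse_dose(raw)
```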

With regard to CSA’s changes to testing, I’m still trying to wrap my head around testing without reviewable evidence. My concern isn’t a perceived need for proof; it’s overcoming the human tendency to take the easy way out. Say it’s Friday afternoon, and we’re tired and ready to head out for the weekend. We might be a little less diligent about noticing and investigating a discrepancy that occurs during testing, especially if we aren’t required to produce documented evidence. I’ve seen this happen too many times over the years. If we must document evidence and sign for our actions, we can’t bypass procedures and ignore those discrepancies. How can we focus more on testing and yet still ensure due diligence?
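One possible way to square that circle, offered purely as an illustration: when testing is automated, the harness itself can capture reviewable evidence, so diligence does not depend on Friday-afternoon willpower. A minimal sketch, assuming a pytest-based suite; the log file name and record fields are illustrative assumptions.

```python
# conftest.py: a minimal sketch of automated evidence capture for a
# pytest suite. The file name and fields are illustrative assumptions.
import datetime
import getpass
import json

AUDIT_LOG = "test_evidence.jsonl"

def pytest_runtest_logreport(report):
    """Append one evidence record per executed test.

    Every outcome, including failures, is written to the log as it
    happens, so discrepancies cannot be silently ignored and a
    reviewer can inspect the full record later.
    """
    if report.when == "call":  # record the test body, not setup/teardown
        entry = {
            "test": report.nodeid,
            "outcome": report.outcome,  # "passed", "failed", or "skipped"
            "tester": getpass.getuser(),
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "duration_s": round(report.duration, 3),
        }
        with open(AUDIT_LOG, "a") as fh:
            fh.write(json.dumps(entry) + "\n")
```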

Guidance leading the way?

PharmTech: The Center for Devices and Radiological Health (CDRH) also put together a list of planned guidances for 2023 covering computer validation for various devices (2). Broadly speaking, what are some of the key changes we can expect out of these guidances? How are companies preparing for them?

Miller: My interests are in the CSV process itself; I see it as just good IT [information technology] business practice, so it really applies across the board. The guidance in CDRH’s list that stands out to me is Cybersecurity in Medical Devices. I see that as a documented awareness of the serious impact cybersecurity has on patient safety.

But I also think the current thinking has broader implications for CSV in general: increased emphasis on hazard analysis, risk mitigation, and scenario testing around cybersecurity and data integrity. The ‘intended functions’ identified for validation should include functions intended to mitigate hazards through risk management, with more robust testing to match. These are just good IT business practices. Companies should be proactively building appropriately controlled systems, not waiting for guidance documents to be finalized before planning actions.

Contextualizing industry shifts

PharmTech: Would you characterize computer validation as a segment of the pharmaceutical industry that is regularly seeing significant changes in technology or approaches, or is it more incremental? Outside of the changes FDA is outlining, do you know of any recent technologies or approaches that are shifting the way computer validation is handled?

Miller: I would describe it as punctuated evolution. There is a fear of change, so companies stick with legacy, documentation-focused validation ‘just to be sure!’ They wait for others to show it’s okay to evolve. Often, the perspective is that there’s only one way to validate because, ‘That’s how we did it at company XYZ’. We must cultivate awareness that different systems need different approaches: browser-accessed database applications, data acquisition and processing applications, and real-time process control applications are three general categories that each call for a different validation approach.

In this vein, less emphasis is being placed on everything we intend a system to do; the focus is instead shifting toward intended uses and the associated data integrity and security risks. We used to revalidate if we moved an instrument from one lab bench to another. Today, we may not validate a change that affects only a graphical user interface if there is no impact on data integrity or security.


Understanding periodic review

PharmTech: Periodic review of good practices (GxP) systems is often not considered in discussions surrounding computer validation. Why is this needed, how does it differ from initial validation, and how does it factor into the bigger picture of ‘achieving validation’?

Miller: I’m surprised that periodic review might not be considered part of the post-implementation operation and maintenance procedures, because it is! Validation includes ensuring there are procedural controls around the operation and maintenance of systems during use. GxPs expect that people are trained on those procedures before they are authorized to use the computerized systems. Both EU Annex 11 for GMPs [good manufacturing practices] and ICH Q7 for APIs address periodic review.

The concept is that, once a system is validated for use in regulated operations, there should be procedures for maintaining it in the validated state. Change control is one such procedure: a change may not be made without an evaluation of its impact and suitable validation activities to return the system to its validated state if the change is allowed to proceed. Periodic review is more of a QA [quality assurance] activity: it evaluates the effectiveness of the operation and maintenance procedures and looks for any evidence that the system might no longer be operating in the validated state. Any discrepancies should be documented and a change control action initiated for resolution. Periodic review does not often result in revalidation unless the change control procedures are not being followed.

Isolating pain points

PharmTech: What are some of the most common pain points in computer validation? What, if anything, is being done to resolve them?

Miller: One is company culture: there is a targeted delivery date, but the project suffers too many delays, leaving less and less time for testing and finalization. Effective project management up front helps, so that stakeholders understand that slippage will be resolved with schedule delays, not with substandard shortcuts.

Something I’ve also seen frequently is that QA is not brought into the process until the project is in its final stages. Then either progress must stop for corrections, or QA is pressured to accede and reluctantly approve. Prevention comes from procedures and education on the importance of following them. I’ve told some in the past, ‘Retrospective validation is just an acknowledgement that proper validation and software development life cycle procedures were not followed from the beginning.’

Another is inadequate testing, such that serious defects are encountered at a late stage or even in production. This can be mitigated through hazard analysis and risk management starting at the requirements stage: use the results to proactively define the intended testing and data sets, and catch problems before the impact grows worse.
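As an illustration of what starting at the requirements stage can look like, here is a hypothetical sketch that ties each requirement’s hazard-analysis risk level to the depth of intended testing. The risk tiers, test types, and requirement names are assumptions for illustration, not drawn from any guidance.

```python
# A hypothetical sketch of risk-driven test planning. The risk tiers
# and test-depth rules are illustrative, not from any FDA guidance.
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    description: str
    risk: str  # "high", "medium", or "low", from hazard analysis

# Decided at the requirements stage: higher risk earns deeper testing.
TEST_DEPTH = {
    "high": ["scripted", "negative-input", "boundary", "scenario"],
    "medium": ["scripted", "negative-input"],
    "low": ["unscripted exploratory"],
}

def plan_tests(requirements: list[Requirement]) -> dict[str, list[str]]:
    """Map each requirement to the test types its risk level demands."""
    return {r.req_id: TEST_DEPTH[r.risk] for r in requirements}

if __name__ == "__main__":
    reqs = [
        Requirement("REQ-001", "Calculate batch yield", "high"),
        Requirement("REQ-014", "Render trend chart colors", "low"),
    ]
    print(plan_tests(reqs))
```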

Finally, waiting until the validation is mostly complete to prepare the summary report. People must reread and summarize all the actions, resulting in long delivery times and pressure to release the system for use, which delays the summary report even further. Instead, draft the summary report sections at the beginning and add to them as the validation proceeds. As soon as the last action is finished, only one item remains to complete in the summary report, and it is ready for approval, releasing the system for use in regulated operations.

Closing thoughts

PharmTech: Do you have any other thoughts you’d like to share on computer validation?

Miller: One final recommendation I have made over the years, and one where CSA concepts are already having an impact on testing: write test cases to tell the tester what to do, not how to do it! Legacy test case-writing practices have test authors run through the software, recording the steps they had to take to test the system (heavily scripted test cases). There are several problems with this approach.

Writing test cases in front of the running software documents what the system does, not what it should and should not do. It usually requires running through the scenario to get the steps written correctly, and dry-run testing then takes more time to confirm the test case was written correctly. The test case must then be reviewed and approved by others before it can finally be executed by someone else for the ‘official’ test documentation. There is no value in those dry runs; eliminating them saves much time and effort.

If we instead write the test case from the requirement and its risk assessment, telling the tester what to do but not how to do it, we get many benefits. Because we’re testing what we intend the system to do, not confirming what it actually does, we have a better chance of finding defects. There are no navigation steps to be written incorrectly or executed questionably, so errors are usually real system issues. Additionally, test authors can write the test cases while the system is still in development, those test cases can help direct development and mitigate issues up front, and the tester is free to explore the system’s operation as the test proceeds and track down anomalies.
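To illustrate the contrast, here is a hypothetical objective-based test case expressed as a simple data structure: it states the objective and acceptance criteria drawn from the requirement and its risk assessment, and deliberately omits navigation steps. All identifiers and field names are invented for illustration.

```python
# A hypothetical illustration of an objective-based test case: it tells
# the tester WHAT to verify, never HOW to navigate the software.
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    requirement: str       # traceability back to the risk assessment
    objective: str         # what to do
    acceptance: list[str]  # what must be true to pass
    # Note what is absent: no scripted navigation steps, so the tester
    # is free to explore the system and track down anomalies.

tc = TestCase(
    case_id="TC-042",
    requirement="REQ-007: reject out-of-range analyst entries",
    objective="Attempt to save sample results outside validated limits",
    acceptance=[
        "The system rejects each out-of-range value with a clear error",
        "No rejected value reaches the database or the audit trail",
    ],
)
```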

These are just my current thoughts on computer validation from experience with many hundreds of computer validation projects over the years. Not everyone will agree with them, but I hope they will consider the merits and offer useful alternatives. I’m always open to new approaches and improved processes with the general goal of keeping computer system validations practical yet defensible.

References

1. FDA. General Principles of Software Validation; Final Guidance for Industry and FDA Staff (CDRH and CBER, January 2002).

2. FDA. CDRH Proposed Guidances for Fiscal Year 2023 (CDRH, October 2022).

About the author

Grant Playter is the associate editor for Pharmaceutical Technology.

Article details

Pharmaceutical Technology
Vol. 47, No. 2
February 2023
Pages: 32-33

Citation

When referring to this article, please cite it as G. Playter. The Punctuated Evolution of Computer Software Validation. Pharmaceutical Technology 2023, 47 (2), 32-33.