How Can You Claim to Do Risk-Based Monitoring If You Don’t Know How Much Noise Is In Your System?
My last article concerned the overall failure to use available data and technology to make better decisions, which is endemic within industry as well as in people's personal lives. In fact, Conway's Law would suggest that how individuals behave shapes how organizations, and the industry as a whole, become structured.
A poorly-understood – and even more poorly-mitigated – problem is not having a good handle on your organization's signal-to-noise ratio (SNR). The concept comes to us from the field of circuits and transistors, but it applies to any dataset in which you're trying to discriminate the pieces of information that actually matter from those that are just white noise (or error). Here's a major hint: that is every dataset.
Any time observational data are collected for any process in the world, there is error variance, which shows up as unwanted spread in your data and analyses. I would like to tell you that it's entirely avoidable; it isn't. Let me explain why: if we go back almost one hundred years to the foundations of quantum mechanics, we learn that Heisenberg's Uncertainty Principle puts a limit on how precisely you can measure anything. Without going into the mathematics, the important point is that this limit of knowledge isn't imposed by a lack of quality measuring instruments – it's due to the very functioning of Nature at the subatomic scale. So there is always noise in every dataset. As it turns out, most of it arises on a far more macro scale than the quantum one, but the point stands: dealing with noise is not something we will ever get away from.
Signal-to-noise ratio is the inverse of the coefficient of variation – that is, the ratio of the mean to the standard deviation of your measurements.
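As a minimal sketch of that definition (the readings below are hypothetical):

```python
import statistics

def snr(measurements):
    """Signal-to-noise ratio: mean / standard deviation.
    This is the inverse of the coefficient of variation (CV = sd / mean)."""
    mean = statistics.fmean(measurements)
    sd = statistics.stdev(measurements)
    return mean / sd

# Six repeated readings of a nominally identical quantity
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
print(round(snr(readings), 1))  # → 53.7
```

A higher SNR means the spread of your measurements is small relative to the quantity being measured; as the ratio falls toward 1, the noise is as large as the signal itself.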
Now, the noise in your organization's measurements can overwhelm much of the 'true' signal of what's really going on (and going wrong). Is this just a theoretical, academic concern? No. If the magnitude of error in your measures is larger than the actual information you want to know about, you'll find yourself chasing problems that don't actually exist (see Figure 1 below), or wondering why your CAPAs or action plans never seem to 'really' fix your issues and the problems keep coming back.
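To make the false-alarm effect concrete, here is a small simulation (all values hypothetical, not drawn from any real process): the underlying process never shifts, but its measurement noise is three times its genuine variation. An analyst who sets alarm limits as if the raw readings were noise-free ends up flagging roughly a third of perfectly normal points:

```python
import random

random.seed(42)

TRUE_MEAN = 100.0   # the process never actually shifts
PROCESS_SD = 1.0    # genuine process variation (the "signal")
NOISE_SD = 3.0      # measurement error (the "noise"), 3x the signal

N = 10_000
false_alarms = 0
for _ in range(N):
    true_value = random.gauss(TRUE_MEAN, PROCESS_SD)
    observed = true_value + random.gauss(0, NOISE_SD)
    # Limits set assuming the data contain only real process variation:
    if abs(observed - TRUE_MEAN) > 3 * PROCESS_SD:
        false_alarms += 1

print(f"{false_alarms / N:.1%} of points flagged, with zero real shifts")
```

Every one of those flagged points is a phantom problem: time spent investigating, and a CAPA that can never 'stick', because there was nothing to fix.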
Here's a quick check for whether you may have an SNR problem: has your organization ever conducted a Gage R&R on the processes or instruments that are cataloging or measuring your data? If you answered 'No', you probably have an SNR problem. In my experience, well over 99% of businesses don't do this; measurement systems analysis is so rare that I've only ever seen it performed twice without having first requested it. So why do it, then? Because if you want your business to differentiate itself from your competitors, you need data you can trust.
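A stripped-down version of the idea behind a Gage R&R can be sketched as follows. The parts, values, and acceptance threshold are hypothetical, and a real study would use a designed two-way layout that also captures reproducibility (operator-to-operator) variation; this sketch only separates repeatability (equipment noise) from part-to-part variation:

```python
import statistics

# Hypothetical data: 5 parts, each measured 3 times on the same gage
measurements = {
    "part1": [10.1, 10.2, 10.0],
    "part2": [12.4, 12.5, 12.3],
    "part3": [ 9.7,  9.8,  9.6],
    "part4": [11.0, 11.1, 10.9],
    "part5": [10.5, 10.6, 10.4],
}
n_trials = 3

# Repeatability: pooled within-part variance (the gage's own noise)
repeatability_var = statistics.fmean(
    statistics.variance(v) for v in measurements.values()
)

# Part-to-part variation: variance of the part means, less the share
# of repeatability noise carried into each mean (var / n_trials)
part_means = [statistics.fmean(v) for v in measurements.values()]
part_var = max(statistics.variance(part_means) - repeatability_var / n_trials, 0.0)

total_var = repeatability_var + part_var
pct_grr = 100 * (repeatability_var / total_var) ** 0.5
print(f"%GRR = {pct_grr:.1f}%")  # under ~10% is conventionally acceptable
```

If the measurement system's share of total variation (%GRR) is large, much of what looks like process behavior in your data is really the instrument talking, not the process.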
Key takeaway: If you want better decision-making, get better data.
If you prefer fixing the problems that aren’t… then continue using legacy measurement methods.
ICH E6 Rev 2 – GCP Adopts a Risk-Based Stance
As ICH E6 Rev 2 sets billions of hours of work in motion across the pharmaceutical industry, you would be well-advised to understand how some of it can help you.
The three main areas of responsibility are:
- Risk-based quality management
- Risk-based monitoring
- CRO oversight
Quality Risk Management (QRM) principles are to be widely adopted in clinical development. This should include approaching Quality by Design (QbD) in such a way that good outcomes are systematized and not left to chance (or hope).
“The sponsor should develop a systematic, prioritized, risk-based approach to monitoring clinical trials… A combination of onsite and centralized monitoring activities may be appropriate. The sponsor should document the rationale for the chosen monitoring strategy.”
The intent, based on discussions with ICH, is that only after developing appropriate quality management systems and risk-based management processes can organizations adjust or reliably scale down onsite monitoring visits or source data verification (SDV).
Focusing on what matters most is the underlying principle of risk-based quality management. As mentioned in my previous article, not everything has the same level of latent risk, and shouldn’t be treated as if it does.
Over time, clinical trials have become incredibly complex, and unnecessarily so in many cases. Part of the ICH E6 Rev 2 purpose was to simplify the design of clinical trials. The ICH GCP Guideline encourages innovative approaches for conducting and monitoring clinical trials. There are also standards for the use of technology tools and the management of electronic records and critical documents.
Additionally, the QbD aspects call on organizations to design clinical trials in such a way that quality is ensured through careful, fact-driven planning and execution. These designs are to draw on well-defined processes, systematized knowledge from past experience, real-world process data, etc., to pare down elements that don't reflect reality and could cause defects and deviations to occur (such as inclusion/exclusion criteria). There's usually a wide gulf between the should-be process and the actually-is process. Remember, from what I noted above: the less well-defined a process is (in a QbD sense), and the more variance it carries, the more likely gaps and problems become, manifesting in outcomes ranging from protocol amendments to deviations and failures.
Putting the right systems in place is the first order of business. Then understand how well things actually function, and don't omit a critical appraisal of your business's level of signal to noise. Make sure you perform risk-based monitoring (RBM) properly: done wrong, you might as well not do it at all. In fact, your outcomes may be worse, and introduce more risk to the business, than if you had done nothing. Ensure that your RBM approach is holistic and data-driven, and that you really intend to adjust your oversight based on RBM findings and act on your risks, in priority order.
Likewise, oversight of CRO third parties should be codified; otherwise there is a risk of misinterpreting what oversight entails, and again the introduction of error (noise) from different sponsors requesting different criteria, or different CROs reporting against different criteria. This makes it difficult to know whether data from different third-party sites are truly… different, or whether they were just collected differently, or the CRO's perception of the request differed from what the sponsor intended. All of these things matter, and they are what the industry will be cleaning up over the next several years.
If you have any questions, please let me know about them below.
Ben Locwin, PhD, MBA, MS, is President at Healthcare Science Advisors and is an author of a wide variety of scientific articles for books and magazines. He is an expert contact for the American Association of Pharmaceutical Scientists (AAPS), a committee member for the American Statistical Association (ASA), and also a consultant for many industries including biological sciences, pharmaceutical, psychological, and academic. Follow him at @BenLocwin.