Any method of analysis — whether measuring pH, turbidity, sensory, dO2, TPO, or CO2 — will have some inherent error. I think that it’s always best to acknowledge problems up front and be ready to deal with them, so today I’d like to talk about understanding best practices and ways to anticipate and quantify errors, especially if you are in the process of qualifying a new instrument.
I’ve seen a lot of instrumentation improvement through the years, ranging from ease of use and maintenance to complexity of capability. However, even the most sophisticated instruments need to be backed by statistical analysis and upfront testing to ensure reliable quality and minimal person-to-person variability. By learning the ways to test and verify an instrument during the demo period or just after purchase, we can learn the strengths and limitations of our analyzer, and know what best practices should be implemented before the data we gather is used in daily production.
Regardless of the type of analysis, the more statistical data we capture, the higher our certainty that we understand our instrumentation. But even large amounts of data won’t help if we don’t understand how our instruments fit into the context of the parameter being measured. For example, we may want to measure the turbidity of beer, but if we don’t understand that copious bubbles can throw off our measurements, and that we need to either de-carbonate our product or keep it under pressure, then we can gather all the data points we want and they still won’t tell us what we need to know.
So understanding the context of the thing we want to measure is our first step, but assuming we have that part under control, how can we then experiment with the data from our new instrument, to ensure our measurements are meaningful and that we can trust our results? Here are a few ideas:
- Take multiple measurements. If you’re using a portable instrument, measure several samples that are similar but not identical, and measure each one multiple times. For example, with a dO2 meter, choose three bright tanks that were recently filled and rotate between the tanks ten times. Record the readings and look at the variability. If this is done in a short amount of time on filtered beer, the readings shouldn’t decay much during testing.
- Have three different people run the same tests, each rotating between the same samples. Record the values and see if there is user variability. Do this ten times per person and analyze the data. Is there a technique issue that yields an erroneous result? If you can correct it, then that information can be used to educate future users.
- Understand the variability and expectations of your process. Say you are evaluating a new TPO analyzer. The more you know about your filler, the easier it will be to evaluate your instrument, but statistics can help you regardless. For example, if your instrument is able to measure TPO and not just dissolved oxygen, understand whether it can also compare results on shaken and unshaken packages. Most of the new systems sold today can do both and should yield the same TPO concentration for packages, whether shaken or unshaken. If the results don’t match, understand why. It may point to a problem with the instrument.
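Once the readings from the tank and operator tests are recorded, the basic analysis is just descriptive statistics: a mean and standard deviation per operator, and a comparison of the operator means to spot person-to-person bias. Here is a minimal sketch in Python using only the standard library; the operator names and dO2 readings are made-up illustrative numbers, not real data.

```python
import statistics

# Hypothetical dO2 readings (ppb): three operators, each measuring
# the same bright tank ten times. Values are illustrative only.
readings = {
    "operator_a": [18, 19, 18, 20, 19, 18, 19, 20, 19, 18],
    "operator_b": [19, 20, 19, 21, 20, 19, 20, 21, 20, 19],
    "operator_c": [18, 18, 19, 19, 18, 19, 18, 19, 18, 19],
}

# Per-operator repeatability: mean, sample standard deviation,
# and coefficient of variation (CV, as a percentage of the mean).
for name, vals in readings.items():
    mean = statistics.mean(vals)
    sd = statistics.stdev(vals)
    cv = 100 * sd / mean
    print(f"{name}: mean={mean:.1f} ppb, sd={sd:.2f} ppb, CV={cv:.1f}%")

# Spread of the operator means hints at person-to-person bias:
# if it is large relative to the per-operator sd, look for a
# technique difference before trusting the instrument's numbers.
means = [statistics.mean(v) for v in readings.values()]
print(f"between-operator range: {max(means) - min(means):.1f} ppb")
```

If the between-operator range is small compared to each operator’s own standard deviation, the variation you see is mostly instrument and sample noise; if it is large, a technique difference is the more likely culprit.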
My final thought is to use good statistics to drive your process control. Don’t base a decision on one data point. Whenever possible, validate your analyzers on a regular basis. I’ll have more on process and portable instrumentation validation in my next post.