ryan_lekivetz


Using a covering array to identify the cause of a failure

My last blog entry discussed using a covering array to test the Preferences for the Categorical platform. While the hope is that all of the tests pass, in this post we consider what we can do when one of those tests fails. When you use “Make Table” after creating a design in the Covering Array platform, there are two important pieces to pay attention to: the first column, labelled “Response”, which starts out with missing values, and a table script called “Analysis”. The Response column uses a 1 or a 0 to indicate whether a particular run passed or failed according to whatever we’re measuring. For the Categorical platform, a 1 is recorded if the platform behaves as expected, and a 0 if there’s a fault of some kind. The “Analysis” script is meant to be run once the Response column is filled in.

[Screenshot: the covering array data table, with the Response column and the Analysis table script]
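To make the table structure concrete outside of JMP, here is a minimal Python sketch of what that data table represents: one row per test run, one column per preference option, and a Response column that starts out empty and is filled in after testing. The option names and levels are hypothetical stand-ins, not the actual Categorical preferences.

```python
# Toy stand-in for the table that "Make Table" produces. Each run sets every
# preference option to a level; Response is filled in by hand after testing
# (1 = platform behaved as expected, 0 = a fault was observed).
runs = [
    {"Option A": "on",  "Option B": "on",  "Option C": "off", "Response": None},
    {"Option A": "off", "Option B": "off", "Option C": "on",  "Response": None},
    {"Option A": "on",  "Option B": "off", "Option C": "on",  "Response": None},
    {"Option A": "off", "Option B": "on",  "Option C": "off", "Response": None},
]

# After exercising the platform with each run's settings, record the outcome:
for run in runs:
    run["Response"] = 1   # change to 0 for any run that exposes a fault
```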

When performing the test, ideally we observe all 1s in the Response column. In the Data Table provided on the File Exchange, I went through and collected some hypothetical data: each of the runs passed except for the last one. What do we do with this failure? It would be nice to narrow down the potential causes.

Analysis

The first place to look is whether any single factor could have caused the failure. Since each preference setting in the failing run also occurs in other rows that passed, no single option on its own can be the cause. The next likely candidate is a 2-option cause. The failing row contains 45 Choose 2 = 990 different 2-option combinations, which on its own doesn’t seem very informative. However, many of these combinations also appear elsewhere in the design, and the platform passed those tests, so they can be eliminated as potential causes. Going through the list of candidates and eliminating those that appeared in a passing run would be a tedious task; fortunately, that’s exactly what the Analysis script takes care of for us. Running that script:
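To see what that elimination amounts to, here is a small Python sketch of the same idea (an illustration only; it is not the JSL behind the Analysis script). It enumerates every pair of option settings in each failing run and discards any pair that also appears in a run that passed; whatever survives is the list of candidate causes. The four-option design and the level names are hypothetical.

```python
from itertools import combinations

def candidate_pairs(runs, factors):
    """Return the 2-option combinations that occur in a failing run
    but never in any passing run: the surviving candidate causes."""
    passing, failing = set(), set()
    for run in runs:
        # Every pair of (option, level) settings present in this run.
        pairs = set(combinations(sorted((f, run[f]) for f in factors), 2))
        if run["Response"] == 1:
            passing |= pairs
        else:
            failing |= pairs
    # A pair that also shows up in a passing run cannot be the cause.
    return failing - passing

# Hypothetical 4-option example: only the last run fails.
factors = ["A", "B", "C", "D"]
runs = [
    {"A": "on",  "B": "on",  "C": "off", "D": "off", "Response": 1},
    {"A": "off", "B": "off", "C": "on",  "D": "on",  "Response": 1},
    {"A": "on",  "B": "off", "C": "on",  "D": "off", "Response": 1},
    {"A": "off", "B": "on",  "C": "off", "D": "on",  "Response": 0},
]

for pair in sorted(candidate_pairs(runs, factors)):
    print(pair)   # only pairs that never appear in a passing run remain
```

In a full strength-2 covering array, every pair of option settings is guaranteed to appear in at least one run, which is what makes this elimination so effective when only one run fails.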

[Screenshot: the Analysis report listing the potential causes for the failed run]

The Analysis report shows the potential causes for the failure, greatly reduced from the 990 pairs contained in the failing row to just 16. What’s more, the follow-up testing has been simplified to checking those pairs.

Any additional information makes this process even easier. In our example, the tester knows that the “ChiSquare Test Choices” were recently updated, and can look at those 2 cases first. It’s also worth noting that clicking on any of the potential causes highlights the failing row and the corresponding columns, which is useful if you’re dealing with many rows and/or columns and want a quick way to subset the table.

Final Thoughts

We went from a task of testing preferences that looked impossible to something that gave reasonable coverage in just 16 runs. We could go even further by creating a strength 3 covering array: with some optimizing, I found a design with 65 runs whose 4-coverage was over 96% (a rough sketch of what a coverage number like that means appears below). Constraints where some combinations cannot be run together can also be accommodated with disallowed combinations.
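For anyone curious about a figure like “4-coverage over 96%”: coverage at strength t is simply the fraction of all possible t-way level combinations that appear in at least one run of the design. Here is a rough Python sketch of that definition, using a toy two-level design; it is an illustration only, not how JMP computes its coverage metric.

```python
from itertools import combinations

def t_coverage(runs, levels, t):
    """Fraction of all t-way level combinations covered by at least one run.

    runs   : list of dicts mapping factor name -> level
    levels : dict mapping factor name -> list of possible levels
    """
    factors = sorted(levels)
    covered = set()
    for run in runs:
        for combo in combinations(factors, t):
            covered.add(tuple((f, run[f]) for f in combo))
    total = 0
    for combo in combinations(factors, t):
        size = 1
        for f in combo:
            size *= len(levels[f])   # level combinations for this set of factors
        total += size
    return len(covered) / total

# Toy example: 4 two-level factors, 4 runs, evaluated at strength 2.
levels = {f: ["on", "off"] for f in "ABCD"}
runs = [
    {"A": "on",  "B": "on",  "C": "off", "D": "off"},
    {"A": "off", "B": "off", "C": "on",  "D": "on"},
    {"A": "on",  "B": "off", "C": "on",  "D": "off"},
    {"A": "off", "B": "on",  "C": "off", "D": "on"},
]
print(t_coverage(runs, levels, 2))   # about 0.83 for this toy design
```

Any luck using covering arrays in your own work? Leave me a comment and let me know. Thanks for reading!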
