Test Time Optimization - Minimize Test Cost Without Sacrificing Quality

If you work in the semiconductor industry (or any industry where products undergo extensive testing), reducing test time is a major lever for lowering manufacturing operating costs. Semiconductor test time is expensive and projected to increase, and it contributes a significant share of the total manufacturing cost of a semiconductor device. Reducing test time therefore translates into large savings, especially for high-volume, complex products.

While specialized software is available to accomplish this task, I wanted to demonstrate how it can be done in JMP using a custom Test Time Optimization Add-In that is available with two sample files. Add-ins let users extend JMP beyond its out-of-the-box functionality into custom applications: they are built with JMP scripting, which you can use to create virtually any customized workflow. I encourage you to download and install the add-in and try it yourself in JMP.

[Image: addin menu.png]

This add-in requires two sets of data in the .jmp data table format: run-on-error (ROE) test data and test time data. Run-on-error data is required so that the optimizer can determine which tests have overlapping test coverage and use that, combined with the test times, to determine the optimal test order. The optimization will not work with stop-on-fail test data. The .jmp files attached to this blog are examples of the required format for the ROE and test time data tables. (Note: in the ROE data, a value of 1 represents a test fail and 0 represents a test pass, and each row is a different unit being tested.)
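To make the expected layout concrete, here is a small illustrative sketch of the two data sets the add-in consumes. This uses Python/pandas DataFrames rather than .jmp tables, and the column names are hypothetical; refer to the attached sample files for the actual format.

```python
import pandas as pd

# Run-on-error (ROE) results: one row per tested unit, one column per test,
# 1 = test fail, 0 = test pass.
roe = pd.DataFrame(
    {"Test 1": [0, 1, 0, 0],
     "Test 2": [0, 1, 1, 0],
     "Test 3": [0, 0, 0, 1]},
)

# Test times: one row per test with its execution time in seconds.
test_times = pd.DataFrame(
    {"Test": ["Test 1", "Test 2", "Test 3"],
     "Time (sec)": [2.5, 1.0, 4.0]},
)
```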

[Image: GUI addin.png]

When you run the add-in from JMP's Add-In menu, it prompts you to specify the path where the test data files reside. From there, you specify the .jmp file containing the run-on-error test results and the .jmp file containing the test time for each test. Clicking OK generates the following report.

[Image: Test Time Optimizer.png]

The top of the report shows the full test time (time to run the full test flow), the optimized test time (time to run the optimal flow), and the resulting reduction. In this example, the full test flow takes 300 secs and the optimal flow takes 143 secs, a 52% reduction in test time. Based on the ROE data supplied, this reduction produces no test escapes, so the 52% savings has no impact on the quality of product shipped to customers.
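For reference, the reported reduction is simply the relative difference between the full and optimized flow times; a quick back-of-the-envelope check in Python:

```python
# Hypothetical check of the summary numbers at the top of the report.
full_time = 300.0       # total time of the full test flow (sec)
optimized_time = 143.0  # total time of the optimal flow (sec)

reduction = (full_time - optimized_time) / full_time
print(f"Test time reduction: {reduction:.0%}")  # ~52%
```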

The graphs summarize the data from the optimal test flow. The top-left graph shows the optimal flow order on the x-axis along with the cumulative test time and cumulative fail % of the flow. The add-in finds the test with the highest fails-per-second metric and places it first in the flow, then repeats the process until 100% of all failing units are accounted for. In this example, the full flow contained 58 tests, and the optimal flow has been reduced to the 28 tests shown in the graph (you can also refer to the data table "optimal_test_order.jmp", which the add-in creates and saves to the specified path). Because tests often have overlapping coverage, correlated tests can frequently be removed with no impact on quality; and by ordering tests by their fails/sec metric, you catch failures faster during production stop-on-fail testing.
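As a rough illustration of the greedy idea described above (not the add-in's actual JSL implementation), the sketch below reuses the hypothetical roe and test_times tables from the earlier example and assumes the metric counts only failing units not already caught by an earlier test in the flow. At each step it selects the remaining test with the highest uncovered-fails-per-second value and stops once every failing unit is caught by at least one selected test.

```python
def greedy_test_order(roe, test_times):
    """Order tests by uncovered-fails-per-second until every failing unit
    is caught by at least one selected test (illustrative sketch only)."""
    times = dict(zip(test_times["Test"], test_times["Time (sec)"]))
    # Units that fail at least one test in the full flow.
    uncovered = set(roe.index[roe.any(axis=1)])
    remaining = [t for t in roe.columns if t in times]
    order = []
    while uncovered and remaining:
        # Fails/sec = newly covered failing units divided by the test's time.
        def metric(t):
            return len(uncovered & set(roe.index[roe[t] == 1])) / times[t]
        best = max(remaining, key=metric)
        if metric(best) == 0:
            break  # remaining tests add no new coverage
        order.append(best)
        uncovered -= set(roe.index[roe[best] == 1])
        remaining.remove(best)
    return order

optimal_order = greedy_test_order(roe, test_times)
```

Tests that never appear in the returned order add no coverage beyond the tests already selected, which is why the 58-test flow in this example collapses to 28 tests.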

If you want to reduce test time further beyond the optimal flow, the curve in the top-right visual shows the trade-off: it relates test coverage to test time and shows how much coverage would be given up by cutting test time further. Note that any reduction beyond the optimal flow results in some quality impact due to test escapes. In this example, removing Test 18, Test 11, and Test 33 would further reduce test time from 143 to 127 secs, but coverage would drop from 100% to 99.5% (or 0.5% of tested units escaping). To decide whether that trade-off is worthwhile, you would weigh the money saved from an additional 16 secs of test time against the cost associated with a 0.5% test escape rate.
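One way to derive a coverage-versus-time curve like the one in the top-right visual, again as a hedged sketch continuing from the ordering above, is to walk the optimal flow cumulatively and record the total test time and the fraction of failing units caught at each prefix; dropping tests from the end of the flow then shows the time saved and the coverage lost.

```python
def coverage_vs_time(order, roe, test_times):
    """Cumulative test time and fail coverage for each prefix of the flow."""
    times = dict(zip(test_times["Test"], test_times["Time (sec)"]))
    failing_units = set(roe.index[roe.any(axis=1)])
    caught, cum_time, curve = set(), 0.0, []
    for t in order:
        cum_time += times[t]
        caught |= set(roe.index[roe[t] == 1])
        curve.append((t, cum_time, len(caught) / len(failing_units)))
    return curve

# Removing tests from the end of the ordered flow trades a small amount of
# coverage (test escapes) for a further reduction in total test time.
for test, t, cov in coverage_vs_time(optimal_order, roe, test_times):
    print(f"{test}: {t:.1f} sec cumulative, {cov:.1%} of failing units caught")
```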

I want to thank @HydeMiller for his great help in developing the add-in.

If you have questions, please comment on this blog post or contact me by email (mark.zwald@jmp.com).
