


Dec 11, 2013 8:20 AM

JMP 11: Custom Design user interface for power calculations

Last year, I wrote two blog posts about power analysis for designed experiments. Since then, JMP 11 was released, and the user interface for power analysis in the Custom Designer changed. This post introduces the new interface and shows how to use it profitably.

**Why did you change the interface?**

The primary reason for changing the interface comes from the problem of defining what a Signal to Noise Ratio means when the experiment includes categorical factors having more than two levels. With multilevel categorical factors, there are multiple signals, so a single ratio cannot capture all the possibilities.

A specific example will make this clear. The second edition of *Statistics for Experimenters* by Box, Hunter and Hunter (2005) has a penicillin yield example on page 146. There are two categorical factors. The first, Treatment, is a four-level factor describing four variants of the process of growing the penicillium mold. The second, Blend, is a five-level factor that describes variations in the growth medium (corn steep liquor).

The book provides the yield results of a 4x5 unreplicated full-factorial experiment. Figure 1 shows the 20-run Custom Design table with the observed responses entered into the editable Anticipate Response column.

**Why are you asking for the Anticipated Response? Is this really necessary?**

Many DOE experts like the idea of having the investigator guess the value of the response for each treatment combination. This serves a couple of purposes. First, people familiar with the process can spot factor combinations that will not work by carefully considering the runs one by one, which can save time and resources by avoiding useless runs. Second, analyzing the guessed responses can indicate which factor effects may not be well estimated because their coefficients are small. Finally, you do not have to fill in the Anticipated Response column: by default, it shows the values you would expect for each run if the default model were correct.

**What happens when you click the button in Figure 1?**

Clicking the button **Apply Changes to Anticipated Responses** updates the Power Analysis report shown in Figure 2. This report displays the root mean squared error (RMSE) and the estimated coefficients implied by the values you supply in the Anticipated Response column. Notice the button named **Apply Changes to Anticipated Coefficients**. If you change the coefficients in the model and click that button, the anticipated responses in Figure 1 change accordingly. So you can supply either responses or coefficients, whichever seems easier.

**How do you interpret the Anticipated Coefficients? Why are there only three Treatment coefficients when there are four levels of Treatment?**

Yes, there are four levels of the Treatment factor but only three treatment coefficients. The assumed fourth coefficient is minus the sum of the other three. So, the coefficient of Treatment 4 would be –1*(–2 –1 + 3) = 0. Similarly, the fifth Blend coefficient is –1*(6 –3 –1 + 2) = –4. The Anticipated Coefficients values you see in Figure 2 are the same as the results of running the Fit Model platform on the data shown in Figure 1.

To see this, I ran Fit Model on the data in the Design outline node. The results are in Figure 3. **Tip:** You can make a JMP data table out of a table in a report by right-clicking inside the table in the report and choosing **Make into Data Table**.

The Root Mean Square Error in the Summary of Fit outline of Figure 3 shows the value 4.339739, which matches the Anticipated RMSE in Figure 2. The Estimates column in the Parameter Estimates outline also matches the Anticipated Coefficients in Figure 2.
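For readers who want to verify these numbers outside JMP, the same coefficients and RMSE fall out of an ordinary effects-coded least-squares fit. The sketch below uses the penicillin yields from the Box, Hunter and Hunter example; `effects_columns` is a helper written for this illustration, not a JMP or NumPy function:

```python
import numpy as np

# Penicillin yields from Box, Hunter & Hunter (2005), p. 146:
# rows = Blend 1-5, columns = Treatment 1-4.
yields = np.array([
    [89, 88, 97, 94],
    [84, 77, 92, 79],
    [81, 87, 87, 85],
    [87, 92, 89, 84],
    [79, 81, 80, 88],
], dtype=float)

def effects_columns(level, n_levels):
    """Effects coding: levels 0..n-2 get an indicator column;
    the last level is coded -1 in every column."""
    row = np.zeros(n_levels - 1)
    if level < n_levels - 1:
        row[level] = 1.0
    else:
        row[:] = -1.0
    return row

# Build the 20x8 model matrix: intercept, 3 Treatment columns, 4 Blend columns.
X, y = [], []
for blend in range(5):
    for trt in range(4):
        X.append(np.concatenate(([1.0],
                                 effects_columns(trt, 4),
                                 effects_columns(blend, 5))))
        y.append(yields[blend, trt])
X, y = np.array(X), np.array(y)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
rmse = np.sqrt(resid @ resid / (len(y) - X.shape[1]))  # 20 - 8 = 12 error df

print(np.round(beta, 3))  # intercept 86, Treatment -2, -1, 3, Blend 6, -3, -1, 2
print(round(rmse, 6))     # 4.339739
```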

The effect of Treatment in the Effect Tests outline of Figure 3 has a p-value of 0.3387, so this effect is not significant. This is not surprising since Figure 2 shows that the power associated with the full treatment effect is only 0.251.

**What if I want to know how many runs I need to do to have a high power of observing a significant Treatment effect?**

To answer this question, you can click the Back button in the Custom Designer and add more runs to the design. The original design had 20 runs. I chose 60 runs instead and got the power analysis shown in Figure 4. Now the power for the Treatment is 0.776. Suppose the original 20 runs had been a pilot study. This approach lets you know that you would need to replicate your pilot study two more times to have a high probability of seeing a significant Treatment effect, assuming that the RMSE and the coefficients from the pilot study are correct.
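Assuming the power calculation follows the standard noncentral-F approach, where the noncentrality parameter is the anticipated effect sum of squares divided by the error variance, the run-size comparison above can be sketched with SciPy. The helper `treatment_power` is hypothetical (not a JMP function), and the formula is my reading of the method, not JMP's documented internals:

```python
from scipy import stats

sigma2 = 4.339739 ** 2      # anticipated RMSE squared (error variance)
effects = [-2, -1, 3, 0]    # anticipated Treatment coefficients, all 4 levels

def treatment_power(n_runs, alpha=0.05, n_params=8, df1=3):
    """Power of the 3-df F test for Treatment in the balanced 4x5 design,
    assuming n_runs is a multiple of 4 and an 8-parameter main-effects model."""
    reps_per_level = n_runs // 4
    ncp = reps_per_level * sum(e ** 2 for e in effects) / sigma2
    df2 = n_runs - n_params
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    return stats.ncf.sf(f_crit, df1, df2, ncp)

# These come out near the 0.251 (20 runs) and 0.776 (60 runs)
# values reported in Figures 2 and 4.
print(round(treatment_power(20), 3))
print(round(treatment_power(60), 3))
```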

**Any conclusions?**

We replaced the Signal to Noise Ratio in JMP 10 with the Anticipated Coefficients (signals) and the RMSE (noise). For an effect that involves multiple coefficients, as in the four- and five-level categorical factor effects discussed above, a single Signal to Noise Ratio could represent an infinite number of possible sets of coefficients. In JMP 10, we reported the results of the worst possible set, which seems a bit too conservative. In JMP 11, by specifying the exact coefficients of interest, we make the power calculations unique.

Stay tuned. The next post will explain power calculations for screening designs with and without Hard to Change factors.
