
Different results analysing Plackett Burman using ...


Feb 28, 2017 4:38 AM

Hey!

I only started using JMP Pro 12 yesterday and am currently trying to master the DOE functions.

Prior to my work, a student designed a Plackett-Burman matrix using 11 factors in 12 experiments, and experimental data for the protein concentration were obtained. These experiments were analysed using Statistica, which resulted in 4 significant factors with p-values below 0.01. Now we're using JMP Pro, hoping it will be more convenient to work with. But once I input the data and use the Analyze -> Modeling -> Screening tool as explained in the outline, I end up with the same 4 factors contributing most, but with p-values over 0.3. Any idea what I might have done wrong, or how these completely different results came to be?

Thank you :)


Accepted Solution

Mar 1, 2017 5:52 AM

To carry Peter's response forward a bit more, the P-B design is for screening. In your case, the 12 runs were designed to screen the 11 factors. The screening approach in general, and the P-B design in particular, are successful when the screening principles of *effect sparsity*, *heredity*, and *hierarchy* hold. The projection property of a screening design often allows you to build a model with main effects and two-factor interactions, without additional runs, for the **small** number of active factors among the large number of candidate factors that were screened. (Note that two-level designs preclude modeling curvature in the response.)
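For reference, the 12-run, 11-factor P-B geometry discussed here can be sketched in a few lines. This is a Python/NumPy stand-in (not JMP output): the generator row is the classic one for the 12-run design, and the check simply confirms the columns are mutually orthogonal, which is what lets all 11 main effects be estimated from only 12 runs.

```python
import numpy as np

# Classic generator row for the 12-run Plackett-Burman design;
# its 10 cyclic shifts plus a final row of all lows complete the design.
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
design = np.array([np.roll(gen, i) for i in range(11)]
                  + [-np.ones(11, dtype=int)])

# 12 runs x 11 factors, with every pair of columns orthogonal:
print(design.shape)                                     # (12, 11)
print(np.allclose(design.T @ design, 12 * np.eye(11)))  # True
```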

So the modeling is limited by the design; or, to flip that relationship around, an adequate design follows from the intended model. If your realistic model is more complicated (many active factors with many effects, linear and non-linear), then the design must also be more complicated than a screening design.

If (roughly) more than half of the candidate factors in the experiment are active, or if the magnitudes of the effects are not much larger than the standard deviation of the response, you might not detect the active factors well. In that case there is simply not enough data, and not all of the combinations of levels necessary, to sort it out.

The initial analysis might provide some insight, though. It is also possible to use the existing P-B data and augment it to increase the capability to model the response without starting over.

Learn it once, use it forever!

8 REPLIES

Feb 28, 2017 4:55 AM

The regression analysis in JMP should yield the same results as Statistica.

The Screening platform uses a different approach that operates under the screening principles of effect sparsity, hierarchy, and heredity. It creates contrasts, chosen to be orthogonal, based on these principles; in the case of a P-B design, they are likely the same as the parameter estimates. Contrasts are added under the principles until the model is saturated, so there are no degrees of freedom for estimating the error. Lenth's pseudo-standard error is computed instead and used to compute the t-ratios. These ratios do not follow a Student's t distribution, though, so an empirical sampling distribution is generated by Monte Carlo simulation. The simulation yields individual and simultaneous p-values for comparison-wise and experiment-wise control of type I errors.
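A minimal sketch of Lenth's pseudo-standard error as it is usually defined may help make this concrete. This is not JMP's internal code, and the contrast values below are made up for illustration: four small (inactive) contrasts and one large (active) one.

```python
import numpy as np

def lenth_pse(contrasts):
    """Lenth's pseudo-standard error from a set of effect contrasts."""
    c = np.abs(np.asarray(contrasts, dtype=float))
    s0 = 1.5 * np.median(c)          # initial robust scale estimate
    trimmed = c[c < 2.5 * s0]        # drop contrasts that look active
    return 1.5 * np.median(trimmed)

# Hypothetical contrasts: four small (inactive) and one large (active)
contrasts = [0.10, -0.20, 0.15, -0.05, 4.00]
pse = lenth_pse(contrasts)
print(pse)                  # 0.1875
print(contrasts[-1] / pse)  # ~21.3: a clearly active effect
```

Because the resulting t-ratios do not follow a Student's t distribution, JMP then simulates their null distribution to obtain the individual and simultaneous p-values; that step is omitted here.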

Learn it once, use it forever!

Feb 28, 2017 4:58 AM

See **Help** > **Books** > **Design of Experiments Guide** > **Chapter 10: The Fit Two Level Screening Platform**.

Learn it once, use it forever!

Mar 1, 2017 1:27 AM

Thank you for the quick reply!

Is this the full factorial designs chapter? It reads to me as though I would need more than the 12 experiments that were run for the 11 different factors?

Mar 1, 2017 3:02 AM

What I would like to know is which of the 11 factors are contributing to the change in protein yield in a significant way. In the 12 experiments, different compositions of the factors were used, and the protein yield was measured for all 12 experiments. We would like to maximize product yield. If I have 11 factors contributing to a single product yield per experiment, can I even analyse that using a simple regression analysis?

Sorry, kind of confused about the whole thing.


Mar 1, 2017 5:59 AM

The Screening platform acts as a bridge from your data to your regression analysis. Its purpose is to quickly identify the active effects under the principles of screening. You can click **Make Model** (opens the Fit Model dialog) or **Run Model** (runs Fit Least Squares directly) at the bottom to carry the currently selected contrasts as effects into a regression analysis.

It seems to me that if only 4 of the 11 candidate factors are really active, then you have a good chance of getting at least a first-order approximation to the response, possibly with some interaction effects. This approximation might be sufficient to optimize the response, or to guide you in augmenting the initial experiment with additional runs to improve the model and, hence, the optimization.
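The degrees-of-freedom point behind this can be illustrated with made-up numbers. This is a Python/NumPy sketch, not JMP output; the choice of active factors (columns 0, 2, 5, 8), their effect sizes, and the noise level are all assumptions. The saturated main-effects model spends all 12 runs on 12 parameters (intercept + 11 effects), leaving no error degrees of freedom, while a reduced model with only the 4 candidate factors leaves 7 for estimating error and computing ordinary p-values.

```python
import numpy as np

# 12-run Plackett-Burman design (classic generator row + cyclic shifts)
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
X = np.array([np.roll(gen, i) for i in range(11)] + [-np.ones(11)])

# Simulated response: only factors 0, 2, 5, 8 are active (assumed values)
rng = np.random.default_rng(0)
beta = np.zeros(11)
beta[[0, 2, 5, 8]] = [3.0, -2.5, 2.0, 1.5]
y = 10 + X @ beta + rng.normal(0, 0.5, size=12)

# Saturated model: intercept + 11 effects = 12 parameters for 12 runs,
# so 0 error df and no classical p-values. The reduced model with the
# 4 candidate factors keeps 12 - 5 = 7 error df:
A = np.column_stack([np.ones(12), X[:, [0, 2, 5, 8]]])
bhat, ssr, *_ = np.linalg.lstsq(A, y, rcond=None)
dof = 12 - A.shape[1]                 # 7 error degrees of freedom
mse = ssr[0] / dof
se = np.sqrt(mse * np.diag(np.linalg.inv(A.T @ A)))
print(dof)        # 7
print(bhat / se)  # large |t| ratios for the four active effects
```

The same accounting explains the original question in this thread: a saturated screening model cannot produce classical regression p-values at all, which is why the Screening platform falls back on Lenth's method and simulation instead.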

Learn it once, use it forever!