DrThWillms
Level I

Are the DSDs of JMP really definitive screening designs?

I compared the exact structure of the DSDs generated by JMP and by RStudio. In RStudio, the layout corresponds exactly to the one given by Bradley Jones in his article. In JMP this is true only to a limited extent, apart from the zeros.

As an example, I examined the design with k = 6. In the first row the values do not agree at all (orange), except for the zero and the second column. In the other rows the values are mostly different (pink), but often exactly swapped relative to the values in Bradley's article (white). That would probably be fine, but at the same time one pair of values is also identical that should normally be swapped relative to the others.

DrThWillms_1-1719828601736.png

For comparison, I have shown the scheme from Jones below it once more.

Why are the designs in JMP not identical to those of Jones?

 

Regards

 

Thomas Willms

 

1 ACCEPTED SOLUTION

BradJones
Level I

Re: Are the DSDs of JMP really definitive screening designs?

As the inventor of DSDs (along with my co-author Chris Nachtsheim), I think I can explain what you are seeing. In our original article in JQT (2011), we created DSDs using an optimization algorithm. That is, with the locations of the zeros fixed, we chose the values of -1 and +1 in order to maximize the D-efficiency of the design, subject to the additional restriction that all the designs had to be foldover designs. A final restriction was the addition of a center run, which allowed these designs to fit all the pure quadratic effects along with the main effects and the intercept.

 

As a result, each row containing a zero and some number of +1s and -1s is matched by another row having the zero in the same position but with all the +1s replaced by -1s and vice versa. The utility of this is that it makes the main effects orthogonal to the second-order effects (two-factor interactions (2FIs) and quadratic effects).
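To spell out the cancellation behind this (my notation, not taken from the article): if a run has factor settings x = (x1, ..., xk), its foldover partner is -x = (-x1, ..., -xk), so for every main effect xi and every second-order term,

xi*(xj*xk) + (-xi)*((-xj)*(-xk)) = 0   and   xi*xj^2 + (-xi)*xj^2 = 0.

Summing over all foldover pairs (the center run contributes zero to every column) makes each main-effect column orthogonal to every 2FI column and every quadratic column.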

 

In our paper we created designs, each with an odd number of runs, for every number of factors from 4 (9 runs) up to 30 (61 runs). We put these designs in the supplementary materials. I imagine that the R implementation copied these designs.

 

One problem with the original version of DSDs is that all the designs using an odd number of factors were NOT orthogonal for the main effects. For DSDs using an even number of factors, we found orthogonal main effects plans for 6, 8, and 10 factors. However, our algorithmic approach failed to find orthogonal main effects designs for even numbers of factors greater than 10. A year later (2012), three Chinese authors introduced DSDs built from conference matrices. These square matrices exist for many even numbers of rows (and columns). The authors pointed out that stacking a conference matrix (C), its foldover (-1*C), and a row of zeros produces an orthogonal main effects plan for any number of factors for which a conference matrix exists. Thus, the DSDs based on the conference matrix construction are slightly better (more D-efficient) than the original DSDs created using the optimization approach.
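To make this construction concrete, here is a minimal NumPy sketch of the idea (my own illustration, not code from JMP or from the paper): it hard-codes one order-6 conference matrix, stacks C, its foldover -C, and a center run, and then checks the orthogonality properties described above.

import numpy as np

# One 6x6 conference matrix: zero diagonal, +1/-1 everywhere else.
C = np.array([[ 0,  1,  1,  1,  1,  1],
              [ 1,  0,  1, -1, -1,  1],
              [ 1,  1,  0,  1, -1, -1],
              [ 1, -1,  1,  0,  1, -1],
              [ 1, -1, -1,  1,  0,  1],
              [ 1,  1, -1, -1,  1,  0]])

# 6-factor DSD: the conference matrix, its foldover, and one center run (13 runs).
D = np.vstack([C, -C, np.zeros((1, 6), dtype=int)])

print(D.T @ D)        # 10 * identity: the main effects are mutually orthogonal
print((D**2).T @ D)   # all zeros: main effects are orthogonal to the quadratic effects

Any order-6 conference matrix gives the same two results, which is why two DSDs for 6 factors can look different and still be equally valid.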

 

JMP subsequently began to use the conference matrix approach for creating DSDs. For cases requiring an odd number of factors, we created a conference matrix with one extra factor, so that the number of factors was even, and then constructed the DSD after dropping the last column. Thus, for 5 factors, instead of 11 runs, JMP's designs have 13 runs. However, these designs have orthogonal main effects, whereas our original DSDs with an odd number of factors were never orthogonal. We thought that requiring an extra two runs was usually going to be worth it in order to have an orthogonal design. There might be some cases where those two extra runs are beyond the budget for the experiment; in that case one could use our original approach.
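Continuing the sketch above (same hard-coded conference matrix; again my own illustration, not JMP's code), the odd-factor case is just the even-factor design with the last column dropped:

import numpy as np

C = np.array([[0,1,1,1,1,1],[1,0,1,-1,-1,1],[1,1,0,1,-1,-1],
              [1,-1,1,0,1,-1],[1,-1,-1,1,0,1],[1,1,-1,-1,1,0]])

D6 = np.vstack([C, -C, np.zeros((1, 6), dtype=int)])  # 13-run DSD for 6 factors
D5 = D6[:, :5]                                        # drop the last column: 13 runs, 5 factors

print(D5.T @ D5)       # still 10 * identity: the 5 main effects remain orthogonal
print((D5**2).T @ D5)  # still all zeros: main effects orthogonal to quadratics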

 

I should mention that JMP's current version can generate DSDs with an arbitrarily large number of factors. We use a construction approach due to Paley (an algebraist), as well as some constructions due to other pure mathematicians. JMP can currently generate all but three or so of the conference matrices up to the 1,000x1,000 case.
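For anyone curious about the Paley construction mentioned here, the simplest case (a prime q with q ≡ 1 mod 4, which yields a symmetric conference matrix of order q + 1) can be sketched in a few lines of NumPy. This is only an illustration of the idea, not JMP's implementation, which covers many more orders:

import numpy as np

def paley_conference(q):
    """Symmetric conference matrix of order q+1 for a prime q with q % 4 == 1."""
    # Quadratic character mod q via Euler's criterion: 0, +1 (residue), -1 (non-residue).
    chi = lambda a: 0 if a % q == 0 else (1 if pow(a % q, (q - 1) // 2, q) == 1 else -1)
    Q = np.array([[chi(j - i) for j in range(q)] for i in range(q)])  # Jacobsthal matrix
    ones = np.ones((1, q), dtype=int)
    return np.block([[np.zeros((1, 1), dtype=int), ones], [ones.T, Q]])

C14 = paley_conference(13)                             # 14x14 conference matrix (q = 13)
print(np.array_equal(C14.T @ C14, 13 * np.eye(14)))    # True: C'C = (m - 1) * I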

 

Below is a one-line JSL script that generates a table containing the 12x12 conference matrix.

As Table( Conference Matrix( 12 ) );

 

DSDs are a very useful special case of minimal aliasing designs, which we also constructed using an optimization approach with fewer restrictions. These designs are also available in JMP by changing the optimality criterion in the red triangle menu of the Custom Design tool.

 

I have now retired from JMP, but I will answer questions addressed to me at brad.jones@adsurgo.com.


6 REPLIES
jthi
Super User

Re: Are the DSDs of JMP really definitive screening designs?

Have you tried checking the JMP documentation to see whether it explains the differences? See Design of Experiments Guide > Definitive Screening Designs (or, perhaps more to the point, Design of Experiments Guide > Definitive Screening Designs > Statistical Details for Definitive Scr... )

-Jarmo
DrThWillms
Level I

Re: Are the DSDs of JMP really definitive screening designs?

Concerning the description of a DSD, I see no difference from Bradley Jones. Concerning the results, I see the difference I explained above. The points are not situated at the same positions, and I don't see where it is explained why different points are chosen.

Victor_G
Super User

Re: Are the DSDs of JMP really definitive screening designs?

Hi @DrThWillms,

 

I think reading the second link provided by @jthi will lead you to an explanation:
DSDs are built on conference matrices, and very often several conference matrices of a given order are available; they may also be obtained from one another by flipping the signs of some rows and/or columns (and by permuting rows and/or columns).

So depending on which conference matrix you start from, you can have slightly different but equivalent designs.
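As a small NumPy illustration of this point (my own example, not from the JMP documentation): flipping the signs of a row and its matching column and then permuting rows and columns gives a different-looking matrix that still satisfies the conference-matrix property, and therefore still generates a valid DSD.

import numpy as np

# A 6x6 conference matrix (zero diagonal, +1/-1 off-diagonal).
C = np.array([[0,1,1,1,1,1],[1,0,1,-1,-1,1],[1,1,0,1,-1,-1],
              [1,-1,1,0,1,-1],[1,-1,-1,1,0,1],[1,1,-1,-1,1,0]])

C2 = C.copy()
C2[2, :] *= -1          # flip the signs of row 2 ...
C2[:, 2] *= -1          # ... and of column 2 (the diagonal zeros stay in place)
C2 = C2[::-1, ::-1]     # reverse row and column order (same permutation on both)

print(np.array_equal(C.T @ C, 5 * np.eye(6)))    # True
print(np.array_equal(C2.T @ C2, 5 * np.eye(6)))  # True: C2 is still a conference matrix
print(np.array_equal(C, C2))                     # False: but its entries differ from C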

 

This situation is very similar to a fractional factorial design of size 2^(k-1): depending on which half fraction you choose, you may end up with two designs with different run settings but the same properties and performance. One example of this situation is here: Is it possible to choose which fraction in a fractional factorial design?
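As a quick sketch of that analogy (my own example, not taken from the linked post): the two half fractions of a 2^(3-1) design contain different runs but have exactly the same information matrix.

import numpy as np
from itertools import product

full = np.array(list(product([-1, 1], repeat=3)))   # full 2^3 factorial in A, B, C

abc = full[:, 0] * full[:, 1] * full[:, 2]
half_plus  = full[abc == +1]    # half fraction with defining relation I = +ABC
half_minus = full[abc == -1]    # half fraction with defining relation I = -ABC

print(half_plus)                # different runs ...
print(half_minus)
print(np.array_equal(half_plus.T @ half_plus,
                     half_minus.T @ half_minus))    # True: ... same X'X, same properties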

 

In case of doubt, you can still evaluate the two designs you provided and compare them, and see that they have the same performance and properties:

Victor_G_0-1719833579580.png

Attached are the two designs you provided in JMP format if you want to reproduce the comparative analysis.

 

So back to your question: yes, the DSDs created by JMP really are Definitive Screening Designs (and @bradleyjones wrote quite a few articles about DSDs in JMP:
Choosing the right tool to design your experiment in JMP
Proper and improper use of Definitive Screening Designs (DSDs)
...)


I hope this answer clears up your doubts,

Victor GUILLER
L'Oréal Data & Analytics

"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)
DrThWillms
Level I

Re: Are the DSDs of JMP really definitive screening designs?

Hello,

OK, I admit the provocation was slightly voluntary. On one side, I saw and still see that the foldover principle also works in the case of the DSD given by JMP. However, on the other side, I wondered why some points could be the same whereas some points are different. I mean: I would have been less astonished if all points were exactly inverted. On the other side, I wonder why JMP is the only program where everything is not the same as in other programs, which makes it difficult to compare (see also the randomization topic). I once asked whether I could take a DSD from another program and use it in JMP for evaluation. I no longer see how this could be possible if everything is so different. Why don't you just take the same values as in the article, if they are equivalent anyway? Perhaps this program is not as practical as I thought.

Victor_G
Super User

Re: Are the DSDs of JMP really definitive screening designs?


@DrThWillms wrote:

Hello,

OK, I admit the provocation was slightly voluntary. On one side, I saw and still see that the foldover principle also works in the case of the DSD given by JMP. However, on the other side, I wondered why some points could be the same whereas some points are different. I mean: I would have been less astonished if all points were exactly inverted. On the other side


As you describe, "some points are the same whereas some points are different" because the conference matrices used for DSDs have a specific property to respect (see Structure of Definitive Screening Designs):

Victor_G_0-1719840629895.png

If you simply change the signs of a conference matrix, this property is no longer respected: the product of the modified matrix and its transpose will no longer equal a multiple of the identity matrix. So the "modified" matrix will no longer be a conference matrix.


Here is the result of the matrix multiplication using the conference matrix of order 6 from the Wikipedia article (which respects the property seen above):

IMG_20240701_155932.jpg
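For anyone who wants to reproduce this check without the image, here is the same computation in NumPy, using a standard order-6 conference matrix (it may differ from the exact matrix shown above by row/column operations, which do not change the result):

import numpy as np

C = np.array([[0,1,1,1,1,1],[1,0,1,-1,-1,1],[1,1,0,1,-1,-1],
              [1,-1,1,0,1,-1],[1,-1,-1,1,0,1],[1,1,-1,-1,1,0]])

print(C.T @ C)    # 5 * identity matrix, i.e. C'C = (m - 1) * I with m = 6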

If you simply flip the signs of this conference matrix, you do not preserve the property of the original conference matrix:

IMG_20240701_155939.jpg

So it may seem simpler to imagine that flipping the signs would work, but that is false in this case, which may help you understand why you get different matrices and why permutations of rows/columns are needed to keep the property of conference matrices.

 

EDIT: I made a typo in the example above; the matrix calculations are correct, but not the writing of the negative transpose of the conference matrix: a -1 is missing at row 5, column 6.

Doing the matrix calculations with NumPy gives the same results in the two cases here:

Case 1, with the conference matrix:

Victor_G_0-1719937773442.png

Case 2, with the negative conference matrix:

Victor_G_1-1719937812672.png

 


@DrThWillms wrote:

On the other side, I wonder why JMP is the only program where everything is not the same as in other programs, which makes it difficult to compare (see also the randomization topic). I once asked whether I could take a DSD from another program and use it in JMP for evaluation. I no longer see how this could be possible if everything is so different. Why don't you just take the same values as in the article, if they are equivalent anyway? Perhaps this program is not as practical as I thought.


You can use a DSD or any other design from any program and use the Evaluate Designs or Compare Designs platforms to evaluate or compare the designs. 

I don't see the problem with using equivalent matrices. This is exactly the same situation as with fractional factorial designs, except that there you have the choice to create your generator and define your aliases. For DSDs, this would imply selecting the conference matrix from a catalog of matrices before creating the design. That may not be very practical for practitioners, as it would involve several extra steps before creating the DSD, and the matrix choice might raise more questions than answers about which matrix to choose; it would complicate the design creation process and lower the user-friendliness.

 

If this is an obstacle for you, you can still use the matrix you are used to and import it into JMP.
I think you may have a different use of JMP than most users, which could explain why you don't find it as practical as other users do.

 

Hope this complementary answer will help you,

 

 

PS: Please don't forget to give kudos to helpful answers and to "Accept as a Solution" the answers that solve your initial question (and the follow-up ones)!

Victor GUILLER
L'Oréal Data & Analytics

"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)