statman
Level VII

Re: Dear Dr. DOE, what are some tips on designing experiments for mixed models?

[Editor's note: This post was branched from Dear Dr. DOE, what are some tips on designing experiments for mixed models? ]

 

I run split-plots where I create a factorial of design factors and test them over a factorial of test conditions (typically noise), i.e., a cross-product array. The split is not due to difficulty of changing factors, but an attempt to increase the inference space (via the subplot noise factors) and to quantify the noise-by-factor interactions (robust design). I have not figured out how to get JMP to analyze this appropriately. I have a workaround, but it is cumbersome. An example is creating inks for writing instruments: we run an experiment on formulation, mix time, mix speed, etc., and create subplots of pen angle, pressure applied to the pen, and different types of media (all noise to the ink design). Your thoughts would be helpful. Thanks for sharing your extensive knowledge.


Re: Dear Dr. DOE, what are some tips on designing experiments for mixed models?

You are describing a Taguchi experiment where the inner array is composed of control factors and the outer array is composed of noise factors. Each formulation is used multiple times. The crossed-design approach allows for the estimation of all the control-by-noise interactions. These are important for creating a robust process because you want to find control settings where changes in the noise factors have minimal effects on the response.

However, it is not necessary to cross two factorial designs to accomplish this. You can make your control factors Hard to Change and your noise factors Easy to Change in the Custom Designer, and add all the control-by-noise interactions to the model. The number of whole plots you choose is the number of formulations you want to create, and the total number of runs divided by the number of whole plots is the number of subplot runs within each whole plot.
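For reference, here is a minimal JSL sketch of that Custom Designer setup. It mirrors the DOE Dialog script saved in the example table below; the generic factor names C1–C3 and N1–N2 are placeholders.

// Three Hard-to-Change control factors (last argument 1) define the whole plots;
// two Easy-to-Change noise factors (last argument 0) vary within them.
DOE( Custom Design,
	{Add Response( Maximize, "Y", ., ., . ),
	Add Factor( Continuous, -1, 1, "C1", 1 ),
	Add Factor( Continuous, -1, 1, "C2", 1 ),
	Add Factor( Continuous, -1, 1, "C3", 1 ),
	Add Factor( Continuous, -1, 1, "N1", 0 ),
	Add Factor( Continuous, -1, 1, "N2", 0 ),
	Add Term( {1, 0} ),                        // intercept
	Add Term( {1, 1} ), Add Term( {2, 1} ), Add Term( {3, 1} ),
	Add Term( {4, 1} ), Add Term( {5, 1} ),    // main effects
	Add Term( {1, 1}, {4, 1} ), Add Term( {2, 1}, {4, 1} ),
	Add Term( {3, 1}, {4, 1} ), Add Term( {1, 1}, {5, 1} ),
	Add Term( {2, 1}, {5, 1} ), Add Term( {3, 1}, {5, 1} ),  // control-by-noise interactions
	Set N Whole Plots( 8 ),                    // 8 formulations (whole plots)
	Set Sample Size( 32 ),                     // 32/8 = 4 subplot runs per formulation
	Make Design}
);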

You can use Fit Model to analyze these data. Then use the red triangle menu to save a column with the prediction formula. Take this column to the Prediction Profiler on the Graph menu and designate your noise factors in the launch dialog. You get a plot with your response plus a row for the partial derivative of the response with respect to each of the noise factors. The goal is to get the response value you want while simultaneously making the partial derivatives as close to zero as possible.
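If you prefer to script those steps, here is a minimal sketch that assumes the example table below is the active data table (the exact red-triangle message for saving the prediction formula may vary slightly by JMP version):

// Fit the mixed model with Whole Plots as a random effect and all
// control-by-noise interactions, then save the prediction formula.
fm = Fit Model(
	Y( :Y ),
	Effects(
		:Whole Plots & Random,
		:X1, :X2, :X3, :X4, :X5,
		:X1 * :X4, :X2 * :X4, :X3 * :X4,
		:X1 * :X5, :X2 * :X5, :X3 * :X5
	),
	Personality( "Standard Least Squares" ),
	Run
);
fm << Prediction Formula;   // Save Columns > Prediction Formula

// Launch the Profiler on the saved formula and designate the noise factors;
// this adds a row of partial derivatives with respect to each noise factor.
Profiler(
	Y( :Pred Formula Y ),
	Noise Factors( :X4, :X5 )
);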

 

Copy and run the table script below to create a table with an example of your kind of problem. There is a Profiler script in the table that produces the graph I described above.

 

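// Example split-plot table: 8 whole plots (formulation batches) x 4 subplot
// runs = 32 rows. X1-X3 are hard-to-change control factors, X4-X5 are
// easy-to-change noise factors, Y is a simulated response, and Pred Formula Y
// holds the saved prediction formula used by the Profiler table script.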
New Table( "Bill Ross Example",
	Add Rows( 32 ),
	New Table Variable( "Design", "Custom Design" ),
	New Table Variable( "Criterion", "D Optimal" ),
	New Script(
		"Model",
		Fit Model(
			Effects(
				:Whole Plots & Random,
				:X1,
				:X2,
				:X3,
				:X4,
				:X5,
				:X1 * :X4,
				:X2 * :X4,
				:X3 * :X4,
				:X1 * :X5,
				:X2 * :X5,
				:X3 * :X5
			),
			Y( :Y )
		)
	),
	New Script(
		"Evaluate Design",
		DOE( Evaluate Design, X( Whole Plots, :X1, :X2, :X3, :X4, :X5 ) )
	),
	New Script(
		"DOE Dialog",
		DOE(
			Custom Design,
			{Add Response( Maximize, "Y", ., ., . ),
			Add Factor( Continuous, -1, 1, "X1", 1 ),
			Add Factor( Continuous, -1, 1, "X2", 1 ),
			Add Factor( Continuous, -1, 1, "X3", 1 ),
			Add Factor( Continuous, -1, 1, "X4", 0 ),
			Add Factor( Continuous, -1, 1, "X5", 0 ), Set Random Seed( 596612307 ),
			Number of Starts( 1361 ), Add Term( {1, 0} ), Add Term( {1, 1} ),
			Add Term( {2, 1} ), Add Term( {3, 1} ), Add Term( {4, 1} ),
			Add Term( {5, 1} ), Add Term( {1, 1}, {4, 1} ),
			Add Term( {2, 1}, {4, 1} ), Add Term( {3, 1}, {4, 1} ),
			Add Term( {1, 1}, {5, 1} ), Add Term( {2, 1}, {5, 1} ),
			Add Term( {3, 1}, {5, 1} ), Add Alias Term( {1, 1}, {2, 1} ),
			Add Alias Term( {1, 1}, {3, 1} ), Add Alias Term( {2, 1}, {3, 1} ),
			Add Alias Term( {4, 1}, {5, 1} ), Set N Whole Plots( 8 ),
			Set Sample Size( 32 ), Simulate Responses( 1 ), Save X Matrix( 0 ),
			Make Design}
		)
	),
	New Script(
		"Profiler of Pred Formula Y",
		Profiler(
			Noise Factors( :X4, :X5 ),
			Y( :Pred Formula Y ),
			Profiler(
				1,
				Desirability Functions( 1 ),
				Pred Formula Y << Response Limits(
					{Lower( 25, 0.066 ), Middle( 45, 0.5 ), Upper( 65, 0.9819 ),
					Goal( "Maximize" ), Importance( 1 )}
				),
				Name( "∂Pred Formula Y/∂X4" ) <<
				Response Limits(
					{Lower( -30, 0.066 ), Middle( 0, 0.99 ), Upper( 30, 0.066 ),
					Goal( "Match Target" ), Importance( 1 )}
				),
				Name( "∂Pred Formula Y/∂X5" ) <<
				Response Limits(
					{Lower( -30, 0.066 ), Middle( 0, 0.99 ), Upper( 30, 0.066 ),
					Goal( "Match Target" ), Importance( 1 )}
				),
				Term Value(
					X1( 0, Lock( 0 ), Show( 1 ) ),
					X2( 0, Lock( 0 ), Show( 1 ) ),
					X3( 0, Lock( 0 ), Show( 1 ) ),
					X4( 0, Lock( 0 ), Show( 1 ) ),
					X5( 0, Lock( 0 ), Show( 1 ) )
				)
			)
		)
	),
	New Column( "Whole Plots",
		Character( 1 ),
		"Nominal",
		Set Property( "Design Role", DesignRole( Random Block ) ),
		Set Property( "Value Ordering", {"1", "2", "3", "4", "5", "6", "7", "8"} ),
		Set Values(
			{"1", "1", "1", "1", "2", "2", "2", "2", "3", "3", "3", "3", "4", "4",
			"4", "4", "5", "5", "5", "5", "6", "6", "6", "6", "7", "7", "7", "7",
			"8", "8", "8", "8"}
		),
		Set Display Width( 83 )
	),
	New Column( "X1",
		Numeric,
		"Continuous",
		Format( "Best", 12 ),
		Set Property( "Coding", {-1, 1} ),
		Set Property( "Design Role", DesignRole( Continuous ) ),
		Set Property( "Factor Changes", Hard ),
		Set Values(
			[-1, -1, -1, -1, 1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1, -1, -1, -1, -1,
			-1, -1, -1, -1, 1, 1, 1, 1, 1, 1, 1, 1]
		),
		Set Display Width( 53 )
	),
	New Column( "X2",
		Numeric,
		"Continuous",
		Format( "Best", 12 ),
		Set Property( "Coding", {-1, 1} ),
		Set Property( "Design Role", DesignRole( Continuous ) ),
		Set Property( "Factor Changes", Hard ),
		Set Values(
			[-1, -1, -1, -1, -1, -1, -1, -1, 1, 1, 1, 1, 1, 1, 1, 1, -1, -1, -1, -1,
			1, 1, 1, 1, 1, 1, 1, 1, -1, -1, -1, -1]
		),
		Set Display Width( 53 )
	),
	New Column( "X3",
		Numeric,
		"Continuous",
		Format( "Best", 12 ),
		Set Property( "Coding", {-1, 1} ),
		Set Property( "Design Role", DesignRole( Continuous ) ),
		Set Property( "Factor Changes", Hard ),
		Set Values(
			[1, 1, 1, 1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 1, 1, 1, -1, -1, -1, -1,
			1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1]
		),
		Set Display Width( 53 )
	),
	New Column( "X4",
		Numeric,
		"Continuous",
		Format( "Best", 12 ),
		Set Property( "Coding", {-1, 1} ),
		Set Property( "Design Role", DesignRole( Continuous ) ),
		Set Property( "Factor Changes", Easy ),
		Set Values(
			[-1, 1, -1, 1, -1, 1, -1, 1, 1, -1, 1, -1, -1, -1, 1, 1, 1, 1, -1, -1, 1,
			-1, 1, -1, 1, 1, -1, -1, 1, -1, -1, 1]
		),
		Set Display Width( 53 )
	),
	New Column( "X5",
		Numeric,
		"Continuous",
		Format( "Best", 12 ),
		Set Property( "Coding", {-1, 1} ),
		Set Property( "Design Role", DesignRole( Continuous ) ),
		Set Property( "Factor Changes", Easy ),
		Set Values(
			[1, 1, -1, -1, 1, -1, -1, 1, -1, 1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1,
			-1, -1, 1, 1, 1, -1, -1, 1, -1, -1, 1, 1]
		),
		Set Display Width( 53 )
	),
	New Column( "Y",
		Numeric,
		"Continuous",
		Format( "Best", 12 ),
		Set Property(
			"Response Limits",
			{Goal( Maximize ), Lower( . ), Upper( . ), Importance( . )}
		),
		Set Values(
			[42.7348146566894, 36.189171429355, 44.769471482189, 36.4898442411795,
			59.6102462984316, 46.7297051797999, 57.9866542713854, 49.6693424137701,
			41.4532268569068, 58.2950993674576, 40.3899254847144, 59.4779147187221,
			43.1501858743337, 52.738367825711, 64.6706839593882, 55.2433411262629,
			29.5924301117279, 37.4165044593035, 61.3682127043, 54.2129864134781,
			45.9055183728695, 45.7025946700622, 49.1320040241035, 48.758181908546,
			60.4851311417976, 54.5913985758853, 58.0309758072772, 64.0774047982442,
			48.810716536316, 44.6394149127264, 48.1420425851012, 54.4952535477709]
		),
		Set Display Width( 93 )
	),
	New Column( "Pred Formula Y",
		Numeric,
		"Continuous",
		Format( "Best", 12 ),
		Set Property( "Notes", "Prediction Formula" ),
		Formula(
			49.842461429869 + 4.09946762351868 * :X1 + 2.7889107271487 * :X2 +
			-2.24423598283116 * :X3 + -2.888449088547 * :X4 + 0.982106443023965 *
			:X5 + :X1 * :X4 * 3.28346659528326 + :X2 * :X4 * 1.74098062427039 + :X3
			 * :X4 * 4.15729029616492 + :X1 * :X5 * 1.81202357486527 + :X2 * :X5 *
			1.20487121370369 + :X3 * :X5 * 1.0272331020214
		),
		Set Property(
			"Response Limits",
			{Lower( 25, 0.066 ), Middle( 45, 0.5 ), Upper( 65, 0.9819 ),
			Goal( "Maximize" ), Importance( 1 )}
		),
		Set Property( "Predicting", {:Y, Creator( "Fit Least Squares" )} ),
		Set Selected,
		Set Display Width( 105 )
	)
)

 

 

statman
Level VII

Re: Dear Dr. DOE, what are some tips on designing experiments for mixed models?

Thanks much for your response.  Actually, this is based on D.R. Cox's original concept of cross-product arrays and on Box and Jones ("Split-plot designs for robust product experimentation").  I have tried your approach with some of my experiments, and the resulting prediction plots may be very useful.  What I fail to get is the Normal and Pareto plots to help assess statistical and practical significance.

 

What I have been doing is running Fit Model with a saturated model (full factorial for the control factors and noise factors), creating a new table from the resulting parameter estimates report (right-click, Make into Data Table), adding a column to that table with the plot designation (WP or SP), and then creating Normal plots and Pareto plots for each plot stratum (actually using the absolute value of the estimates for the Pareto plot); a rough scripted version is sketched below.  I'm a fan of Daniel plots for several reasons (e.g., they are graphical and mitigate MSE bias).  This approach has the added benefit of putting the appropriate effects in each plot.  Since the subplot (noise) factors were not changing, and in fact were not present, when the formulation batches were made, there are two distinctly different experimental error distributions. In essence, the experimental error (noise) is partitioned, increasing the precision of detecting factor effects (in both the WP and the SP) while increasing the inference space at a significant reduction in resources (31 DFs with only 8 batches of material needed).
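Roughly, a scripted version of this workflow might look like the following (a sketch only; it assumes the parameter estimates were made into a data table and a "Plot" column with values "WP"/"SP" was added by hand):

// Parameter estimates table from the saturated fit is the current data table.
pe = Current Data Table();

// Absolute value of each estimate, for the Pareto plot
pe << New Column( "AbsEstimate", Numeric, Continuous, Formula( Abs( :Estimate ) ) );

// Normal (Daniel-style) plot of the estimates, separately for WP and SP effects
Distribution(
	Continuous Distribution( Column( :Estimate ), Normal Quantile Plot( 1 ) ),
	By( :Plot )
);

// Pareto plot of effect magnitudes, separately for WP and SP effects
Pareto Plot( Cause( :Term ), Freq( :AbsEstimate ), By( :Plot ) );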

 

In any case, I really appreciate your thoughts on this incredibly powerful methodology.
