
Deploying Design of Experiments for Hardware Design and Validation Success - Mastering JMP

Published on 02-05-2025 03:01 PM by Community Manager | Updated on 03-27-2025 11:32 AM

Are you looking for an effective way to trade off power and speed to find the sweet spot for your semiconductor chip design? Are you an engineer responsible for design or validation testing to assure that your hardware solutions satisfy the requirements of your customers? Do you seek efficient ways to predict the behavior of a system-on-a-chip (SOC) under real conditions?

 

In this Mastering JMP session, we use DOE and JMP Pro predictive modeling to design and evaluate expected performance for a CMOS semiconductor SOC.  We will:

  • Develop a high-quality timing and power model to predict system behavior and ensure that it meets specifications.
  • Correlate circuit simulation results to silicon characterization measurements to validate the circuit design.
  • Determine optimal SOC operating conditions to achieve desired performance targets.

 

Video was recorded in March 2025 using JMP 18.

 

Questions answered by @mzwald , @cweisbart and @scwise at the live webinar:

Q: How do I calculate the Power of each design?

A: The DOE Design platform has tools to look at the power of each designed experiment, and they work for both existing and new designs. From JMP: DOE > Design Diagnostics > Evaluate Designs. The JMP documentation covers the DOE power calculators.

Q: Is this a full factorial split plot DOE w/ center points?

A: The example is a JMP Custom DOE Response Surface Model setup (not a split-plot DOE design) that includes main effects, two-factor interactions, and polynomial/quadratic (curved) terms. With this type of design, the middle setting (center point) is already included, so we can judge curvature without having to request additional center points.
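To illustrate why a response surface design already contains the center point, here is a minimal sketch of a face-centered central composite-style layout for two coded factors. This is a generic textbook construction for illustration, not the actual design matrix JMP generated in the webinar:

```python
# Face-centered central composite layout for two coded factors (-1, 0, +1):
# factorial corners + axial (face) points + a built-in center point.
corners = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
axial = [(-1, 0), (1, 0), (0, -1), (0, 1)]
center = [(0, 0)]
design = corners + axial + center

for run in design:
    print(run)
# The (0, 0) run is the center point; together with the axial points it
# gives each factor a middle level, so quadratic (curvature) terms are
# estimable without requesting extra center points.
```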

Q: What's the difference between Remove and Exclude?

A: An excluded item still shows up in the list but is removed from the model and has no effect. Remove deletes it from both the model and the list.

Q: What information can be extracted from the desirability section of the prediction profiler?

A: It's a ranking system, with 0 being the least desirable outcome and 1 being the most desirable outcome. You can also save the Design Space Formula exploration and share it with the Profiler and Simulator tables.
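The 0-to-1 ranking follows the standard Derringer-Suich desirability idea: each response is mapped onto [0, 1], and an overall score combines the individual ones via a geometric mean. This sketch shows the general concept, not JMP's exact implementation; the response names and bounds are made up for illustration:

```python
import math

def desirability_larger_is_better(y, low, target, s=1.0):
    """Map a 'larger is better' response onto [0, 1]:
    0 at or below `low`, 1 at or above `target`, ramped in between."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** s

# Hypothetical normalized responses for two outputs:
d_timing = desirability_larger_is_better(0.8, 0.0, 1.0)  # -> 0.8
d_power = desirability_larger_is_better(0.5, 0.0, 1.0)   # -> 0.5
# Overall desirability = geometric mean of the individual scores.
overall = math.sqrt(d_timing * d_power)
```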

Q: Can the model be written mathematically?

A: Yes. You can save the model formula to the data table.
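A saved response surface model formula is just a full quadratic polynomial in the factors. As a sketch, for two factors it has this shape; the coefficient values below are purely illustrative and not taken from the webinar's fitted model:

```python
def predict(x1, x2, b):
    """Full quadratic (RSM) model in two factors:
    y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2"""
    return (b["b0"] + b["b1"] * x1 + b["b2"] * x2
            + b["b12"] * x1 * x2
            + b["b11"] * x1 ** 2 + b["b22"] * x2 ** 2)

# Illustrative coefficients (not from the actual fitted model):
coef = {"b0": 1.0, "b1": 0.5, "b2": -0.3, "b12": 0.2, "b11": 0.1, "b22": -0.05}
print(predict(1.0, 1.0, coef))  # -> 1.45
```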

Q: Does the simulator assume a normal distribution, or can you specify other distributions?

A: The simulator has a variety of different distribution options to choose from. Many find Normal and Uniform useful.

Q: After getting the best design space, do you have to do confirmation/optimization tests?

A: It is always recommended to check your model and confirm its predictions: do confirmation runs and/or let this information inform the next experiment.

A: In this example, I specified three responses: timing, static power, and dynamic power. Total power isn't one of the responses because we're not measuring total power; it is just a formulaic relationship between static and dynamic. We measure static power, we measure dynamic power, and total power is simply the sum of the two. I had four factors, and I defined regression effects using a response surface model.

 

When I click the RSM button, JMP generates the regression effects for the response surface model, in this case the quadratic effects and the two-way interactions on all the factors. To determine how good the design is, JMP shows you the Design Evaluation tools; in this example, you want the power analysis tool, which tells you what the power level is for each regression effect in that design under some assumed signal-to-noise ratio.

 

The default is one, so with a signal-to-noise ratio of one to one, at 95% confidence this DOE design has an 84% chance of detecting the main effect of operating voltage and an 84% chance of detecting the main effect of the PMOS target. If you see low power levels, you might want to consider going back to your DOE design and adding more runs to increase the power level. See below:
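The power numbers above can be understood with a Monte Carlo sketch: under a signal-to-noise ratio of 1 (effect size equal to the error sigma), how often does a significance test at alpha = 0.05 actually detect the effect? This simplified two-level, single-effect simulation illustrates the concept; JMP's Evaluate Designs power analysis works from the full design's model matrix, and the run counts here are made up:

```python
import random
import statistics

random.seed(1)

n_per_level = 8   # hypothetical runs at each factor level
effect = 1.0      # effect size equal to sigma -> signal-to-noise ratio of 1
sigma = 1.0
t_crit = 2.145    # two-sided 0.05 critical t, df = 14

hits, trials = 0, 2000
for _ in range(trials):
    lo = [random.gauss(0.0, sigma) for _ in range(n_per_level)]
    hi = [random.gauss(effect, sigma) for _ in range(n_per_level)]
    diff = statistics.mean(hi) - statistics.mean(lo)
    pooled_var = (statistics.variance(lo) + statistics.variance(hi)) / 2
    t = diff / (2 * pooled_var / n_per_level) ** 0.5
    if abs(t) > t_crit:
        hits += 1

power = hits / trials  # fraction of experiments that detect the effect
```

Adding runs shrinks the standard error of the estimated effect, which raises this detection rate; that is exactly the lever the answer above suggests when the reported power is low.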

 

 


Start:
Fri, Mar 14, 2025 02:00 PM EDT
End:
Fri, Mar 14, 2025 03:00 PM EDT