
I Have the Power!!: Power Calculation in Complex Modeling Situations (2020-US-30MP-544)

Level: Intermediate

 

Caleb King, Research Statistician Developer, JMP Division, SAS Institute Inc.

 

Invariably, any analyst who has been in the field long enough has heard the dreaded questions: “Is X-number of samples enough? How much data do I need for my experiment?” Ulterior motives aside, any investigation involving data must ultimately answer the question of “How many?” to avoid risking either insufficient data to detect a scientifically significant effect or having too much data leading to a waste of valuable resources. This can become particularly difficult when the underlying model is complex (e.g. longitudinal designs with hard-to-change factors, time-to-event response with censoring, binary responses with non-uniform test levels, etc.). In this talk, we will show how you can wield the "power" of one-click simulation in JMP Pro to perform power calculations in complex modeling situations. We will illustrate this technique using relevant applications across a wide range of fields.

 

 

Auto-generated transcript...

 


Speaker

Transcript

Caleb King Hello, my name is Caleb King. I'm a research statistician developer here at JMP in the Design of Experiments group.
  And today I'll be talking to you about how you can use JMP to compute power calculations for complex modeling scenarios. As a brief recap, power is the probability of detecting a scientifically significant difference that you think exists in the population.
  And it's the probability of detecting that given the current amount of data that you've sampled from that population.
  Now, most people, when they run a power calculation, are usually doing it to determine the sample size for their study. There is, of course, a direct
  tie between the two: the more samples you have, the greater chance you have of detecting that scientifically significant difference.
  Of course, there are other factors that tie into that: the model that you're using, the response distribution type,
  and, of course, the amount of noise and uncertainty present in the population. But for the most part, people use power as a metric to determine sample size. Now, I'll say there are kind of three stages
  of power calculation and all of them are addressed in JMP, especially if you have JMP Pro, which is what I will be using here.
  The first stage is some of those simpler modeling situations where we go here under the DOE menu under Design Diagnostics. We have the sample size and power calculators.
  And these cover a wide range of very simple scenarios. So, if you're testing one or two sample means (maybe an ANOVA-type setting with multiple means),
  proportions, standard deviations: this is most of what people think of when they think of power calculations. So, of course, you go through and you specify again the noise,
  error rates, any parameters, what difference am I trying to detect; and if I'm trying to achieve a certain power, I can get the sample size.
  Or, if I want to explore a bit more, I can leave both empty and get a power curve. Now, of course, again, these are more of your simpler scenarios. The next stage, I would say, is what could be covered under a more general linear model, so I'll exit out of this.
  In that case, we can go here under the all-encompassing Custom Design menu.
  I'll put in my favorite number of effects.
  I'll click continue.
  And I'll leave everything here.
  So we'll make the design.
  And at this point I can do a power analysis based on the anticipated coefficients in the model. So in this case, it might say that for this particular design with 12 runs, I have roughly 80% power to detect this coefficient. If I was trying to detect something a bit smaller,
  I could change that value and apply the changes; of course, I see I don't have as much power. So if that's really what I'm looking for, I might need to make some changes. Maybe I need to go back and increase the run size.
  So, those are the two most common settings where we might do a power calculation. But of course life isn't that simple. You might run into more complex settings: you might have mixed-effects factors, you might run into a longitudinal study that you have to compute power for.
  You might run into settings where your response is no longer a normal random variable: you might have count data, you might have a binary response. You might even have a bounded 0/1-type response, a percentage-type response.
  So, what can you do if you can't go to the simple power calculators, and maybe it's too complex even for the DOE menu to run a power analysis? Well, JMP Pro is here to help, with a tool that we call one-click simulate.
  So the idea here is, we'll simulate data through a Monte Carlo simulation approach to try and estimate the power that you can get for your particular settings.
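  As an aside for readers: the idea behind one-click simulate can be sketched outside JMP. The snippet below is a minimal, hypothetical Python illustration of simulation-based power, not JMP's actual machinery: simulate data under the effect you hope to detect, run a test, and count how often the null is rejected. It uses a one-sample t-statistic with a normal-approximation cutoff purely for simplicity.

```python
import math
import random

def simulate_power(effect=0.5, n=20, sigma=1.0, n_sim=2000, seed=123):
    """Estimate power by Monte Carlo: the fraction of simulated datasets
    in which a one-sample t-statistic exceeds the ~5% cutoff."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sim):
        sample = [rng.gauss(effect, sigma) for _ in range(n)]
        mean = sum(sample) / n
        var = sum((x - mean) ** 2 for x in sample) / (n - 1)
        t = mean / math.sqrt(var / n)
        if abs(t) > 1.96:  # normal approximation to the critical value
            rejections += 1
    return rejections / n_sim
```

  Larger effects or samples push the estimate toward 1; with effect=0 it hovers near the significance level, which is a useful sanity check on any simulation like this.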
  And it's pretty straightforward. There might be a little bit of work up front that you need to do at least depending on the modeling platform.
  But once you've got it down. It's pretty straightforward to do.
  And I'll go ahead and say that this was something I didn't even know JMP could do until I started working here. So, I'm happy to share what I found with you.
  Alright, so we'll start off with a simpler extension of the standard linear model, where we incorporate some mixed effects.
  We'll start with a company that's looking to improve their protein yield (not protons, but proteins) for cellular cultures. We have some continuous factors:
  temperature, time, pH. We also have some mixture factors:
  Water and two growth factors. Now, at this stage, if we stopped here, we probably would still be able to use the power calculator available in the custom design platform.
  Where we start to deviate is that now we introduce some random-effect factors. We have three technicians, Jill, Bob, and Stan, who are representative of the entire population of technicians.
  And they will use at least one of three serum lots, which are again a representation of all the serum lots they could use, so we treat them as random effects.
  We also have a random blocking effect: in this case, the test will be conducted over two days. So I'll show you how we can use one-click simulate in JMP Pro to compute power for this case. I'll click to open the design.
  So this is the design that I've created; let me expand my window here so you can see everything. Now, this might represent what you typically get once you've created the design.
  Again,
  at this point, you could have clicked Simulate Responses to simulate some of the responses. But even if you didn't, it's still okay.
  A trick that you can easily use to replicate that is to simply create a new column. We'll go in (we won't bother renaming it at this point) and we're just going to create a simple formula.
  Go here to the left-hand side, click Random, Random Normal,
  leave everything default, and click Apply.
  And we've got ourselves some random noise data, some simulated response data.
  At this point, I'll right click, Copy,
  and right click, Paste, to get my response column.
  Now all I need is just some sort of response, so simple random noise will work fine here. We're not trying to analyze any data yet. What we want is to
  use the Fit Model platform to create a model for us, which we'll then use to create the simulation formula. The way we do that: we'll go under Fit Model. Now, I've done a bit of work ahead of time here.
  So, I've already created the model here. And just to show you how I did that, I'll go under Relaunch Analysis under the Redo menu.
  So, here you see I have my response, protein yield, all my fixed effects, and my random effects.
  I did everything pretty standard there.
  Now, you see there's a lot going on here. We don't need to pay attention to any of this; we are just interested in creating a model. At this point, the way we do that is we go here under the red triangle menu.
  We'll go under Save Columns. Now we need to be careful which column we select. If I select Prediction Formula, which you might be tempted to do, that's good, but it doesn't get us all the way there, as you'll see.
  If I go into the formula, this is the mean prediction formula. There's nothing about random effects here. So this isn't the column I want; it's not complete, it doesn't contain everything I need. I need to come back,
  go under Save Columns again, and scroll down here to Conditional Prediction Formula; note from the hover help that it includes the random effect estimates, which is the one I want.
  Now, there might be a case where you don't really want to compute power for the random effects, just for the mean model, in which case
  you could have easily gone back to the custom design platform and done it that way. Let's pretend that we're interested in those random effects as well.
  Now we've saved the conditional prediction formula.
  Again, we'll go in and look at the formula.
  And here you can see we have random effects. Now we need to do some tweaking here to get it into the simulation formula that we want. So I'm going to double click here.
  This puts me into the JMP Scripting Language formatting.
  Now, first I'll make some changes to the main effects, and I'm just going to pick some values. So let's see: let's do 0.5 for temperature,
  0.1 for time,
  and for pH, let's do 1.2, a little bit higher.
  For water, I'm going to go even higher, since the mixture factors might have larger coefficients. So I'll do 85 for water,
  90
  for the first growth factor, and let's do 50
  for growth factor two. Okay.
  Alright, so I've made my adjustments to the mean model portion. Again, these are parameter values that you think are scientifically important.
  Now for the random effects, you might be tempted to replace them with something like this: that should be a random effect, so I'll just put a Random Normal here.
  And it kind of looks right, but not exactly. The reason is that this formula is evaluated row by row. What's going to happen is that the first time you come across a technician named Jill,
  it will simulate a random value here, and you'll get a value for that formula evaluation. But the next time you get to Jill, in row six here,
  this will simulate a different value, which defeats the purpose of a random effect; a random effect should hold the same value every time Jill appears.
  Instead, it's going to take on the behavior of something like a random error (which I'll take this opportunity to put here), that is, a value that we do want to change every row. So how do we overcome this?
  I tell you this because I actually ended up doing this the first time I presented this; slightly embarrassing.
  And thankfully, my coworker came along afterward and showed me a trick for how to actually input the random effect appropriately. And here's the trick.
  We're going to go to the top here and type If Row()
  equals one,
  and I'm going to create a variable; call it techJill.
  And now here's where I place it:
  this trick replaces the Random Normal with techJill.
  What this will do is, if it's the first row, we simulate a random value and assign the value of this parameter to that variable.
  After the first row, we don't simulate again, which means techJill keeps the value it was initially given, and it will hold every place we put it.
  So we will do the same
  for Bob.
  As you can see, that will accomplish the task of the random effect.
  We put Bob here. For Stan, things are a little bit easier: we don't have to simulate for him, because random effects should add up to zero in the model.
  And so the way we do that:
  we make his the negative
  of the sum of the other effects.
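  To see why this trick works, here is a hedged Python sketch of the same idea (the technician names come from the demo; the numbers are made up): each random effect is drawn once and held fixed for every row where that level appears, the last level is the negative sum so the effects add to zero, and only the residual noise changes row by row.

```python
import random

def draw_effects(levels, sd, rng):
    """One draw per level, held fixed across all rows; the last level
    is the negative sum of the others so the effects add to zero."""
    effects = {lvl: rng.gauss(0.0, sd) for lvl in levels[:-1]}
    effects[levels[-1]] = -sum(effects.values())
    return effects

rng = random.Random(42)
tech = draw_effects(["Jill", "Bob", "Stan"], sd=2.0, rng=rng)
rows = ["Jill", "Bob", "Jill", "Stan", "Bob"]
# Each technician's effect is reused wherever they appear,
# while the per-row noise is freshly simulated on every row.
response = [tech[t] + rng.gauss(0.0, 1.0) for t in rows]
```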
  We'll do the same thing here for serum lot one.
  Now for this one I'm going to give it a bit more noise;
  let's say there's a bit more noise in the
  serum lots.
  And this is the advantage of this approach: you get to play around with different scenarios.
  I'll input those values here.
  Okay.
Caleb King And again, this one is the negative sum of
  the others. And before I add the other one, I'll go ahead and just add the day blocking effect here, since that makes it easy: day one,
  negative day one.
  And I'll add its random effect here, and I'll say that its random effect,
  I can type,
  is a bit smaller.
  Alright. At this point, we should have our complete simulation formula. If I click OK, it takes me back to the Formula Editor view.
  We should be good to go.
  Alright, so there's our simulation formula.
  Now, what do we do next? We'll go back to our Fit Model report.
  And we're going to go to the area where we want to simulate the power.
  Here I'm going to go under the Fixed Effect Tests box, to this column; it's the p-value. In this case, the original noisy response didn't give us any meaningful p-values. That's okay; we don't care about that.
  We just needed this to generate the model, which we then turned into a simulation formula. I'm going to right click under this column. Now remember, this only works if you have JMP Pro.
  And here at the very bottom is Simulate. So we click that.
  And it's going to ask us which column to switch out. By default it selects the response column, and then it's going to go through and find all the columns with simulation formulas. We want to switch in this one, because this one contains our simulation model.
  Tell it how many samples; we'll do 100.
  I'll give it my favorite random seed.
  And I click OK.
  Wait, about a second or two.
  And there we are.
  So it's generated a table where it's simulated the response, it's fit the model,
  and it's reporting back the p-values. Now, there are some cases where there are no p-values; the fit ran into trouble for that simulated response. That's okay; that happens in simulation, so long as we have a sufficient number to get us an estimate.
  Now, the nice thing about this is that JMP saw that we were simulating p-values, so it said, "I bet you're wanting to do a power analysis," and it's happily provided us a script to do that. So thanks, JMP.
  We run that, and you'll see it looks a lot like the Distribution platform. So it's done a distribution of each of those rows (excuse me, columns), but with an added feature: a new table here that shows the simulated power. And because we simulated it,
  we can read these off as the estimated power. If it weren't 100, if we'd used some other number, then you can look at the rejection rate. So we see here, for our three mixture factors, it looks like we have pretty good power, given everything that we have,
  to detect those particular coefficients. If we go over here to the other three factors, things don't look as good.
  So then we'd have to go back and say, okay,
  maybe we'll go back and see what's the maximum value that I can detect. So I'm going to minimize these,
  minimize this table, and I'll come back to my formula and say, let's do a different...
  let's do something different here.
  What if I change this? This was 0.5; maybe, you know, what if it were higher, about one?
  For time, let's also make it one.
  And for pH, I'm going to go to three. So I'm going to bump things up a bit; you know, hey, can I detect this?
  We'll keep everything else the same, because we know we can detect those. Click Apply, okay; that generated some new values.
  Again, same thing: right click under the column that you want to simulate, click Simulate, and we'll switch in
  the simulation formula and give it a certain number of samples. So we'll stick with that,
  same seed.
  And we'll go.
  Just have to wait a few seconds for it to finish the simulation.
  There we are.
  And we'll run our power analysis again.
  These look to be the same here; we didn't change anything there. So in fact, I'm going to hide these rows... a little too much; here we go. Let's hide these three.
  Let's look at these. We seem to have done better on pH, so a value of one might be the upper range of what we can detect given this sample size.
  But for temperature and time, it seems we still can't detect even those high values. So, okay, what else could we change? What if we double the number of samples? I mean, we are
  calculating this for a sample size. So let's go back, and one way we can do that: we can go to DOE and click Augment Design.
  will select all our factors.
  Select our response.
  Click OK.
  We'll just augment the design.
  And this time we'll double it; we'll make it 24 runs.
  So I'll make the design.
  And it's going to take a little bit of time, so I'm actually going to stop it
  a bit early.
  And let's see, we'll make the table.
  Okay, so now we've doubled the number of runs.
  And
  it only gave us half the responses. That's okay, since we just need a response: I'm just going to take this and copy
  and paste.
  Of course, in real life you wouldn't want to do that, because hopefully you'd get different responses. But again, we just need a noisy response. Go to Fit Model. Now this time we've got to fix things a little bit: I'm going to select these three, go here under Attributes, and say they are random effects.
  Keep everything else the same. Click Run.
  You'll notice I don't yet have my simulation formula, but rather than have to walk through and rebuild it, I can actually create a new column, go back to the old one,
  right click, Copy Column Properties,
  come back, right click, Paste Column Properties, and that copies my formula; it's now ready to go. So let's see what happens under this situation, and we'll keep the values that we initially had.
  So I'll go back. I'll double click this open up the fit model window.
  Go under the Fixed Effect Tests, right click on the Prob > F column (the p-value), click Simulate, and
  I'm not going to change this, because there was only one simulation formula; it found the one I wanted, and it found the right response.
  So I'll just set these as before.
  Let's see what happens in this case.
  Alright.
  Run the power analysis. Now again, I'm not going to worry about these
  mixture effects, because as you can see, we just got better than what we had originally, which was already good. So I'm going to hide them again
  so we can more easily see the ones we're interested in. In this case, pH we knew we were probably going to do better on, because even with the old 12 runs we had pretty good power.
  It looks like we have definitely improved on temperature and time. So if those represent sort of the upper bound of effect sizes we're interested in (or maybe a lower upper bound), this seems to indicate that doubling the sample size might help.
  So this illustrates, first of all, how to do one-click simulate,
  and then how we can use it to do a power calculation. And it encourages you to do something I often did before I came to JMP, which is give people options; explore your options. Doubling the sample size seemed to help with temperature and time.
  Changing what you're looking for seemed to help with pH, and the mixture effects we seem to be okay on. So explore your options.
  That can also include going back and changing the variances of, say, your random effect estimates.
  So, for example, I could come back here (I won't do it), but I could change these values and say, you know, what happens if the technicians were a bit noisier, or the serum lots were less noisy? Try to find situations so that your test plan is more robust to unforeseen settings.
  Okay, so let me clean up:
  go through and close these all out.
  Alright.
  So for the remainder of the scenarios, I'm going to be exploring different takes on how you can implement this. The general approach is the same: you create your design, you simulate a response,
  you use Fit Model (or, in this case, a slightly different platform) to generate a model,
  and then you use that model to create a simulation formula, which you will then use in the one-click simulate approach.
  So now let's look at a case where we have a company; let's say they're going to conduct a survey of their employees, and they want to determine which factors influence employee attrition. Maybe
  they have a lot of employees that are going to be leaving, and so they want to conduct a survey to assess which factors matter, and they want to know how many respondents they should plan for.
  Now, the response is in years at the company, but there are two little kinks. First, an employee has to have worked at least a month before they leave for it to be considered attrition.
  And the other is that the responses are given in years, but maybe we're more concerned about months: how many months. Maybe that's how our budgeting software works, or something.
  And, you know, for employees it might be easier for them to answer how many years they've been there rather than how many years and months they've been at the company.
  So in this case we have interval censoring, because we're given how many years, but that only tells us that they've been there between that many years and a year later. We also have the situation where, if they leave before a year, it will be censored between a month and a year.
  So I'll open up the data table. I've set up a lot already. We've got a lot of factors here; I'll scroll all the way to the end so you can see the responses that we're looking at.
  So we have a Years Low and a Years High. What this means is that if an employee were to respond that they left after six years, their actual time there, in terms of months, is somewhere between six and seven years.
  If they left before a year, then we know that they were there sometime between a month and a year.
  I'm going to click this dialog button here to launch the interval censoring analysis. Here we'll use the Generalized Regression platform. We're going to assume a Weibull distribution for the response.
  We don't put a censoring code here, because with interval censoring the way we handle it is to put both response columns into the Y role,
  which you'll see. And here are all the factors. What you'll see when we click Run is that JMP recognizes this is a time-to-event distribution and says, okay, you gave me two response columns; does that mean you're doing interval censoring? In this case, yes we are.
  So now.
  We're going to go through the same thing. We're going to find the right red triangle; in this case, it's here next to Weibull maximum likelihood. Now here's a really nice thing about
  the Generalized Regression platform. There are already a lot of nice things about it, but here's just some more icing on top.
  When I click this: if we did it like before, we'd have to go in and save the prediction formula, and we'd have to go make some adjustments, make sure it's a random Weibull that's being simulated, and adjust things as needed.
  Generalized Regression, though,
  is aware that you can do one-click simulate, and so it's saying: hey, would you like me to actually save the simulation formula for you, if that's what you're interested in? And yes we are. So we click Save Simulation Formula.
  Let's go back to our table.
  And you'll notice it only simulated one column; I'll talk a bit more about why in a moment. But let's real quick check; we'll go in.
  And there it is. In fact, I'll double click to pull up the scripting language, and you'll see it's already got it set up as a random Weibull; it's got the transformation of the model already in there.
  All you would have to do at this point is change these parameter values to what is scientifically significant to you.
  Okay, now for this purpose I won't do that; I'm just going to leave them be. I will make one change, though, because I want to try and replicate
  the actual situation that we're going to be seeing. Notice here these are all continuous values, when in actuality what we should be getting are nice round whole-year numbers. So the way I can do that:
  this is Years High, so I'm going to create a simple variable, make it equal to the actual continuous time, but tell it to return the ceiling,
  so round up, essentially.
  Apply, okay. And there you have it.
  As you can see, this would tell me that I've simulated Years High. Now,
  to
  see where you do the one-click simulate: it's all here. I'll open up the effect tests.
  If I right click and then click Simulate, I can only enter one column at a time, so I can't drag and select more than one.
  Now, if I were to just replace Years High with the Years High simulation, that looks okay. The problem is this Years Low. This Years Low is being brought in because it was part of the original model.
  But it's the Years Low that you originally used. If we look back, we already see an issue; let me cancel out of this real quick.
  For example, if we were to do that, it wouldn't be able to fit this first one, because the simulated Years High is lower than the Years Low. This Years Low is not tied to the simulated response. So how do we fix that? We need to tie it; we need to make that connection. So I'll go to Years Low.
  I'm going to click Formula. There's already a formula here; I'm just going to make a quick change.
  I'm going to say: if the simulation formula (I double clicked to bring that in)
  is equal to one, so if Years High is one, return 1/12, that is, one month.
  Otherwise, return the simulation value minus one.
  Now click OK and Apply.
  As you can see, it's proper now; it's tied to it.
  So now I can go back
  I can right click, do the Simulate, replace Years High with its simulation formula, and be comfortable knowing that when I do, the Years Low will be appropriate. It will always be one year lower, unless Years High is already one year, in which case it's 1/12.
  So it's now tied to it; it'll always be brought in when we do the simulation.
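  The logic of tying the two censoring columns together can be sketched like this (a hypothetical Python version with made-up Weibull parameters, not the formula JMP saves): simulate a continuous tenure, round up to whole years for Years High, and derive Years Low as one month when Years High is one, otherwise one year less.

```python
import math
import random

def simulate_interval(rng, scale=5.0, shape=1.5):
    """One interval-censored tenure: a continuous Weibull time, rounded
    up to whole years, with the lower bound tied to the upper one."""
    t = rng.weibullvariate(scale, shape)       # continuous time in years
    years_high = max(1, math.ceil(t))          # survey answer, ceiling
    years_low = 1 / 12 if years_high == 1 else years_high - 1
    return years_low, years_high

rng = random.Random(7)
pairs = [simulate_interval(rng) for _ in range(1000)]
```

  Because the lower bound is computed from the simulated upper bound, every simulated interval is automatically valid, which is exactly what the formula edit above guarantees inside JMP.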
  I'll run a quick simulation real quick.
  There we go. It's going a bit slow. So that's a good sign.
  I'll let it finish out
  Alright. So there are our simulations.
  And of course we can run the power analysis. In this case we've got a lot of factors, and I believe there were around 1,470 rows in this table,
  so for a lot of them we have overkill.
  But surprisingly, for some of them we still have issues. And so that might be something worth investigating: maybe we can't detect a coefficient that low.
  We might have to change something about those factors; things to discuss in your planning meeting.
  So that's how you need to work things when you have, in this case, interval censoring. If you had right censoring, you'd have a censoring column.
  Same thing: it would output a simulation of the actual time, and, like I said, you can make some adjustments to that
  to ensure that it matches the type of time you're seeing in your response, or what you expect. And then you'll have to tie your censoring column to the simulation. This is going to happen whenever you have that type of setting.
  Okay.
  Let's clear all this out.
  So let's look at one other one.
  What happens if we have a non-normal response? We've already seen one, a reliability-type response, so we know we can use Generalized Regression for that. Let's explore another one real quick. In this case, we have a non-normal response in
  a test:
  the system is a weapons platform, and the response is a percentage. Now, technically, you could model this as a normal distribution,
  and that might be fine so long as you expect values around the 50-percent point.
  But no: because we want this to be a very accurate weapons platform, we'd hope to see responses closer to 100%.
  And so maybe something like a beta-distribution response might be more appropriate. We do have one other wrinkle: we have three factors of interest, but one of them, the target, is nested within fuse type. So the target factor will depend on the fuse type.
  In this case, we'll run this real quick. Again, we've created our data.
  In this case I simulated some random data, and I did it so that it lands between zero and one. I did that simply by taking the logistic transformation of a random normal.
  OK.
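  That simulation trick, a logistic transform of a normal draw to get a (0, 1) response, looks roughly like this in Python (the coefficients and noise level below are invented for illustration):

```python
import math
import random

def simulate_unit_response(x, coefs, intercept, noise_sd, rng):
    """Linear predictor plus normal noise, squashed through the
    logistic function so the response lands strictly in (0, 1)."""
    lin = intercept + sum(c * xi for c, xi in zip(coefs, x))
    lin += rng.gauss(0.0, noise_sd)
    return 1.0 / (1.0 + math.exp(-lin))

rng = random.Random(5)
y = simulate_unit_response([1.0, -0.5], [0.8, 1.2], 0.3, 0.5, rng)
```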
Caleb King I will copy
  and paste;
  make sure I can paste.
  And again, we'll walk through it.
  Pretty simple.
  We're going to use the beta response. We have our response, and we have our target nested within fuse type.
  Click Run.
  And again, under the red triangle menu: Save Columns, Save Simulation Formula. This is something you can do in Generalized Regression; the regular Fit Model unfortunately cannot do that.
  But we have our simulation formula. I'm not going to make any changes,
  but you could: you could go in, and as you can see if I double click, the structure is already there, even the logistic transformation. So you'd just put in your model parameters.
  Excuse me.
Caleb King Click OK.
  Apply, okay. And again, we'll go down.
  And that's how you do that. So we go down.
  Effect Tests, right click, Simulate,
  make the substitution, and go.
  Alright, so see how easy it is in general? Even if you have non-normal responses,
  you're good to go, thanks to Generalized Regression.
  Okay.
  Now,
  What if you have longitudinal data? This can be tricky, simply because now the responses might be correlated with one another. So how can we incorporate that? Well, it's straightforward.
  In this case, we have an example of a company that's producing a treatment for reducing cholesterol. Let's say it's treatment A.
  We're going to run a study to compare it to a competitor, treatment B, and for the sake of completeness we'll have a control and a placebo group, with five subjects per group. The longitudinal aspect is that measurements are taken in the morning and afternoon, once a month, for three months.
  Now, I'm not going to spend too much time on this, because I just want to show you how you incorporate the longitudinal aspect. In this case, I've already
  created the model and created the simulation formula, so you can use it as a reference for how you might do this. Let's say we have an AR(1) model.
  I'll run this real quick,
  just to show you. So there are all the fixed effects. Notice here we've got a lot of interactions; keep that in mind as I show you the formula, which might look a bit messy.
  I've
  stated that we have a repeated structure, so I've selected AR(1),
  period by days within subject, in the Mixed Model platform.
  And so how do I incorporate that AR(1) into my simulation formula? I did it like this.
  If it's the first row or a new patient (that's what this means: the current patient does not equal the previous patient),
  this is the model that I saved. I changed the parameter values to something that might be of interest. It did take a bit of work, because there's a lot going on here; there are a lot of interactions happening.
  We've got some random noise at the end. But that's all I did: I changed some values here, and I made a lot of them zeros, just to keep things easy.
  If it's not the first row and it's not a new patient, how do we incorporate correlation? All I do is copy that model up to here and add this term:
  just some value, which I believe has to be less than one in magnitude, times the previous entry.
  If it were autoregressive of order two, then you would add something like Lag(
  sim formula, 2),
  and you'd have to make another adjustment: if it's the first row, we have our model; if it's the second row, or we're two places into a new patient, it might look like an AR(1); and if it's anything else, we go back to the full AR(2) form.
  So as you can see, it's very easy to incorporate autocorrelation structures. As long as you know what your model looks like, it should be easy to implement it as a simulation formula.
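  A hedged Python sketch of that row-by-row AR(1) recipe (the mean, correlation, and noise values are made up; here the lag is applied to the previous deviation from the mean so the simulated series stays centered):

```python
import random

def simulate_ar1(n, mean, rho, sd, rng):
    """Simulate one subject's repeated measures with AR(1) errors:
    the first observation is mean plus noise; each later one carries
    rho times the previous deviation, plus fresh noise."""
    values, prev_dev = [], 0.0
    for i in range(n):
        if i == 0:
            dev = rng.gauss(0.0, sd)
        else:
            dev = rho * prev_dev + rng.gauss(0.0, sd)
        values.append(mean + dev)
        prev_dev = dev
    return values

rng = random.Random(3)
series = simulate_ar1(6, mean=200.0, rho=0.6, sd=5.0, rng=rng)
```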
  Okay.
Caleb King I'll let you look at that real quick.
  Finally,
  Our final scenario is a pass fail response, which is also very common. I'm going to use this to illustrate how you can use the one click Simulate to maybe change people's minds about how they run certain types of designs show you how powerful this can be
  Pun not intended.
  Let's say we have a detection system that we're creating to detect radioactive substances, and we're going to compare it to another system that's maybe already out there in the field.
  So we're going to compare these two detection systems. We've selected a certain amount of material in some test objects, ranging from a very low concentration at one to a very high concentration at five, and we're going to test
  our systems repeatedly on each concentration a certain number of times and see how many times each successfully alarms.
  I'm going to open both of these.
  Let's start with this one. This represents a typical design you might see: we have a balanced number of samples at each setting. In this case, we have a lot of samples; they're very fortunate at this place.
  So let's say we're going to do 32 balanced trials at each run, and this is a simulated response, let's say. And then here I've created my simulation formula.
  So I'll show you what that looks like. Again, Random Binomial; they're all the same. I've kept the number here, though I could have referenced the Trials column to keep them consistent, but that's okay.
  Here's my model that maybe I'm interested in
  Okay.
  And here.
  I have a scenario where, instead of a balanced number at each setting, I've put most of my samples here in the middle.
  My reasoning might be that, well, if it's a low concentration, I hardly expect it to catch it; I have reasonable expectations.
  And if it's a high concentration, well, it should almost always catch it. So where the difference is most important to me is there in the middle, maybe at concentrations three or four.
  And so that's where I'm going to load most of my samples, then I'll put a few more here, but put the fewest at these other settings. Let's see how each of these test plans performs in terms of power.
  So I'll run the binomial model script here, which will run the binomial model. There's only one model effect here: the system. We don't put in concentration because we know there's an effect there; the system is what we're interested in.
  Generalized Regression, binomial distribution.
  Run it. Okay, again, the red triangle menu.
  I've already got my simulation formula,
  so actually I don't need to do that.
  So we've already built up a pattern.
  Right-click, Simulate. Okay, everything looks good there.
  Set my favorite random seed.
  Here we are power analysis. Okay, now let's go over here.
  Do the same thing: I'll fit the model, and again, when you have a binomial, you have to put in not only how many times it alarmed, but out of how many trials.
  Run it, and scroll down to the effect tests.
  You can look here to get a hint of what's going to happen.
  Right-click, Simulate. Okay.
  Here are my simulations. I'll get my power analyses scooted over here, minimize, minimize. So here's what you get under the balanced design.
  Notice that we have very low power, which seems odd because we had 32 at each run. I mean, that's a lot of samples; I would have killed for that many samples where I previously worked.
  So you would expect a lot of power, but there doesn't seem to be. Whereas here, I had the same total number of samples; I just allocated them differently.
  And my power level has gone up dramatically. Maybe if I stacked even more here, maybe if I did four and four and then added more to each of these,
  I could get even more power to detect this difference. So not only does this show that it's not always just about changing your sample size (you might not always need more samples; in this case, we had a lot of samples to begin with), but how you allocate them is also important.
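  JMP's one-click Simulate refits the binomial model to each simulated response. As a rough stand-in, here is a Python sketch comparing the two allocation strategies with a simple pooled two-proportion z-test. The detection probabilities and the middle-loaded allocation below are hypothetical, not the values from the demo:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(544)

# Hypothetical detection probabilities at concentrations 1 (low) through 5 (high).
P_NEW = {1: 0.10, 2: 0.35, 3: 0.70, 4: 0.90, 5: 0.99}  # system under test
P_OLD = {1: 0.10, 2: 0.25, 3: 0.55, 4: 0.80, 5: 0.97}  # fielded system

def power(trials_per_conc, n_sim=2000, alpha=0.05):
    """Monte Carlo power of a pooled two-proportion z-test for the system effect."""
    n_total = sum(trials_per_conc.values())
    reject = 0
    for _ in range(n_sim):
        # Simulate total alarms for each system across all concentrations.
        x_new = sum(rng.binomial(n, P_NEW[c]) for c, n in trials_per_conc.items())
        x_old = sum(rng.binomial(n, P_OLD[c]) for c, n in trials_per_conc.items())
        p_pool = (x_new + x_old) / (2 * n_total)
        se = sqrt(p_pool * (1 - p_pool) * 2 / n_total)
        if se > 0:
            z = (x_new - x_old) / n_total / se
            p_val = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
            reject += p_val < alpha
    return reject / n_sim

balanced = {c: 32 for c in range(1, 6)}        # 32 trials at every concentration
loaded = {1: 8, 2: 24, 3: 48, 4: 48, 5: 32}    # same 160 total, stacked in the middle
pow_balanced, pow_loaded = power(balanced), power(loaded)
print(pow_balanced, pow_loaded)
```

  With these assumed probabilities, the middle-loaded plan comes out with substantially higher power than the balanced one, mirroring the point of the demo; the pooled z-test is only a crude substitute for JMP's binomial Generalized Regression fit, but the allocation effect shows up the same way.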
  Okay.
  So,
  I hope you're as excited as I was when I discovered this very awesome tool for calculating power.
  I'd like to leave you with some key takeaways.
  So again, we use simulation. Now, ideally, we'd like a formula, and in the simple cases we do get the advantage of a nice simple formula.
  Even with the regression models, we kind of have formulas helping under the hood. But of course, in the real world, things are a little more complex, and so we typically have to rely on simulation, which can be a very powerful tool, as we've seen.
  Now, of course, one of the key things we have to do with simulation is balance accuracy with efficiency. I usually ran 100 simulations,
  mainly to save on time.
  But ultimately, you might stick with the default of 2500, knowing that it will take some time to run.
  So what I might advocate is: maybe start with 100 or 200 simulations at the beginning, just to get an idea of what's going on. And then if you find a situation
  where it looks like it's worth more investigation, bump up the number of simulations so you can increase your accuracy.
  OK, so maybe you start with a couple of different situations, run a few quick simulations, and then narrow down to some key settings, key scenarios, and then you can increase the number of simulations to get more accuracy.
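  The accuracy half of that trade-off is easy to quantify: a simulated power estimate is just a binomial proportion, so its Monte Carlo standard error shrinks with the square root of the number of simulations. A quick sketch:

```python
from math import sqrt

def power_mc_se(power_est, n_sim):
    """Standard error of a Monte Carlo power estimate (a binomial proportion)."""
    return sqrt(power_est * (1 - power_est) / n_sim)

# Estimating a true power near 0.8:
for n_sim in (100, 500, 2500):
    print(n_sim, round(power_mc_se(0.8, n_sim), 3))  # 0.04, 0.018, 0.008
```

  So 100 simulations place the estimate within roughly plus or minus 0.08 of the truth (two standard errors), which is plenty for screening scenarios, while 2500 tightens that to about plus or minus 0.016 for the final answer.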
  I always argue that power calculations, just like design of experiments, are never one and done.
  You shouldn't just go to a calculator, plug in some numbers, and come back with a sample size. There's a lot that can happen in the design.
  Or that can happen in an experiment. And I think that the best way to plan an experiment is to try and account for different scenarios. So explore different levels of noise
  in your response. In the mixed effects case, play around with different random effect sizes.
  Of course you can explore different sample sizes, but also explore maybe different types of models. So, for example, in the censored case we used the Weibull model; what if we had done a lognormal model?
  Explore these different scenarios; presenting them to the test planners gives you a way to plan your study to be robust to a variety of settings.
  So never just go calculate and come back; always present test planners with different scenarios. It's the same process I used when I
  designed actual experiments. I would present the test planners I worked with different options they could explore; maybe they'd pick an option, or maybe a combination of options. You should always do that to make your plans more robust.
  As I said, they're never one and done.
  All right. Well, I hope you learned something new from this. If you have any questions, you can reach out to me; my email address will probably be provided.
  So I hope you enjoyed this talk and I hope you enjoy the rest of the conference. Thank you.
Comments
ktbrickey

Awesome talk, @calking! This is super interesting and helpful. Thanks for sharing!

I have a question about the error estimates being used in these power calculations. Where exactly are they coming from? For example in the Simulation Formulas saved from GenReg, does the column formula contain an error term based on the randomly generated response data? In the real world, would you use historical error estimates in the Simulation Formula, if they were available?  

calking

Thanks @ktbrickey!!

 

For formulas saved from GenReg, I believe they are estimated from the data, as are all the other parameters used.

 

In general, I would say historical error estimates would be a first choice. You can always consider running a few power calculations with some slightly different values just to see how robust the number of samples is to deviations from the historical value.

 

If that information is unavailable, you can also try to generate an estimate either through elicitation of subject matter expertise or by running a small pilot experiment beforehand. There's also the potential to estimate a value based on similar results and/or materials from the literature. 

 

Determining the parameters to use in the calculation is definitely one of the trickier aspects of the whole procedure. That's why I generally encourage people to run multiple calculations with different values so you don't run the risk of creating an expensive design based on what turns out to be poor choices. 

 

One last tip. A great way to make running the simulations easier is to use table variables. You can create them either from the red triangle menu on the data table or in the formula editor near the bottom. The advantage of the table variable is that you can change the value in the table (up near the top left) and it will automatically update any formulas using that variable as well. I discovered this after I made the video; otherwise, it would definitely have made an appearance :-). 
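The table-variable trick is JMP-specific, but the underlying idea, defining each tunable parameter once so every simulation formula references it, can be sketched in Python (the names and values here are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# One place to hold the "table variables"; every formula below references them.
PARAMS = {"effect_size": 0.5, "sigma": 1.0}

def sim_response(n):
    """Simulated response that always reads the current parameter values."""
    return PARAMS["effect_size"] + rng.normal(0.0, PARAMS["sigma"], size=n)

# Change the variable in one spot and every formula using it updates automatically.
PARAMS["effect_size"] = 0.8
print(sim_response(5).shape)  # (5,)
```

This makes it painless to rerun the same power simulation under several assumed effect sizes or noise levels, which is the robustness exercise recommended above.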

ktbrickey

Thanks @calking! I love the table variable tip - definitely will use that! 
