I'm designing an experiment with two factors of interest. I have enough time for 16 30-minute runs. Over the course of each 30-minute run, a team (acting as one unit) is exposed to 4 objects (or items), and each of these 4 objects yields measurable responses that comprise our dependent response variables. For the sake of variety and realism, those 4 objects are randomly drawn from a set of 8 (we could make more).
I need some more information. There are 8 total possible objects? What are you actually measuring? Is it the same Y? Should the results for each object be the same? Do you know what the results of each object should be a priori? Can you normalize the measures (e.g., delta from target)?
One guess is that each measurement of an object is a repeat of the treatment combination (not considered an independent event). You may be able to average the 4 measures and take their variance (likely measurement error) for each treatment combination (so 2 Y's?). In this case, you have 3 total degrees of freedom and a model of Y = Factor1 + Factor2 + (Factor1)(Factor2).
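To make the first guess concrete, here is a minimal Python sketch (the measurement values and factor levels are hypothetical placeholders, not data from this experiment): the 4 repeats within each treatment combination collapse into two derived responses, the mean and the variance.

```python
import statistics

# Hypothetical repeats: 4 measures per treatment combination (2x2 factorial).
runs = {
    ("low", "low"):   [12.1, 11.8, 12.4, 12.0],
    ("low", "high"):  [10.2, 10.5,  9.9, 10.1],
    ("high", "low"):  [11.0, 11.3, 10.8, 11.1],
    ("high", "high"): [ 8.9,  9.2,  9.0,  8.8],
}

# Collapse each run's repeats into (mean, sample variance) -- the "2 Y's".
derived = {
    combo: (statistics.mean(vals), statistics.variance(vals))
    for combo, vals in runs.items()
}

for combo, (m, v) in derived.items():
    print(combo, round(m, 3), round(v, 4))
```

The variance response here captures the within-run spread (likely measurement error), while the mean feeds the 3-DF factorial model above.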
Another guess: if you randomize the 16 treatments/runs as you suggest (still only 4 treatment combinations), you may be able to consider the runs independent events. If so, then you have 15 DF: Y = Factor1 + Factor2 + (Factor1)(Factor2) + Error.
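The second guess can be sketched as a randomized run sheet (factor names and levels are placeholders): 4 treatment combinations replicated 4 times gives 16 runs and 15 total DF, with the run order shuffled so runs can be treated as independent events.

```python
import itertools
import random

random.seed(1)  # hypothetical seed, only for a reproducible ordering

levels = ["low", "high"]
combos = list(itertools.product(levels, levels))  # the 4 treatment combinations
design = combos * 4                               # 4 replicates -> 16 runs
random.shuffle(design)                            # randomize the run order

for run, (f1, f2) in enumerate(design, start=1):
    print(f"Run {run:2d}: Factor1={f1:4s} Factor2={f2}")
```

With 16 runs, the DF budget is 1 + 1 + 1 for Factor1, Factor2, and their interaction, leaving 12 for error.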
In any case, the 4 objects relate to the dependent variable, not the independent variables.
There are 8 total possible objects?
Yes, but those 8 are representative of a theoretically large population. They are objects that differ in a number of characteristics that we don't care about.
What are you actually measuring? Is it the same Y?
We have several dependent variables that will come from each run. For example, timeToDoActionX
Should the results for each object be the same?
Do you know what the results of each object should be a priori?
Can you normalize the measures (e.g., delta from target)?
It's difficult to give specific advice for your particular situation, but I can think of two options to handle the object-to-object variation:
1. If you have hypotheses about how the objects could affect the dependent variables, then you may be able to assign those effects, something like the idea of blocking. In this type of scenario, you would want to "control" which objects are associated with each treatment combination, not completely randomize.
2. If you lack hypotheses, then randomize the objects for each treatment and use those replicates to estimate experimental error.
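Option 2 can be sketched in a few lines of Python (the object names T1..T8 are hypothetical placeholders matching the set of 8 in the original post): for each of the 16 runs, draw 4 of the 8 candidate objects at random, without replacement within a run.

```python
import random

random.seed(7)  # hypothetical seed for a reproducible example

objects = [f"T{i}" for i in range(1, 9)]                 # the 8 candidate objects
assignments = [random.sample(objects, 4) for _ in range(16)]  # 4 per run, 16 runs

for run, objs in enumerate(assignments, start=1):
    print(f"Run {run:2d}: {objs}")
```

The object-to-object scatter within each treatment combination then feeds the estimate of experimental error.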
Thanks @statman ...I wonder if there are any JMP training data files that are analogous to this situation. More specifics to follow.
This is a human-in-the-loop experiment in a lab environment where we can control the factors of interest and are injecting some varying scenarios for operational realism. The various objects are varying target profiles presented to the participant team during a 30-minute run.
Each of several (say, 4) scenario objects within a run is a target that the team tracks for some percentage of that 30-minute window. That's a key response variable - the percentage of time an object is tracked. Some factor levels (with better tracking capabilities) presumably lead to better team performance -- i.e., higher percentages of tracking time.
Within any one of those 16 30-minute runs, the two main factors are fixed for that 30 minute run. Since there are 2x2 = 4 factor level combinations in a factorial design, we have 4 replications of each combination. That's all fine. But it's confusing to me how to properly treat (in JMP, specifically) those scenarios that the operator team is exposed to in each run. Those scenario profiles/objects vary in many ways -- type, speed, altitude, etc -- a combination of categorical and numeric characteristics that are of no particular interest other than to generate variety. They are randomly assigned to each run. If there are a finite number of these targets, do I treat them like a categorical covariate? I need some advice that refers directly to the JMP DOE dialogs if possible.
With each response I am getting a better idea of what you are attempting to do. Let me make sure: you are not interested in understanding the "Noise" (type, speed, altitude, etc.) exhibited by your multiple objects. I personally think you could take great advantage of determining:
1. What are the design factor and interaction effects?
2. Are those effects consistent over changing noise conditions?
3. Are there any interactions with specific noise variables?
It may not be very useful to say "on average these are the design factor effects" if those effects depend on object variations.
Again, you have multiple options. I'm not sure I would call the different object effects "covariates", but you could treat them as blocks (e.g., perhaps random blocks). If you want to understand the effects listed above, you could go the split-plot direction.
If you want to create a hypothetical data table, I could suggest some analysis steps.
Well, not sure how else to describe what we're trying to do. The two factors of interest are categorical, as suggested by their low/high designations, and the responses are typically percentages (track time as a percentage of total time, or successes as a percentage of opportunity).
Sorry, but it would be much more efficient to discuss... In any case, I have attached a hypothetical data set with 4 random blocks of size 4. I have included 2 responses: track time and % of total (you can use actual times, as the total time is fixed). Here are 3 ways to do this analysis, all using the Fit Model platform.
1. JMP default (REML). Analyze>Fit Model (I would add the 2nd order effect to this)
2. Write your own and include the block-by-factor interactions. This is a saturated model, remove terms (using normal and Pareto plots) to get the appropriate model
3. Write your own model. Include just the block and the factorial of factor effects. This will give you typical ANOVA output
These models are in the HypoJournal attached.
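As a sanity check on options 2 and 3, here is the degrees-of-freedom bookkeeping for a table like the one described (4 random blocks of size 4; the arithmetic, not the attached data, is what's shown):

```python
# DF budget for 16 runs: 15 total DF.
n_runs = 16
total_df = n_runs - 1

block_df = 4 - 1          # 4 blocks
f1_df    = 2 - 1          # Factor1, two levels
f2_df    = 2 - 1          # Factor2, two levels
inter_df = f1_df * f2_df  # Factor1*Factor2

# Option 3: blocks + factorial effects, remainder is error (typical ANOVA).
error_df_opt3 = total_df - (block_df + f1_df + f2_df + inter_df)

# Option 2: also include the block-by-factor interactions -> saturated model.
blockx_df = block_df * (f1_df + f2_df + inter_df)
error_df_opt2 = error_df_opt3 - blockx_df

print("Option 3 error DF:", error_df_opt3)
print("Option 2 error DF:", error_df_opt2)
```

Option 2 leaves zero error DF, which is why it is a saturated model and terms must be removed (via normal and Pareto plots) before testing.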
Thanks @statman , I think you're probably onto something with the application of blocking. It's probably appropriate here. Suppose that the 8 different target profiles (T1, T2, ..., T8 in the grid in the original post) really have some distinguishing characteristics, such as speed (fast vs. slow) or size (big vs. little). Even though I don't care about that characteristic's effect on the response, I could block for it if I am concerned it is introducing unwanted noise. I think my confusion about blocking stems from my preconception of blocking and its typical use for 'hard to change' factors -- it implies (to me) that we are constrained by a need to do a bunch of runs where we use only the T1 targets, then a bunch of runs where we use only the T4 targets, etc. -- as if 'target type' were a hard-to-change factor. I am not constrained in that way, but I gather I can block for target type even if it's not kept constant for a range of runs like that. Do I have that right? Sorry this has turned into a DOE lesson for me rather than a JMP-specific question, but I'm trying to reconcile my situation with the many examples of blocking that I come across.