Late to the party, but there are a number of things that come to mind, though I can't say I understand what exactly you want out of the experiment.
But this sounds interesting... so if you'll indulge me, I have some questions (of course, if you are happy with the responses you have received, I understand):
What response are you trying to understand? Response time? Rate of response (from detecting to prosecuting)?
What is a "target profile"? And why don't you care about the interaction between the design factors and the target profile?
Aren't you trying to choose design factors that affect response time and are robust to the target profile?
It seems you're only interested in the mean. Why not the variation as well? It is impossible for me to think about means without some idea of the variance; it is like giving the score for one team and not the other in a sporting event.
If you are using the mean, shouldn't you first check whether it is the appropriate statistic? If there are unusual data points among the 8 target profiles, the mean might be a poor summary.
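To make that concrete, here is a minimal sketch in Python with made-up response times for 8 hypothetical profiles; one unusual profile drags the mean around while the median barely moves:

```python
import numpy as np

# Hypothetical response times (seconds) for the 8 target profiles
# under one treatment; the last profile is an unusual straggler.
times = np.array([4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3, 12.5])

print(f"mean   = {times.mean():.2f}")      # 5.15, dragged up by one profile
print(f"median = {np.median(times):.2f}")  # 4.15, barely affected
```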
You are replicating the design; why? Do you know what noise changes between replicates? If so, wouldn't you be interested in knowing the effect of that noise, and possibly the noise-by-factor interactions (think robust design)? In that case, treat the replicate as a block and a fixed effect. If not, then I understand using the replicate to get an "unbiased" estimate of the MSE.
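As a sketch of what "replicate as a block and a fixed effect" can look like (Python/statsmodels, with fabricated data; the factor names `A`, `B`, and `rep` and the effect sizes are all my assumptions, not anything from your experiment):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Hypothetical replicated 2^2 design: factors A and B at two levels,
# run twice; response = time from detecting to prosecuting.
df = pd.DataFrame(
    [(a, b, r) for r in (1, 2) for a in (-1, 1) for b in (-1, 1)],
    columns=["A", "B", "rep"],
)
df["time"] = (5 + 0.8 * df["A"] - 0.5 * df["B"]
              + 0.3 * (df["rep"] == 2) + rng.normal(0, 0.2, len(df)))

# Replicate entering as a block and a fixed effect:
block_fit = smf.ols("time ~ C(rep) + A * B", data=df).fit()
print(block_fit.params)

# If the between-replicate noise matters in its own right, look at the
# replicate-by-factor interactions as well (robust-design thinking):
robust_fit = smf.ols("time ~ C(rep) * (A + B)", data=df).fit()
print(robust_fit.params)
```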
A further thought is to treat the design structure as the whole plot and the profile factor as the subplot of a split-plot design. I would think you would want to know more about this source of variance, based on your statement: "All 8 are subject to the same treatment of course, but they vary in other ways and it is a source of variance."
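If the split-plot framing fits, one way to sketch it is a mixed model with a random intercept per whole-plot run (again Python/statsmodels on fabricated data; `run`, `trt`, and `profile` are hypothetical names, and the variance components are invented for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Hypothetical split-plot layout: each design run (whole plot) gets one
# treatment setting, and all 8 target profiles (subplots) are measured
# within it. The random intercept per run carries the whole-plot error.
runs, profiles = 12, 8
df = pd.DataFrame({
    "run": np.repeat(np.arange(runs), profiles),
    "trt": np.repeat(rng.choice([-1, 1], size=runs), profiles),
    "profile": np.tile(np.arange(profiles), runs),
})
whole_plot_noise = rng.normal(0, 0.5, runs)         # between-run variance
df["time"] = (5 + 0.7 * df["trt"]
              + 0.2 * df["profile"]                 # profiles differ
              + whole_plot_noise[df["run"]]
              + rng.normal(0, 0.3, len(df)))        # subplot error

# Random intercept for run separates whole-plot from subplot variance:
m = smf.mixedlm("time ~ trt + C(profile)", df, groups=df["run"]).fit()
print(m.summary())
```

The point of the random `run` term is that the treatment effect is then tested against the whole-plot variance rather than the (usually smaller) profile-to-profile variance, which is exactly the error structure a split-plot design implies.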
"All models are wrong, some are useful" G.E.P. Box