Many thanks for that - it's almost exactly :-) what we're currently doing (GLM personality, Binomial distribution, Probit link) when we examine the data we generate only for whether the assay has worked or failed.
In that analysis we're not looking at anything other than whether the assay has worked or whether it hasn't.
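For concreteness, a minimal sketch of that pass/fail analysis might look like the following (written in Python/statsmodels purely as an illustration, not necessarily the software we actually use), using the worked/failed counts from the second table further down:

```python
# Sketch of the pass/fail analysis: binomial GLM with a probit link on
# worked/failed counts per challenge level. Variable names are placeholders.
import numpy as np
import statsmodels.api as sm

challenge = np.array([1, 2, 3, 4, 5, 6, 7])
worked    = np.array([16, 16, 15, 16, 15, 15, 14])   # successful curve fits
failed    = 16 - worked                              # unsuccessful curve fits

X = sm.add_constant(challenge)                       # intercept + challenge level
y = np.column_stack([worked, failed])                # (successes, failures) per level

probit_glm = sm.GLM(y, X,
                    family=sm.families.Binomial(link=sm.families.links.Probit()))
print(probit_glm.fit().summary())
```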
We were, though, hoping to perform a more discriminating analysis.
We're striving to use a method for assessing how the assay degrades (without explicitly failing, although failure may occur) as we increase the level of challenge.
So - we often obtain data of this form:
challenge units | successful curve fits | unsuccessful curve fits*
        1       |          16           |            0
        2       |          16           |            0
        3       |          16           |            0
        4       |          16           |            0
        5       |          16           |            0
        6       |          16           |            0
        7       |          16           |            0
If we fit a Log 3 Parameter (L3P) model to each of the sigmoid curves in the 16 replicates x 7 conditions above, we find that as the challenge increases from 1 to 7 there is a decrease in growth rate and a decrease in upper asymptote (i.e. in 2 of the 3 parameters generated by the L3P model).
So, as long as the assay works and a model fit is possible, with increasing challenge we see no change in inflection point, a decrease in upper asymptote and a decrease in growth rate, which is great.
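To make the curve-fitting step concrete, here is a minimal sketch of an L3P fit to a single replicate, assuming the usual three-parameter logistic form y = upper / (1 + exp(-growth * (x - inflection))); the x/y values are simulated placeholders, not real assay readings:

```python
# Minimal sketch of fitting a 3-parameter logistic (L3P) curve to one
# replicate's sigmoid data, assuming the usual parameterisation.
import numpy as np
from scipy.optimize import curve_fit

def l3p(x, upper, growth, inflection):
    """Three-parameter logistic: upper asymptote, growth rate, inflection point."""
    return upper / (1.0 + np.exp(-growth * (x - inflection)))

x = np.linspace(0, 10, 25)                               # e.g. cycle number or time
rng = np.random.default_rng(0)
y = l3p(x, 1.8, 1.2, 5.0) + rng.normal(0, 0.05, x.size)  # simulated sigmoid + noise

try:
    params, _ = curve_fit(l3p, x, y, p0=[y.max(), 1.0, np.median(x)])
    upper, growth, inflection = params                   # compared across challenge levels
except RuntimeError:
    params = None                                        # the 'failed fit' case discussed below
```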
All of our problems arise when we're 'between' these two statistical techniques; by that I mean when sporadic (or more frequent) failures such as
challenge units | successful curve fits | unsuccessful curve fits*
        1       |          16           |            0
        2       |          16           |            0
        3       |          15           |            1
        4       |          16           |            0
        5       |          15           |            1
        6       |          15           |            1
        7       |          14           |            2
occur, since no L3P model can be generated in those few failed assays (five in the table above).
I have been wondering whether, for the failures (i.e. those 5 cases), we could substitute the baseline value (i.e. close to zero) for the upper asymptote, 0 for the growth rate and 'missing data' for the inflection point, so that we'd still have data to take forwards from failed assays for use in (M)ANOVA-like tests of differences between levels of challenge severity.
On the one hand it feels like a positive move to do this, as no data are discarded; on the other, I'm unsure whether I'm contravening the normality assumptions (required in (M)ANOVA-like tests) by assigning failed assays values of zero and then using them. Although we do sometimes obtain L3P fits to failed reactions (with poor R2 values), the three parameter values returned do not make sense and are often extreme outliers, so they must be deleted if we're to pursue between-group difference testing with them.
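Purely to show what that substitution would look like in practice (not to claim it is statistically sound), a sketch along these lines, with invented upper-asymptote values and an ordinary one-way ANOVA standing in for the (M)ANOVA-like test:

```python
# Illustration only of the substitution idea: failed fits contribute a
# baseline upper asymptote (0.0 here) instead of being discarded, and the
# challenge groups are then compared. All values are invented.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Upper-asymptote estimates for three challenge levels, 16 replicates each;
# None marks a replicate where the L3P fit failed.
upper_by_challenge = {
    3: list(rng.normal(1.6, 0.1, 15)) + [None],           # 15 fits, 1 failure
    5: list(rng.normal(1.4, 0.1, 15)) + [None],
    7: list(rng.normal(1.2, 0.1, 14)) + [None, None],
}

# Replace failures with the baseline value rather than dropping them.
substituted = {k: [0.0 if v is None else v for v in vals]
               for k, vals in upper_by_challenge.items()}

F, p = f_oneway(*substituted.values())
print(f"one-way ANOVA on upper asymptote: F = {F:.2f}, p = {p:.3g}")
```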