I often think the term 'simple' is thrown around in these contexts so as not to frighten timid PhD students away from engaging in statistics…!
Ok- so
1. Correct. This is a pretty typical approach used in industry; the first timepoints are close together, and then they get increasingly spread out as you go along.
Now, often a scoping study would be done to identify any inflection point in the trends, and the study timepoints would then be designed to increase the data density around that point. Sometimes, though, especially at the start of a new project or development cycle, you don't have that, so you're flying blind a bit (and tend to fall back on 'tried and tested' approaches).
We are unfortunately 'stuck' with the spread here, at least for this batch.
2. Previous batches had a generally linear drop, with a trend that didn’t really become apparent until later on.
3. Yes, that's an option, though it doesn't solve my immediate problem (needing to assign a preliminary expiry date for this batch).
4. This is the way I'm leaning, I think. That sort of drop would be consistent with previous batches (it's usually pretty linear); it just takes time.
5. Yup again, though I only have one other batch in this 'set'; the trend was similar but 'ever so slightly down', allowing me to estimate a lower-level crossing.
6. We're agreeing a lot here, which is always nice (on the internet, of all places!). What I think we're seeing is assay variation superimposed on (relatively) stable material. The variation gods aligned for the previous batch, presenting me with a slightly negative trend that allowed a relatively straightforward stability prediction (with the usual caveats). This time I've somehow angered the variability gods and no such trend is on offer, leading to my headache.
The noise in these data is also wider than in the previous batch.
So- this is where my head is currently:
I have limited additional data to draw on here: the previous (and only other) batch gave a slightly negative trend, allowing me to assign an 18-month initial stability at this (t = 3 months) point. I'm obviously aware of the perils of over-extrapolation here, but practical considerations must sometimes intrude.
The spread in this batch doesn't allow me to do that. So the only defendable option I currently see is to revert to the previous batch's stability, on the basis that this new batch has not, as yet, offered any indication that it is worse. As such, using the previous batch's stability could be argued to be worst case, and therefore safe/defendable.
I would then monitor and update as appropriate as new data come in, unless there's some clever way to use the added variability in these data to widen the confidence intervals (and thus 'force' the prediction to cross the lower level).
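For concreteness, here's a minimal sketch of the kind of calculation I mean, in Python with made-up timepoints, assay values, and an assumed lower spec limit of 95% of label claim: fit a line to assay vs. time, take the one-sided 95% lower confidence bound on the mean, and report the earliest month at which it crosses the spec limit.

```python
import numpy as np
from scipy import stats

# Hypothetical stability data: months on test vs. assay (% label claim).
# All numbers here (timepoints, assay values, spec limit) are made up.
t = np.array([0.0, 1.0, 2.0, 3.0])         # months on stability
y = np.array([100.2, 99.8, 100.5, 99.6])   # assay results
lsl = 95.0                                 # assumed lower specification limit

# Ordinary least-squares fit: y = b0 + b1 * t
n = len(t)
b1, b0 = np.polyfit(t, y, 1)               # slope, intercept
resid = y - (b0 + b1 * t)
s = np.sqrt(np.sum(resid ** 2) / (n - 2))  # residual standard error
sxx = np.sum((t - t.mean()) ** 2)
t_crit = stats.t.ppf(0.95, df=n - 2)       # one-sided 95% critical value

def lower_bound(tp):
    """One-sided 95% lower confidence bound on the mean assay at time tp."""
    se = s * np.sqrt(1.0 / n + (tp - t.mean()) ** 2 / sxx)
    return (b0 + b1 * tp) - t_crit * se

# Scan out to 36 months for the first crossing of the spec limit;
# that crossing marks the longest shelf life these data would support.
months = np.arange(0, 37)
crossed = [m for m in months if lower_bound(m) < lsl]
print("Lower bound first crosses LSL at month:",
      crossed[0] if crossed else "no crossing within 36 months")
```

Because the residual standard error enters the bound directly, the extra scatter in this batch widens the band and pulls any crossing earlier, which is exactly the 'force the prediction to cross' effect I'm describing above.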
Thoughts?
Jon