
47 Years (So Far) with Designed Experiments from One-Factor-at-a-Time Designs to GO-SSD Designs (2021-US-30MP-809)

Contributed Paper On Demand Winner

 

Level: Beginner

 

Ronald Andrews, Sole Proprietor, Andrews Practical Stats

 

Like most young engineers at Kodak, I started out running one-factor-at-a-time designs, until we needed to optimize a five-component solution. After successfully optimizing this solution that had two statistically significant three-way interactions, I was hooked on designed experiments. As computer tools progressed, we moved from pencil and paper to in-house mainframe programs. In the 1980s we started using mainframe SAS. In the 1990s we had JMP, but our DOEs were still basically catalog designs. In the late 1990s when JMP added optimal designs, we had some new tools. Experiments didn’t have to be orthogonal. We could run smaller experiments and still get the necessary data, which was very important in the waning days of the Kodak Research Labs when strict quotas were placed on the number and size of experiments. When Bradley Jones and Christopher Nachtsheim introduced definitive screening experiments in 2011, I had moved on to Bausch + Lomb. We have had more than one case where DSDs identified antagonistic interactions early enough to avoid program delays. DOEs in retirement are simpler, focusing on things like optimizing an ice cream recipe and reducing the amount of time to produce maple syrup on a conventional stove.

 

 

Auto-generated transcript...

 


  Ronald Andrews: Hello, good afternoon. My name is Ron Andrews. I'll be talking today about design of experiments.
  So the title is 47 Years (So Far) with Designed Experiments, everything from one-factor-at-a-time designs up to group orthogonal supersaturated designs. A little contact information is down at the bottom.
  Don't hesitate to contact me at the email address listed here for anything related to this presentation, or even tangentially related.
  Let's get the presentation mode going.
  Topics: I mentioned one-factor-at-a-time and group orthogonal supersaturated designs.
  I'll also mention an early factorial I ran. Yates algorithm is an historical oddity. Optimal designs, hugely important.
  Definitive screening designs, along with group orthogonal designs, are also important in screening situations.
  When I first got to Kodak, I didn't know anything about experimental design. All my experiments were one factor at a time.
  There's one particular one that has kind of a colorful display here that I can show. I was working in an area where we design new processes to develop color film and also help maintain them. Color film is what came before digital; some of you may remember that.
  Occasionally, in the processing lab, there was something that could happen that would dilute the color developer.
  The question was how far can we go before we have to dump it and start over again? We thought maybe 5%, maybe 10%. We had reports of people going farther than that and claiming
  acceptable results. We decided to do an extreme test. We started at normal strength and went all the way down to 25% of normal concentrations.
  I need to describe briefly how color films are evaluated. We expose a grayscale step tablet on the film and after processing,
  we use a densitometer to read the optical densities with three different filters, so we get a reading for each of the three color records.
  We plot the optical density versus the log exposure. Now this is a reversal film; it's a positive image, so the high exposure has low densities, and vice versa.
  This is the experiment and its results. With 100% concentration, we get nice parallel curves and we get a good color balance.
  At 75% that's a pretty healthy dilution. The blue and green densities are going up. That was puzzling at first. Why should densities go up
  when we're diluting the step that forms the dyes?
  Well, we looked a little closer and realized there's a competing coupler in that solution that forms a water-soluble dye that doesn't end up in the film.
  It's in there to improve the image structure but it's also the most concentration sensitive component in the solution.
  So when we first start diluting that's what we see first. We see the impact of diluting the competing coupler and we get higher densities, especially in the green and blue records. The color balance has shifted.
  We get to 50%. Green and blue are still going up. Red is starting to bend over in the upper scale.
  The color developer has to go through the blue and green records first, and by the time it gets down to the red record here in the upper scale where it's trying to develop maximum amounts of dye,
  there's not enough color developing agent left to form all the dye that it should.
  Then we get over to the 25%, so it's 25% of normal concentrations. The red record is really taking a nosedive:
  what should be nearly black has turned bright red, kind of a psychedelic effect here. We looked at this and thought, that's kind of interesting. There might be some people who want to do this on purpose; maybe we should publish this method.
  It was the '70s.
  There are a lot of people thinking things like that. I don't believe we ever published it. Anybody who's seen this recognizes the effect.
  First factorial I ever ran was with a project involving Super 8 movie film.
  Now Super 8 was originally home movies, which came before camcorders, which came before digital video, which is what we use on our smartphones today.
  We wanted to introduce Super 8 movies to a more professional clientele. Part of the project was to produce a longer length of film that would last 10 minutes at the professional frame rate.
  So we have this big cartridge that has some large reels inside it with plastic parts rubbing against each other, which generates static.
  A little bit of static generates a little bit of dirt.
  More static discharges and leaves little blue lightning bolts on film. Not good.
  We tried lots of methods to eliminate the static. None were completely successful.
  What we did eventually, was put a UV absorbing layer on top of the film to make the film blind to static. Nearly all of the energy in a static spark is in the UV region.
  Now this worked to make the film blind to static, but it also slowed down the removal of silver.
  After the color developer step, where we form the dyes, we have a bleach-fix solution that removes the silver, so we can see the nice clear color images.
  My job was to optimize this bleach-fix solution.
  There were five components, six if you want to count the water. Coming from the chemists, "We think there are a lot of interactions."
  That was a masterpiece of understatement.
  A brief note on my background. I graduated from a highly respected engineering college with no background in statistics.
  I got to Kodak knowing what an F stop was but I had no idea what an F test was.
  Fortunately, the company had a management services division, with a bunch of statistical experts.
  So I made an appointment with John Lynch, who was one of the DOE experts, and we got together and talked about it and agreed on a two to the fifth central composite.
  A full two to the fifth factorial sounds extravagant, but we didn't know which interactions were going to be significant and we could run these eight at a time, so we could finish this in a day. It was a big job but it was within our capacity.
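For readers who want to see the structure of such a design, here is a minimal Python sketch of the coded (design-unit) runs of a 2^5 central composite: 32 factorial runs, 10 axial runs, and a few center replicates. The axial distance and the number of center points are illustrative assumptions, not the values used in 1974.

```python
import numpy as np
from itertools import product

# Coded (design-unit) runs of a 2^5 central composite design.
k = 5
factorial = np.array(list(product([-1, 1], repeat=k)), dtype=float)   # 32 corner runs
alpha = (2 ** k) ** 0.25                                              # rotatable axial distance, ~2.38
axial = np.vstack([alpha * np.eye(k), -alpha * np.eye(k)])            # 10 axial (star) runs
center = np.zeros((4, k))                                             # center replicates (illustrative count)
design = np.vstack([factorial, axial, center])

print(design.shape)   # (46, 5): 32 + 10 + 4 runs in coded units
```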
  Some notes on logistics in 1974.
  The design came from a catalog.
  The design units were translated into physical units with my slide rule. Now, the HP-35 calculator existed, but in today's dollars it would have cost about 2,000 of them, and I didn't have one.
  I wrote up the instructions by hand and delivered a carbon paper copy to the lab. They processed film samples through all 46 different solutions.
  They gave me back the film samples, so I took them to a different lab to be analyzed for silver.
  I punched the results into IBM cards.
  These used to be the mainstay for data processing, some of you have seen them.
  Then I submitted the deck into a bin for an overnight run on the mainframe.
  I got the paper printout the next morning: tables and line printer graphs. If we wanted smooth curves, we got out our colored pencils and a French curve and drew them in.
  A bit different from what we use today.
  The results,
  as best I can remember them many years after the fact: they're by no means accurate, but they
  are directionally correct.
  Four of the five ingredients had big main effects, and four of the five ingredients also had significant quadratic effects. Then, to make things really interesting, we have some three-way interactions. Two of them actually. Bleaching agent, fixing agent, and bleach accelerator.
  P value is not zero, but it was less than .0001.
  Down here at the bottom, the fixing agent, bleach accelerator, and the grain cracker. A p value of .055 wouldn't always make the cut.
  The chemists looked at this and said, there is a compound that's formed by the addition of these three ingredients. It may be the reason why you're getting the results you're seeing. We left it in the model and subsequent results bore it out. It was a real three-way interaction.
  I've since learned how rare this is, as in 47 years, it hasn't happened again to get two three-way interactions in the same experiment.
  Next step was optimization. These are in design units, so the original formula is zeros down the line. That left 1,400 milligrams of silver per square meter,
  roughly one-third of the original amount coated in the film.
  It's way too much. Project this film and you see big black dots all over the pictures.
  With our optimized formula, we went up in three of the ingredients and down in one of them.
  We left the preservative alone; it had no effect. Got down to 22 milligrams of silver.
  That's not zero, but you can't see it.
  And the name of the game in photographic work is that if you can't see the difference, it doesn't make a difference.
  Just to be safe, we extended the soak time a bit.
  The personal impact of this experiment was huge. I could have spent months trying to optimize this solution without getting the results I got out of this one experiment, so I was hooked.
  I looked up John Lynch's training schedule and signed up for his next available DOE class.
  I signed up for the advanced class and looked for other chances to learn more about DOE. I'm still looking for more chances to learn about design of experiments.
  Discovery Summits have been a big part of that in more recent years.
  An oddity from the '70s.
  I started doing more factorial experiments, not as big as the one I just showed, frequently something like a two to the third or the fourth.
  For experiments of that size, I didn't bother to punch the results and submit to the mainframe. I used Yates algorithm, pencil and paper, and I could calculate the regression coefficients in 15 or 20 minutes.
  You don't need to know this. It's a historical oddity. If you're interested look it up, but in today's world, you don't need to know about Yates algorithm.
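For the curious, here is a minimal Python sketch of Yates' algorithm. It reproduces the pencil-and-paper procedure: with the responses in standard order, repeatedly form pairwise sums and differences, then scale the resulting contrasts to get the regression coefficients. The example responses are made-up numbers for illustration only.

```python
import numpy as np

def yates(y):
    """Yates' algorithm for a full 2^k factorial.

    y: responses in standard (Yates) order, length 2^k.
    Returns coefficients in +/-1 coding, in standard order:
    [mean, A, B, AB, C, AC, BC, ABC, ...]
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    k = int(np.log2(n))
    col = y.copy()
    for _ in range(k):
        sums = col[0::2] + col[1::2]    # pairwise sums fill the top half
        diffs = col[1::2] - col[0::2]   # pairwise differences fill the bottom half
        col = np.concatenate([sums, diffs])
    return col / n                      # scale contrasts to regression coefficients

# Made-up responses for a 2^3 factorial in standard order: (1), a, b, ab, c, ac, bc, abc
print(yates([60, 72, 54, 68, 52, 83, 45, 80]))
```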
  In the 1980s,
  we got mainframe SAS. Designs still came from catalogs but we had PROC GLM to analyze the experiments.
  Then we got an internally written DOE generator.
  It still got designs from catalogs, but it was computerized and integrated into the same suite of programs as the SAS analysis, and later a response optimizer.
  And then multiple responses,
  which is important with photographic film, where there are about 150 things we needed to test and make sure we got right.
  DOEs got bigger.
  In the '90s
  JMP arrived. Now, I know Version 1 was introduced in 1989, but as far as I know, it didn't get to Kodak until 1990, so that's what I count.
  We got a program called Curves, internally written; it analyzed those characteristic curves I showed before as eigenvectors, and I'm sure there are several people here who could show us all how to do this with JMP now.
  Curves was paired with a formula-on-demand (FOD) computer system that operated an experimental coating machine.
  Now traditionally,
  to make variations in a photographic film,
  there are about a dozen layers on most color film products, usually three passes through coating machines.
  To make changes, we stopped the coating, changed the pot, repurged the lines, and restarted the coating.
  With formula on demand, for the layers that we wanted to produce variations in, we would split the components into different parts, mix them on the fly, and then we could make changes by changing pump rates.
  We didn't have to stop and then repurge the lines and restart the coating. We could get many more parts in our coating in the same amount of time.
  This led to some larger experiments. The biggest one I ever did was a two to the 17 minus 11 central composite.
  107 factor combinations, including some replicates, all run in one day using the FOD system.
  This provided a fairly complete model of the red record for a new product under development; we had other experiments to fill in the green and blue records.
  That sounds big but a colleague ran a two to the 17 minus nine central composite. He covered all three color records, 300 factor combinations.
  It took him six weeks and $50,000 worth of mainframe time to analyze it, but compared to the six months and close to a half million dollars in experimental work, it was a bargain.
  The 2000s came and Kodak
  incurred a reversal of fortune. FOD was dismantled; the company couldn't support the dedicated crew anymore.
  Support for the Curves program was removed. It was still there, if you knew how to run it. If you didn't, maybe you could find somebody to help.
  Fortunately JMP came along with a DOE function that took over a lot of the DOE design and analysis work.
  Courtesy of management, we had limits on the number and size of experiments.
  In 1999
  I ran 71 coating experiments and in 2000, I was limited to 25 for the entire year with no more than 12 parts in each one. No more big factorials.
  Efficiency was at a premium, so optimal designs, which came along at this time, were extremely valuable.
  We found we didn't have to have perfectly orthogonal balanced experiments. These are good features, but they're not absolutely necessary, so we packed those experiments with every factor we could get in there
  to get as much information as possible.
  We also engaged in some
  piggybacking. I was working on a project to replace the fluoro-surfactant in the simultaneous overcoat.
  That affected the physical properties of the top of the film.
  It didn't affect the photographic properties. I could change the photographic layers and it had no effect on the physical properties of the surface, so I could run two completely separate experiments on the same set of coating parts. It was completely confounded but we could test them separately.
  Not recommended unless you really understand your process. Kodak at that point had 120 years of experience with experimental coatings.
  We knew directionally what was likely to happen. Our experiments were largely calibration: how big was the effect going to be?
  2005 came.
  Kodak said it was time to go, so I found my way over to an older company also founded in Rochester, namely Bausch and Lomb.
  Now this clip I've used before: Discover the unexpected. It's more important than confirming the known. I firmly believe that.
  But in a regulated industry, confirming the known is often required.
  The FDA requires us to confirm every significant process parameter at its spec limits. Sometimes we have to run all...maybe not all, but most...combinations of the spec limits and make sure the product is acceptable.
  But we still needed efficiency. We still needed efficient experiments, so we made use of those optimal experiments. Some other differences. At Kodak, there was help I could call on, there were some JMP experts there. At B&L, initially, I was the only one in our process engineering group.
  I got some more recruits and they started coming to me for help, so I had to learn the software better.
  I quickly learned more about the help facilities within the program and available by phone and by email, and I sincerely appreciate the people on the receiving end of those emails and phone calls.
  I also managed to leverage my position into attendance at more than one Discovery Summit,
  easily the best technical conferences I've ever attended. I learned a lot. I usually focused on DOEs, but I learned a lot in addition to just DOEs. Many different things that were worthwhile there.
  Some of the highlights during my years at B&L: I once demonstrated a four-way interaction...kinda sorta.
  There was a contact lens under development that had a unique defect that could be cured four different ways.
  If we piled the four factors into one factorial, the math shows up as a four-way interaction. Once you cure it, you can't cure it again with the other factors. It was not strictly speaking a physical chemical four-way interaction, but mathematically it was.
  Only four-way interaction I ever encountered.
  I will always remember the 2011
  Discovery Summit in Denver. It was the first one I attended, and that's when Bradley Jones and Chris Nachtsheim introduced definitive screening designs.
  I looked at that and knew immediately what I was going to use it for. I needed to verify spec limits: the weigh-out specs on all the components that go into the monomer mix had to be verified.
  I used a DSD for that purpose. Not only was it easy to verify all the spec limits and combinations of the spec limits,
  but I had a model that I could use to do some what-if experiments about what if we tighten our specs, what's likely to happen?
  And we did some of that, not because we needed to, but because we could...we had the capability to tighten the specs and we could benefit from it.
  I did some more piggyback experiments.
  Last year I was working on a project
  involving a new daily disposable contact lens.
  Daily disposable lenses don't sell for anywhere near what a 30-day lens sells for, but the manufacturing cost is not that much lower.
  We make it up in volume, but still the profit margins are squeezed. So we were spending a fair amount of time on the cost aspects of the program. A colleague was working on the yield at the demolding step.
  I was trying to reduce the amount of solvent used at the extraction step, where we wash the unreactive monomers out of the lenses.
  We had three different piggyback experiments, anywhere from 50 to 75 parts where my colleague ran yield experiments, I ran extraction experiments.
  I needed 300,000 lenses for each one of these, in order to verify that my extraction changes were going to be safe and effective.
  The only way I got that much time on the production equipment where we could run this kind of test, was by sharing time with other experiments. There again it was confounded, but we could test separately and get different parameters and both get use out of the same experiment.
  We did get our cost cutting changes validated in time for product launch, and the product is on the market now: Infuse contact lenses, if anybody's in the market for such a thing.
  Now, I mentioned group orthogonal supersaturated designs early on.
  When I first heard about these at a Discovery Summit, I think it was three years ago,
  I was kind of like a kid with a hammer looking for a nail. I thought, you know, that's another great screening design. I gotta try this out.
  I've run a lot of simulated experiments. I've demonstrated the advantages; I've also demonstrated pitfalls if you have too many significant effects.
  I haven't actually run one yet. So far, in screening situations, I compared this to DSD and I decided in the cases I'd come up with so far, the opportunity to estimate some second order effects was worth more than the smaller experiment I could get with a GO-SSD.
  So some DOE recommendations. They're not my recommendations. I'm parroting the experts here.
  First of all, take those catalogs and put them in the circular file. You don't need them.
  For screening situations consider both DSD and GO-SSD.
  Figure out which one is best for your situation.
  For most experiments, including some screening situations, an optimal design is going to be the best choice.
  Now, in terms of which optimality criterion to use.
  If you are concerned, look up some of the presentations Bradley Jones has given at past Discovery Summits and you'll get a great
  presentation with a lot of detail.
  If you don't want to worry about it, don't worry about it. The advantages of optimal designs are large compared to the small differences between the various optimality criteria.
  You're not going to be in bad shape just accepting the defaults.
  Finally, always, always, always use the design evaluation tools before you start running the experiment.
  I always calculate the power, I always look at the correlation heat map.
  If it's a crucial experiment, I will also
  run a number of simulations with different levels of noise, and make sure that I have a really good chance of demonstrating...detecting the kind of effects that I need to be able to detect.
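As an illustration of that kind of simulation check, here is a minimal Python sketch (not JMP's built-in evaluation tools). It assumes a small two-level design, a smallest effect size of interest, and a few candidate noise levels, and reports how often an ordinary least-squares fit would flag the effect as significant; all the numbers are hypothetical.

```python
import numpy as np
from itertools import product
from scipy import stats

def simulated_power(X, true_beta, sigma, effect_index, alpha=0.05, n_sim=5000, seed=0):
    """Fraction of simulated experiments in which the chosen coefficient is
    declared significant, given an assumed noise standard deviation sigma."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    xtx_inv = np.linalg.inv(X.T @ X)
    hits = 0
    for _ in range(n_sim):
        y = X @ true_beta + rng.normal(0.0, sigma, size=n)   # simulate one experiment
        beta_hat = xtx_inv @ X.T @ y                         # ordinary least squares
        resid = y - X @ beta_hat
        s2 = resid @ resid / (n - p)
        se = np.sqrt(s2 * xtx_inv[effect_index, effect_index])
        p_val = 2 * stats.t.sf(abs(beta_hat[effect_index] / se), df=n - p)
        hits += p_val < alpha
    return hits / n_sim

# Hypothetical example: a 2^3 factorial, main-effects model, and a smallest
# effect of interest of 1.0 (in coded units) on factor A.
runs = np.array(list(product([-1, 1], repeat=3)), dtype=float)
X = np.column_stack([np.ones(len(runs)), runs])              # intercept, A, B, C
true_beta = np.array([10.0, 1.0, 0.0, 0.0])
for sigma in (0.5, 1.0, 2.0):
    print(f"sigma={sigma}: power ~ {simulated_power(X, true_beta, sigma, effect_index=1):.2f}")
```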
  Borrowing a line from King George in Hamilton: "What comes next?"
  Well, I retired from Bausch & Lomb in January so I'm no longer getting paid as an engineer.
  Still, an engineer at heart.
  I've been working on developing my lifelong photography hobby as a business.
  A few weeks back, I ran a three by three factorial.
  I was concerned about the performance of
  a new wide angle lens that I had purchased. I was not satisfied, so I purchased another one.
  I compared three wide angle lenses including an older one. I used three different apertures and looked at focus uniformity as a response and I found out there's absolutely nothing wrong with the lenses. The problem was with the photographer. I needed to do a better job of focusing.
  Important and useful information.
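For anyone who wants to reproduce that style of analysis outside JMP, here is a hedged sketch of a two-way ANOVA on a 3x3 factorial like the lens test. The lens names, apertures, and uniformity numbers are placeholders, not the actual data; a real test would include replicate frames so the interaction can be estimated, as the sketch assumes.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Placeholder data: 3 lenses x 3 apertures, 2 replicate frames per cell.
rng = np.random.default_rng(1)
lenses = ["lens_1", "lens_2", "lens_3"]
apertures = ["f/2.8", "f/5.6", "f/11"]
rows = [(lens, ap, rng.normal(50, 5))
        for lens in lenses for ap in apertures for _ in range(2)]
df = pd.DataFrame(rows, columns=["lens", "aperture", "uniformity"])

# Two-way ANOVA with interaction on the focus-uniformity response.
model = smf.ols("uniformity ~ C(lens) * C(aperture)", data=df).fit()
print(anova_lm(model, typ=2))
```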
  I expect to stay involved with data analysis. I've done some data analysis of COVID trends. I look at
  water flow data in streams and rivers
  so I know when certain waterfalls around here are going to be in good shape. I also analyze some demographic data for our church.
  I'm probably not going to get paid for much of this work. I'm open to the idea of consulting but not actively pursuing it, so not likely to happen.
  So this has been a whirlwind tour through my 47 years as an engineer working on various kinds of experiments.
  I'd be glad to entertain any questions. Thank you very much.
Comments

Thanks for sharing your story and advice!

Excellent story of your DOE history. 

tpnicholas

Ron, 

Enjoyed your presentation, brings back old memories of mainframe SAS. Thanks for sharing.     

dctrindade

Thanks for an interesting presentation that brought back memories of the old days of analyzing data with pencil and paper, punch cards, and manual plotting with French curves.