SVEM (Self-Validated Ensemble Modeling): A Path Toward DOEs with Complex Models and Very Few Runs (2022-EU-30MP-945)

Simon Stelzig, Head of R&D Data-Driven Development, Lohmann

 

In industrial development, especially when experiments are conducted on a production scale, the number of DOE runs required to cover the actual problem is always too high, either in terms of cost or time. At the 2021 Discovery Summit, the introduction of SVEM (self-validated ensemble modeling) caught my attention due to its power to build accurate and complex models from a limited number of experiments. Especially in the area of costly experiments, SVEM opens a way to fit complex models in DOEs with a considerably reduced number of runs compared to classical designs.

 

Three case studies are presented. The first two case studies deal with designs conducted on a production scale, first on a five-factor, 13-run design and then on a four-factor, 10-run design, each with nonlinear problems. Another use case shows a Bayesian I-optimal 15-factor, 23-run design for a nonlinear problem. Especially within the first use case, the excellent predictive accuracy of the models obtained by SVEM led to the discovery of faulty measurement equipment, as measurement results started to deviate from the predicted results. I'm convinced that SVEM has the potential to effectively change the way DOE will be applied in product development.

 

 

Hello, and welcome, everybody, to today's presentation.

Thank you for joining.

I will give you a short talk

about the concept of SVEM, or self-validated ensemble modeling,

and how this concept can be a path towards

DOEs with complex models and very few runs.

I hope I can show you, through some use cases, that this concept is actually

the solution for these DOEs with complex models and very few runs.

But before I start going into detail of my presentation,

I want to give you a short introduction to the company I'm working for.

It's Lohmann.

Lohmann is the producer of adhesive tapes,

so mainly pressure-sensitive adhesive tapes, but also structural adhesive tapes.

The Lohmann Group made a turnover in 2020 of about €670 million.

It consists of two parts.

It's the Lohmann Tape Group, that's the part I work for,

and the joint venture, Lohmann & Rauscher.

The Tape Group had a turnover in 2020 of about €300 million,

and last year, we celebrated our 170th birthday.

Basically, the Tape Group is divided into two major parts.

One is the technical products, where I work.

We offer mainly double-sided adhesive tapes,

pressure-sensitive adhesive tapes,

and structural adhesive tapes.

We are a purely B2B company.

And the other part is our hygiene brand.

Ninety percent of our products are customized products.

We are active worldwide.

We have 24 sites around the globe and about 1,800 employees.

Who are we?

We are the bonding engineers, and as I mentioned before,

90 percent of our products are customized,

so our main goal is to offer our customers the best possible solution,

a customized solution very specific to their problem,

in order to solve it.

In order to do that, we have quite a large toolbox.

We are able to make our own base polymers for pressure-sensitive adhesive tapes.

We can do our own formulation work

to formulate either pressure-sensitive or structural adhesives.

We can do our own coating to form the final adhesive tape.

But we can also do all the lamination, die cutting, and processing,

producing spools and rolls, offering the adhesive solution

in the form the customer actually needs or wants,

in order to best satisfy their needs.

But we can also do all the testing

and help the customer integrate our adhesive solution into their process.

Having this quite large toolbox,

with so many tools to fulfill the customer's needs,

also comes with a lot of degrees of freedom.

This huge number of degrees of freedom,

together with the large value chain we cover, means dealing with a lot of complexity.

One of the solutions for dealing with this complexity

is to use DOE, or design of experiments;

in the development department, we use it

mainly for polymerization, formulation, or coating.

That brings me back to the topic of my talk.

We use DOE to tackle this problem,

this large complexity, so that we can use all our tools

in the best and most efficient way to fulfill our customers' demands,

but we want to make that as efficient as possible,

tackling complex problems with the lowest possible experimental effort.

That, I think, is where SVEM as a concept comes in.

As you all know,

during process or product development,

you run through the same stages.

We start in the lab, doing our experiments on a laboratory scale,

then switch to the pilot scale, and take the final scale-up step

when going to production in order to produce the final product.

Along this path, the effort per experiment,

be it time or cost, dramatically increases.

In order to minimize these costs,

we use DOE to minimize the experimental effort.

But also, as you go from the lab via the pilot plant to production,

the higher the effort per experiment,

the more critical the number of experiments becomes.

Situations where the number of experiments is critical,

for us, but certainly also in other industries,

are when you have to do experiments on a pilot or production scale.

But even if you do experiments on a laboratory scale,

if you have, for example, long-running experiments,

or the analysis of an experiment takes very long,

that might be a situation where the number of experiments is very critical.

In combination with that,

if you have complex models or complex problems

to address,

if you, for example, need a full RSM model or want to do an optimization,

you run into the problem of needing

a large number of experiments to model that complex problem.

The best situation would be to generate DOEs

that allow you to treat a very complex problem,

applying complex models,

while at the same time keeping the number of experiments as low as possible.

Just to give you an example in our case, the production of adhesive tapes.

If I do experiments on a laboratory scale,

I need less than a kilogram of adhesive per experiment,

and I definitely don't need more than one or two square meters of adhesive tape

in order to do the full analysis I want.

If I then move to the pilot scale, depending on the experiment we want to do,

you might need between 10 and 50 kilograms of adhesive,

and you coat maybe 25-100 square meters of adhesive tape per experiment.

If you go even further, to the production scale,

you might need even more than 100 kilograms of adhesive per experiment,

and you coat maybe 1,000 square meters of adhesive tape per experiment.

But at the same time, I still only need about one or two square meters

to do the full analysis of my experiment.

So you are in the unfortunate situation

that 99.9 percent of my product, one or two square meters out of 1,000,

is basically waste.

That's a lot of material used for one experiment,

and it also comes along with an enormous cost per experiment.

Just to give you an illustration,

that's a picture of our current pilot line.

For scale, that's the size of a door.

Even for a pilot line, it's quite large,

so you can imagine the amount of material you need is also quite large.

And those numbers are per experiment.

But even on a laboratory scale,

you might run into situations where the number of runs is critical.

That's either if you have complex models or a large number of factors,

and I'll show you that in the last use case,

because it's a chemical reaction where we want to vary the ingredients

but also the process parameters;

or if you have long-running experiments, where experiments run for more than one day

or the analysis of an experiment takes very long,

and at the same time, you have very limited project time or budget.

In all these situations, it is very desirable to minimize

the experimental effort as much as possible,

that is, to decrease the number of runs as much as possible.

In 2021, last year's Discovery Summit, I came across two presentations.

One by Ramsey, Gotwalt, and Lemkus, and the other one by Kay and Hersh,

talking about the concept of SVEM, or self-validated ensemble modeling,

and for me, that raised the question, "Can that concept actually be the solution

for DOEs with few experiments that, at the same time,

keep the model complexity your problem requires?"

With that said, I want to switch over now to my use cases

in order to hopefully show you that this concept is,

or might actually be, a solution to that problem.

I want to switch over to JMP.

The first example I want to show you today

is where we had to do a design of experiments

on an actual production scale, on production equipment.

The second one was done on a pilot plant, and the third one in the lab.

One piece of information for you: for the first two examples,

the design was created without knowing the concept of SVEM,

but the analysis was then done knowing the concept of SVEM.

The third example was actually done after I knew about the concept of SVEM,

so I designed the DOE

specifically with the SVEM analysis concept in mind.

So I'll go to the first example.

In the first example,

we wanted to do product development by means of a process optimization.

We could only do that process optimization on the actual production equipment,

so we had to do all the experiments on a production scale.

As you can imagine, though, you always have to find production slots,

and you always have to compete with the actual production orders,

the normal daily business.

So the number of runs you can do on actual production equipment

is always very limited and always very costly,

and you always have to fit in around normal production.

What did we want to do? For the first example,

we had five continuous process factors, and we expected nonlinear behavior,

so we knew we had to use

a full quadratic model with the expected interactions,

and we also knew that we had a maximum of 15 runs

due to limited production time and money.

What did we come up with?

Again, as I said before,

that was done before I knew about the concept of SVEM.

When we created the design, we put all the linear and quadratic effects

we needed as necessary in the Custom Design platform,

set the interactions to optional, and that's the 15-run design we ended up with.

And then we started with the experiments.

Unfortunately, only 12 runs could be accomplished

because we had limited time and capacity, and we were running out of time

to do the three remaining runs.

After 12 runs, we basically had to stop and then do the analysis,

but fortunately for us, we could do the remaining three experiments

after the analysis was done.

That put us in the position where these extra three experiments,

conducted after the analysis,

were an actual test set on which we could check the validity of the model

we had created using the concept of SVEM.

The actual data analysis

is shown in these two presentations from last year's Discovery Summit,

and I put in the codes so you can find them very easily on the community page.

The data was analyzed using SVEM,

so I did 100 iterations.

I used a full quadratic model,

used all the interactions possible with these continuous process factors,

and I used the LASSO as the estimation method.

I've put in the script showing how this analysis is done using SVEM,

but I would also refer you to these two presentations,

where it's all written down and explained in great detail.
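
Since the JSL scripts from those talks aren't reproduced here, the following is a minimal Python sketch of the SVEM procedure as I understand it from those presentations, not the talk's actual code: a fractionally weighted bootstrap with anti-correlated training and validation weights, a LASSO fit per iteration whose penalty is chosen by the validation weights, and an ensemble average over all iterations. The function names, the alpha grid, and the default of 100 iterations are illustrative choices.

```python
# Minimal, illustrative SVEM sketch (Python, not the talk's JSL).
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(1)

def svem_fit(X_train, y_train, n_boot=100, alphas=np.logspace(-3, 1, 25)):
    """Fit an SVEM ensemble; returns a predict(X_new) function."""
    # Full quadratic (RSM) expansion: main effects, two-way interactions, squares.
    expand = PolynomialFeatures(degree=2, include_bias=False)
    scale = StandardScaler()
    Xt = scale.fit_transform(expand.fit_transform(X_train))

    n, models = len(y_train), []
    for _ in range(n_boot):
        # Fractional-random-weight bootstrap: every run takes part in every
        # fit, but with anti-correlated training vs. validation influence.
        u = rng.uniform(size=n)
        w_fit, w_val = -np.log(u), -np.log1p(-u)

        # "Self-validation": choose the LASSO penalty that minimizes the
        # validation-weighted squared error on the same runs.
        best_model, best_sse = None, np.inf
        for a in alphas:
            m = Lasso(alpha=a, max_iter=50_000)
            m.fit(Xt, y_train, sample_weight=w_fit)
            sse = np.sum(w_val * (y_train - m.predict(Xt)) ** 2)
            if sse < best_sse:
                best_model, best_sse = m, sse
        models.append(best_model)

    def predict(X_new):
        # Ensemble: average the predictions of all bootstrap models.
        Xn = scale.transform(expand.transform(X_new))
        return np.mean([m.predict(Xn) for m in models], axis=0)

    return predict
```

This also shows why the approach matters here: a five-factor full quadratic model has 20 candidate terms plus the intercept, so with only 12 runs an ordinary least-squares fit would be supersaturated; the weighted LASSO plus ensemble averaging is what makes a model of this complexity estimable at all.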

Coming to the results, I want to show you the profiler,

and what you can see is that we actually needed

second-order terms, so we have a quadratic dependency.

But we also had interactions.

So everything we expected was basically present.

But let's go ahead and check the validity.

You might say, "Okay, you just overfitted the model.

You just used too many model terms. It's too complex.

That's why you have second-order terms. That's why you have interactions."

Well, let's look at the predicted versus actual plot.

For that example,

we're in the good position of actually having a test set:

the three remaining experiments, which the model has never seen,

and the 12 training runs, which are the red dots.

As you can see here,

the predicted versus actual plot is not too bad.

The prediction for these three remaining runs is pretty good.

If I check another response, for example,

the prediction is very, very accurate.

As I mentioned before, these are runs the model has never seen.
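
To make that check concrete in the same hypothetical Python setting: X_test and y_test below stand in for the three later runs (none of the real data is reproduced), and svem_fit is the sketch from above.

```python
# Hypothetical continuation of the sketch above: judge the ensemble on
# the three held-out runs it has never seen (placeholder arrays).
import matplotlib.pyplot as plt

predict = svem_fit(X_train, y_train, n_boot=100)

plt.scatter(predict(X_train), y_train, color="red", label="12 training runs")
plt.scatter(predict(X_test), y_test, color="blue", label="3 held-out runs")
lo = min(y_train.min(), y_test.min())
hi = max(y_train.max(), y_test.max())
plt.plot([lo, hi], [lo, hi], "k--")  # perfect-prediction reference line
plt.xlabel("Predicted")
plt.ylabel("Actual")
plt.legend()
plt.show()
```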

For us, that was quite amazing:

having only 12 runs, which might have crippled this sort of experiment

because three of the runs were missing,

and still getting this very good predictive capability from SVEM.

So in general, prediction was very good.

We could predict the remaining three runs very accurately, except for one response.

Everything fit except that one response; it didn't fit at all.

We thought, "That can't be.

We can't predict 10 of the 11 responses almost perfectly

while the 11th one doesn't fit."

And we thought, "Okay, something has to be wrong.

Our model is good because it fit 10 times,

so it has to fit the 11th time as well."

And we thought, "Okay, something with the measurement has to be wrong,

because it can't be the experiment; the other responses fit,

and our prediction was very good."

We said to the experts, "Something in the measurement has to be wrong."

So our experts dug a little bit deeper into the measurement equipment,

and they actually found deviations in the measurement.

The measurements of that last response

were done a little differently than for the first 12 runs.

After the deviations were corrected, the prediction was again very good.

That means the analysis and the prediction were so good

that we even found deviations in the measurements.

From that first example, I would say SVEM works perfectly,

and gives you great insight through a very good and very accurate model

with a very limited amount of experimental effort.

But you could say, "Okay, you could have also used a different screening design

with, for example, 11 runs,

and you would have gotten exactly the same result."

That might be true.

But then I refer you to the third example.

The second example is very similar.

We wanted to do a DOE on a pilot plant, and we had four continuous factors.

Two of those factors required

specially produced ingredients, because those two

represented a process variation of the operating window

of our current process.

We also expected nonlinear behavior, and we were told not to exceed 10 runs.

In fact, it was strongly suggested to us that fewer runs

would be more than welcome because of capacity issues.

So we ended up with a hybrid production/laboratory approach,

where we only needed six runs

on the pilot or production scale,

boosted to 10 runs in the lab.

That's basically the design we ended up with.

This is the way I created the design:

I set the linear and some interaction terms to "necessary"

in the Custom Design platform,

and the remaining interaction and quadratic terms

were set to "optional."

Again, the creation of the design was done before I knew about

the concept of SVEM, but the analysis was then done using SVEM.

For us, in that particular case, the goal was the following.

Let's say factors 1 and 2 represent the process variability.

What we wanted to achieve was to minimize a certain response,

in that case response number 7, that is, to minimize its variability

without changing factors 1 and 2, because changing the process variability

which we [inaudible 00:17:56] have is very difficult.

But we had two additional factors that were parameters

of the coating equipment, which we could change very easily.

The goal was to minimize response number 7

but keep all the rest basically within the window we wanted.

Again, the analysis was done pretty much the same way as before.

We used SVEM as presented in last year's presentations.

We did 100 iterations,

again used the full quadratic model with all the interactions,

and used LASSO as the estimation method.

Showing you the profiler again: we had second-order terms,

and we had interactions, as you can see here, for example.

They were all present; everything we expected to be there,

like quadratic dependencies and interactions, actually was there.

Our goal was to minimize this response without changing factors 1 and 2,

because that's the process window we're currently operating in.

Across that process range,

if you look at the response here, it changes quite a bit.

Doing the optimization with this model fully operational,

we found out that if you just change, for example, factor number 3,

the variability basically vanishes completely without changing the rest of the factors.

So with only six runs on a production scale, we found

optimal settings [inaudible 00:19:36] to enhance or optimize

the process quite considerably.
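
To illustrate what such an optimization step can look like once an ensemble predictor is in hand, here is a hypothetical continuation of the earlier Python sketch; the factor bounds and the nominal settings of factors 1 and 2 are invented placeholders, not the real process values.

```python
# Sketch: minimize predicted response 7 over factors 3 and 4 only,
# holding factors 1 and 2 fixed at their nominal process settings.
import numpy as np
from scipy.optimize import minimize

predict7 = svem_fit(X_train, y7_train)   # ensemble for response number 7
F12_NOMINAL = np.array([0.5, 0.3])       # invented nominals for factors 1 and 2
BOUNDS = [(0.0, 1.0), (0.0, 1.0)]        # invented ranges for factors 3 and 4

def objective(f34):
    # Assemble a full factor vector with factors 1 and 2 pinned.
    x = np.concatenate([F12_NOMINAL, f34]).reshape(1, -1)
    return predict7(x)[0]

res = minimize(objective, x0=np.array([0.5, 0.5]),
               bounds=BOUNDS, method="L-BFGS-B")
print("factor 3/4 settings:", res.x, "-> predicted response 7:", res.fun)
```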

You might again say, "Okay, four factors.

You could again have just used a definitive screening design."

Or, "Ten runs is not that few runs for doing that."

But that brings me to my final example.

We were doing a design of experiments on the laboratory scale,

and we wanted to optimize or understand in more detail a polymerization process.

We wanted to cover all factors and variations within one single designed experiment.

We ended up with 15 factors:

eight ingredients and seven process parameters.

We didn't have a mixture design;

we fiddled around a little bit with these eight ingredients

so we didn't end up with a mixture design.

But as the experiment itself was very time-consuming,

the number of experiments had to be low, ideally below 30,

because we had limited project time,

and 30 was something we might be able to afford.

But at that moment, I already knew about the concept of SVEM,

so I created the design of the experiment specifically knowing about,

and applying, the concept of SVEM.

So what I did was choose a Bayesian I-optimal design.

We knew there would be nonlinear behavior,

so all the second-order terms and interactions were in there.

What we ended up with is the design you can see here, with only 23 runs

for 15 factors of ingredients plus process parameters,

basically a full RSM model with only 23 runs.
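
To see why 23 runs for a full RSM model in 15 factors is remarkable, just count the candidate terms:

```python
# Term count of a full quadratic (RSM) model in k factors:
# intercept + k main effects + k squared terms + k*(k-1)/2 interactions.
k = 15
n_terms = 1 + k + k + k * (k - 1) // 2
print(n_terms)  # 136 candidate terms, to be estimated from only 23 runs
```

The design is supersaturated by roughly a factor of six, which is exactly the regime the self-validated ensemble approach is meant to handle.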

For me, it was pretty amazing to be able to do that.

Let's see whether it actually did work,

because we weren't so sure that it would.

But we were fairly convinced at the time

that it actually would work.

The analysis was, again, done the same way.

The only difference: I won't show you an interactive profiler for that example,

because having 15 factors and quite complex models,

and having to implement it in JSL, makes it very, very slow.

But I was told that it's going to be implemented in JMP 17,

which will then be much faster,

so I'm really hoping to get JMP 17 as soon as possible.

What we found out is that nonlinearity was present.

We again had interactions, and we had second-order terms that were significant.

So again, we needed the full model.

The experimental verification is currently ongoing.

First results look very promising.

I haven't put them in yet.

In contrast, a classic approach

would have required many more experiments than just 23.

And again, to show you that we had very good prediction capability,

here are some predicted versus actual plots.

This is just for one response; it's very good.

Just to give you another one:

it's not as good as the first one, but still, for us, it's good enough.

So the predictions here are very good.

For the first experiments,

the verification runs, it looks very promising.

Just to show you an image of the profiler,

unfortunately non-interactive:

second-order terms were present, interactions were present,

so everything we expected was basically in the model.

With that, I'm at the end of the use cases.

I want to switch back.

Just to give you a short summary.

Kay and Hersh gave their talk at last year's Discovery Summit

the title "Re-Thinking the Design and Analysis of Experiments"

with a question mark at the end.

From my point of view, you don't need the question mark.

It's simply like that.

I'm convinced that SVEM will change the way DOE is used,

especially if you go for very complex designs

or large numbers of factors, or if you have to run very costly experiments.

At least, that's my opinion.

It opens a new way toward DOEs

with minimal experimental runs and effort,

where at the same time you can gather the maximum amount of information

and insight, without having to sacrifice the number of runs,

the number of factors, or the complexity of your models.

I'm convinced SVEM will change the way at least I use DOE,

and for me, it was pretty amazing to see it in action.

With that, thank you very much for watching,

and I hope I could show you that this concept is worth trying.

Thank you very much.

Comments
wjlevin

For those interested in learning more about SVEM and giving it a try with DOE or observational data, please check out our SVEM page at: 

http://predictum.com/products/svem/

 

There's also a FAQ link there.

 

If you have other questions - contact us at svem@predictum.com

 

 

@shs: Thanks for your talk yesterday, Simon. I enjoyed it and directly had a look at the videos you mentioned:
Re-Thinking the Design and Analysis of Experiments?
SVEM: A Paradigm Shift in Design and Analysis of Experiments

 

It sounds like it would be useful in many situations.

Did you experience, or can you think of, situations where SVEM is not an appropriate approach?

shs

@Benjamin_Fuerst: Hi Benjamin, when I create DOEs, I always check whether a plan applying SVEM for the analysis is much better, in terms of runs, than a conventional plan (like a DSD, etc.). If I can afford more runs, I go for the conventional custom designs but still use SVEM for the analysis to see if I get a more accurate model. What I haven't tried are mixture designs, and I have never tried it with categorical variables. One thing keeping me from using SVEM more often is that the profiler, and especially optimization, become very slow.