Optimization of a Chemical Looping Process by Optimal DOE and Statistical Modeling (2020-EU-45MP-313)

Level: Intermediate

 

Frank Deruyck, Lecturer, HOGENT University of Applied Sciences and Arts

 

In this presentation, an optimal DOE and statistical models are developed to maximize the performance of a chemical looping process with CO2 capture that generates H2 and synthesis gas, potential new resources for energy and the circular economy. The complex fluidized-bed reactor used is subject to several possible interacting and quadratic effects, as well as random noise, so a thoughtful experimental and modeling strategy is necessary. The JMP DOE and analysis platforms offer a wide variety of DOE preparation and model-fitting options. This paper illustrates how to choose among an orthogonal RSM design, a custom DOE, and a definitive screening design (DSD), based on R&D criteria and goals, model objectives, and DOE diagnostics such as power, factor correlation, and the variance profile. Model building proceeds by screening for active factors with stepwise regression (fixed-factor forward selection and all possible models), followed by a REML analysis that separates the random noise variance from the factor effects. Useful models for methane conversion and synthesis gas yield are obtained and supported by additional validation experiments. The profiler's desirability function is used to compute the optimal operating conditions. This work demonstrates that a complex technological process can be optimized with a careful DOE setup and statistical modeling approach.
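For readers who want to try the final optimization step outside JMP, below is a minimal Python sketch of the Derringer-Suich desirability approach that JMP's profiler implements: per-response desirabilities are combined by a geometric mean and maximized over the coded factor space. The quadratic models, factor names, and response bounds here are hypothetical placeholders, not the fitted models from the talk.

```python
# Minimal sketch (not the author's JMP script): desirability optimization
# for two responses -- methane conversion and syngas yield -- both to be
# maximized. Models, factor names, and bounds are made-up placeholders.
import numpy as np
from scipy.optimize import minimize

def desirability_larger_is_better(y, lo, hi, s=1.0):
    """Derringer-Suich 'larger is better' desirability, 0 below lo, 1 above hi."""
    d = (y - lo) / (hi - lo)
    return np.clip(d, 0.0, 1.0) ** s

def conversion_model(x):      # placeholder quadratic model in coded units
    t, f = x                  # e.g. temperature, fuel-to-oxygen ratio (-1..1)
    return 80 + 6*t + 3*f - 2*t*f - 4*t**2

def syngas_yield_model(x):    # placeholder quadratic model in coded units
    t, f = x
    return 60 + 4*t + 5*f - 3*f**2

def overall_desirability(x):
    d1 = desirability_larger_is_better(conversion_model(x), 60, 95)
    d2 = desirability_larger_is_better(syngas_yield_model(x), 40, 75)
    return (d1 * d2) ** 0.5   # geometric mean, as in JMP's profiler

# Maximize overall desirability over the coded factor space [-1, 1]^2.
res = minimize(lambda x: -overall_desirability(x), x0=[0.0, 0.0],
               bounds=[(-1, 1), (-1, 1)])
print("optimal coded settings:", res.x, "D =", -res.fun)
```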

 

Comments
rkenett

Great talk - thank you.

 

The issue I wanted to raise concerns re-randomisation due to difficulties in the original randomisation sequence. This is important if you want to invoke causality arguments.

OK, thanks for the comment. Could you please explain how this could be an issue in my study?

rkenett

Blocking allows you to run experiments with smaller constraints on randomisation, i.e. they are easier to perform. Randomisation is key to making causality statements, which is what you need for improvement and troubleshooting. Your validation experiments took care of some of it, so your study is pretty complete. The issue I raised is about the need for re-randomisation when the sequence of experiments you get from the initial randomisation is difficult to run or would impede the generalisation aspects of the study, especially in a time-related context such as yours. How to account for re-randomisation (giving it another try) is not obvious. David Cox, in his book on the planning of experiments (Cox, 1958), deals with it, and the REML you referred to is one option mentioned by Cox in an early version. The recovery of inter-block information is a key inspiration for the REML work of Patterson & Thompson (1971) and the later improvement by Kenward & Roger (1997): https://journals.sagepub.com/doi/abs/10.1177/0962280214520728?journalCode=smma
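To make the REML point concrete, here is a small hypothetical sketch in Python (not part of the original exchange): a randomized block experiment is fitted with a random intercept per block, and statsmodels estimates the variance components by REML in the tradition of Patterson & Thompson (1971). The simulated data and effect sizes are arbitrary.

```python
# Hedged sketch: recovering inter-block information by fitting blocks as a
# random effect with REML. Data are simulated; statsmodels' MixedLM uses
# REML by default.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_blocks, treatments = 8, ["A", "B"]
rows = []
for b in range(n_blocks):
    block_effect = rng.normal(0, 2.0)            # between-block variability
    for t in treatments:
        y = 10 + (1.5 if t == "B" else 0.0) + block_effect + rng.normal(0, 0.5)
        rows.append({"block": b, "treatment": t, "y": y})
df = pd.DataFrame(rows)

# Fixed treatment effect, random intercept per block, estimated by REML.
model = smf.mixedlm("y ~ treatment", df, groups=df["block"])
fit = model.fit(reml=True)
print(fit.summary())
```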

 

 

Below is a paragraph from my book with Shelly Zacks on Modern Industrial Statistics that refers to this:

"Blocking and randomization in planning of experiments are aimed at increasing the precision of the outcome and ensuring the validity of the inference. Blocking is used to reduce errors. A block is a portion of the experimental material that is expected to be more homogeneous than the whole aggregate. For example, if the experiment is designed to test the effect of polyester coating of electronic circuits on their current output, the variability between circuits could be considerably bigger than the effect of the coating on the current output. In order to reduce this component of variance, one can block by circuit. Each circuit will be tested under two treatments: no-coating and coating. We first test the current output of a circuit without coating. Later we coat the circuit, and test again. Such a comparison of before and after a treatment, of the same units, is called paired-comparison. Another example of blocking is the famous boy’s shoes examples (pp. 97 in Box, Hunter and Hunter, 1978). Two kinds of shoe soles’ materials are to be tested by fixing the soles on n pairs of boys’ shoes, and measuring the amount of wear of the soles after a period of actively wearing the shoes. Since there is high variability between activity of boys, if m pairs will be with soles of one type and the rest of the other, it will not be clear whether any difference that might be observed in the degree of wear out is due to differences between the characteristics of the sole material or to the differences between the boys. By blocking by pair of shoes, we can reduce much of the variability. Each pair of shoes is assigned the two types of soles. The comparison within each block is free of the variability between boys. Furthermore, since boys use their right or left foot differently, one should assign the type of soles to the left or right shoes at random. Thus, the treatments (two types of soles) are assigned within each block at random. Other examples of blocks could be machines, shifts of production, days of the week, operators, etc. Generally, if there are t treatments to compare, and b blocks, and if all t treatments can be performed within a single block, we assign all the t treatments to each block. The order of applying the treatments within each block should be randomized. Such a design is called a randomized complete block design. We will see later how a proper analysis of the yield can validly test for the effects of the treatments. If not, all treatments can be applied within each block, it is desirable to assign treatments to blocks in some balanced fashion. Such designs, to be discussed later, are called balanced incomplete block designs (BIBD). Randomization within each block is important also to validate the assumption that the error components in the statistical model are independent. This assumption may not be valid if treatments are not assigned at random to the experimental units within each block."

Great comment, thanks! Frank