The Challenge of Optimizing Products and Processes
Jul 9, 2008 8:55 AM
(NOTE: This is part one of a three-part series on stochastic optimization.)
To get to the top of a hill, you just keep going up. However, hills can have subpeaks, so sometimes you have to hunt around to keep going up. But going up is still the basic idea. This is what optimization is: finding the top of the hill. Operations research is about solving optimization problems more generally, on higher-dimensional hills that might have fenced-off areas that are forbidden.
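In code, the basic hill-climbing idea is just a loop that accepts any nearby point that is higher. This is only an illustrative sketch; the hill function, step size, and iteration count are all made up:

```python
import random

random.seed(0)  # for reproducibility

def hill_climb(f, x, step=0.1, iters=1000):
    """Keep proposing nearby points; move only when the new point is higher."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate  # we went up, so stay there
    return x

# A one-dimensional "hill" whose single peak sits at x = 2.
hill = lambda x: -(x - 2) ** 2
top = hill_climb(hill, x=0.0)  # ends up very near x = 2
```

With a hill that has subpeaks, this greedy loop can get stuck on one of them, which is exactly why you sometimes have to hunt around.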
Now imagine that instead of climbing the hill, you ride on a helicopter; you just tell the helicopter where to go, and then you parachute down from 5,000 feet above that location. Sounds easy. But there are clouds, so you can't see the hill itself, and there are random gusts of wind that can blow you hundreds of meters in any direction. Also, you have to land above a certain altitude, or you will get sick. You do get a few trial drops at different GPS locations, but you have to live around that target location, and you get one jump a day. Welcome to the world of stochastic optimization. Getting to high altitude is now a very messy business.
Why study something that behaves this strangely and is this frustratingly difficult to understand?
Well, it turns out that the future quality of the world's products and processes depends on just this type of situation. We try to optimize our products and processes, but then the input factors vary, and the products and processes are no longer optimal. The inputs might change due to environmental conditions: You know how to grow the best-yielding corn crop, but unfortunately, you can't control the weather well enough to get the optimal yield. The inputs may vary due to natural variation: Your ingredients are the output of some other process, and you can't get all the variability out of that process. You can often control where the center of each factor's distribution lies, but you can't reduce the variation around it.
The literature on this kind of optimization is not particularly rich. The field of study for this application is called robust process engineering — the struggle to make products and processes that behave well in the face of variation.
The first good attempt at solving this kind of problem came from a Japanese engineer, Genichi Taguchi. He said that you construct an experiment in two directions. There are the Control factors, which you assume are fixed and not subject to random variation. Then there are the Noise factors, which in production you can't control completely; they have random variation. In an experiment, you might be able to control them. For example, you can control the weather for a corn crop by growing it indoors and controlling light and water. (In agriculture, that kind of experimental facility has a name: a phytotron.) Then you cross the Control-factor design with the Noise-factor design. Next you compute the noise variation across the Noise design for each Control setting. Then you optimize with respect to both the mean and the variation, or some combined measure, a so-called signal-to-noise ratio.
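As a sketch of the crossed-array idea, here is a toy version in Python. The response function is invented for illustration (its noise term shrinks at higher Control settings), and the "nominal-is-best" signal-to-noise ratio used here, 10·log10(mean²/variance), is only one of the several S/N forms Taguchi proposed:

```python
import math
import statistics

# Hypothetical response surface: one Control factor c and one Noise factor n.
# The noise term shrinks as c grows, so larger c gives a more robust process.
def response(c, n):
    return 10 + c - 0.5 * c**2 + n * (1 - 0.3 * c)

control_levels = [0.0, 1.0, 2.0]   # inner array: Control factor settings
noise_levels = [-1.0, 0.0, 1.0]    # outer array: Noise factor settings

sn_ratios = {}
for c in control_levels:
    # Cross this Control setting with every Noise setting.
    ys = [response(c, n) for n in noise_levels]
    mean, sd = statistics.mean(ys), statistics.stdev(ys)
    # "Nominal-is-best" signal-to-noise ratio: 10 * log10(mean^2 / variance).
    sn_ratios[c] = 10 * math.log10(mean**2 / sd**2)

best = max(sn_ratios, key=sn_ratios.get)  # Control setting with highest S/N
```

In this toy case the S/N ratio picks c = 2, whose mean response matches c = 0 but whose variation across the Noise settings is much smaller.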
This worked. Taguchi clubs sprouted up all over the world, and engineers learned Taguchi’s method.
Some Western statisticians looked at the method and said, “We can do better.” Various schemes emerged along with a recognition of what you should be looking for, which was this:
There may be a lot of places on the hill that have good altitude, but among those good places, try to find the place that has the widest, flattest area around it. Then when you are randomly dropped around that target, you are likely to land in a narrow range of altitudes.
For example, below you see the contours around Longs Peak in Rocky Mountain National Park. If you want to parachute to above 13,400 feet, then — rather than aiming for the peak above 14,200 feet, risking going off-course and landing at 12,400 feet off the northwest face — you aim for “The Loft,” which is a wide target above 13,400 feet.
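A small simulation captures the parachute logic. The altitude profiles and gust size below are invented stand-ins for the sharp summit and the wide, flat Loft:

```python
import random

random.seed(0)  # reproducible "gusts"

# Toy altitude profiles (feet) as a function of distance x from the aim point.
sharp_peak = lambda x: 14259 - 2000 * abs(x)  # high summit, steep sides
the_loft   = lambda x: 13450 - 100 * abs(x)   # lower, but wide and flat

def landing_altitudes(profile, drops=1000, gust=0.5):
    """Simulate drops where the wind blows you a random distance off target."""
    return [profile(random.uniform(-gust, gust)) for _ in range(drops)]

peak_landings = landing_altitudes(sharp_peak)
loft_landings = landing_altitudes(the_loft)

# Aiming at the flat target keeps every landing above 13,400 feet,
# while aiming at the sharp summit sometimes drops you well below it.
```

The summit's best landings are higher, but only the flat target keeps every landing above the 13,400-foot line.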
In my next blog post, we will see a real example involving some high-tech cooking, a chemical reaction example, and how a classic example with a well-known optimum gets its lesson reversed when variation is taken into account.
UPDATE: The second and third blog posts in this series are now available.
Credits: Warren Sarle wrote two neural net papers years ago about how optimizing is like climbing to the top of a hill, and that was the inspiration for my analogy. The map is from Google Maps.