
Consolidation and Integration of Data Workflows to Guarantee Manufacturing Process Robustness

We needed to develop an internal tool to standardize analyses, ranging from database interrogation to final assessment of process capability and investigation of special causes.

The development started from a database with a simple structure (statistical lab checks) and gradually extended to bigger databases populated by online process measurements. 

The goal was to have a simple and quick internal tool so that we could:

  • Create a standardized data set, which involved customizing the data download by filtering and selecting the most significant parameters from a graphical user interface.
  • Automate the analysis, which would allow us to standardize process capability analysis and develop effective graphs for quick detection of anomalies, through proper usage of control charts and main KPI trends.
  • Investigate the process in depth and link it to the results of the automatic analysis, with a focus on potential correlations between product characteristics and process parameters, which would allow us to identify the significant variables and find room for quality improvement.

Hello all. I'm Massimo Pampurini. I work as a Manufacturing Quality Engineer in Pirelli.

Hello, everyone. I'm Sara Sorrentino, and I work in Pirelli as well as a Process Quality Engineer.

We will present our project, Consolidation and Integration of Data Workflows, which we developed over the last year in our company. Where did we start? We started from a specific area, the semi-finishing area. This is a general overview of our manufacturing process. We chose semi-finishing because it was the weakest area in terms of data analysis.

Why was the project born? The main reason, and the first important step for us, was the need to guarantee a unique way of collecting reliable laboratory data, so that all plants would follow the same collection methodology. This gave us the possibility to have unique worldwide Pirelli databases.

These are the two key points; they are the fundamentals and enablers. The next step was an easy download of data directly into JMP, allowing the user to download data quickly and efficiently, filtering by time interval and plant selection. This leads, in the end, to automatic analysis, both of process capability and of control chart assessment on the main quality characteristics of interest.

Okay, where did we start? We started from the standardization of the data. This is just an example of our starting point. Before, we had only folders of Excel files in which data from the plants were collected, data that could not be guaranteed to follow a standard collection. Over the last four years, more or less, we have developed a quality application dedicated to data visualization and elaboration, then a specific tool in JMP based on the same data source as that application, and now we are moving to Python integration in our JMP tools.

The project we developed in JMP followed three main objectives. First, we wanted to create a standardized dataset through a graphical user interface in JMP. The second important point was achieving automatic analysis, in order to have a standardized report on process capability results and a tool for quick detection of anomalies through proper usage of control charts. Last but not least in importance, we wanted a tool that allows deeper investigation of the process parameters themselves, in order to identify the most significant variables from the online production data.

Now we will go quickly through the three main points that Sara just showed. The first part is how we built our standardized data set. The first step was to develop a graphical user interface through which the user can select the plant and the time frame for the starting data set. This is the direct output of the data download: the data are already standardized, which means they are already cleaned with the same criteria.
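The actual tool is written in JMP; as a minimal sketch of the idea, the filtering-and-cleaning step behind such a download could look like this in Python with pandas. The column names (`plant`, `test_date`, `value`) are hypothetical placeholders, not the real database schema.

```python
import pandas as pd

def standardize_download(df, plants, start, end,
                         plant_col="plant", date_col="test_date",
                         value_col="value"):
    """Filter lab-check records to the selected plants and time frame,
    then apply one shared cleaning criterion (drop missing measurements)
    so every user starts from the same standardized data set.
    Column names here are illustrative assumptions."""
    df = df.copy()
    df[date_col] = pd.to_datetime(df[date_col])
    mask = df[plant_col].isin(plants) & df[date_col].between(start, end)
    return df.loc[mask].dropna(subset=[value_col]).reset_index(drop=True)
```

The point is that the filter and the cleaning rule live in one function, so every plant's extract is produced with identical criteria.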

After creating this first data set, which is the basis of the analysis, we integrated another script dedicated to capability analysis, in which we can select the plant or other main parameters, for example the materials. We developed different buttons dedicated to analysis by month or by material. What is important is that with one click, the user gets this report. It is the main report used in our plants to understand the real capability by material, by month, by specification, and by characteristic, and to prioritize interventions in order to get a real improvement in a short time.
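JMP produces this capability report directly; as a rough illustration of what the one-click report computes per group, here is a standard Cp/Cpk calculation sketched in Python (the grouping columns and limits are example values, not Pirelli's real specifications).

```python
import pandas as pd

def cp_cpk(x, lsl, usl):
    """Standard capability indices from a sample's mean and std:
    Cp compares spec width to 6-sigma spread; Cpk also penalizes
    off-center processes."""
    mu, sigma = x.mean(), x.std(ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

def capability_by_group(df, group_cols, value_col, lsl, usl):
    """Capability report per group (e.g. per material or per month),
    mimicking the by-material / by-month buttons of the JMP tool."""
    rows = []
    for key, g in df.groupby(group_cols):
        key = key if isinstance(key, tuple) else (key,)
        cp, cpk = cp_cpk(g[value_col], lsl, usl)
        rows.append({**dict(zip(group_cols, key)), "Cp": cp, "Cpk": cpk})
    return pd.DataFrame(rows)
```

Ranking the groups by Cpk is what lets the plant set intervention priorities at a glance.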

Coming to the last two buttons of the automatic-analysis tool, we added one button dedicated to the analysis of process stability. We choose the most suitable control chart and activate all the alert rules, so that we can detect special causes and then find room for improvement, investigate outlier cases in depth, and keep the control chart assessment constantly monitored month by month, in order to understand, for example, whether there has been a drift or a shift in the process mean.
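JMP's control chart platform evaluates these alert rules automatically; to make the logic concrete, here is a hedged Python sketch of two classic rules of this kind (this is an illustration of the technique, not the exact rule set the tool activates).

```python
def special_causes(values, mean, sigma):
    """Flag two classic alert rules on an individuals control chart:
    rule 1: a point beyond the 3-sigma limits (an outlier case);
    rule 2: nine consecutive points on one side of the center line,
    a typical signature of a shift in the process mean.
    Returns a list of (index, rule) alerts."""
    alerts = []
    side_run, last_side = 0, 0
    for i, v in enumerate(values):
        if abs(v - mean) > 3 * sigma:
            alerts.append((i, "beyond 3-sigma"))
        side = 1 if v > mean else (-1 if v < mean else 0)
        if side != 0 and side == last_side:
            side_run += 1
        else:
            side_run = 1 if side != 0 else 0
        last_side = side
        if side_run == 9:  # flag the run once, at its ninth point
            alerts.append((i, "run of 9 on one side"))
    return alerts
```

Rule 1 points you to isolated special causes to investigate; rule 2 is the kind of signal that reveals a sustained drift or shift month over month.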

Then another important button was added related to the distribution, as you can see, but not only the distribution of the characteristic itself. We always wanted a focus on the process capability result, in terms of Cp and Cpk from the within-sigma report, while also keeping a constant eye on the overall-sigma capability report, that is, our performance capability indexes.
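The difference between the two reports comes down to how sigma is estimated. As a small illustrative sketch (not the tool's actual code): within sigma is commonly estimated from the average moving range, overall sigma from the plain sample standard deviation, and the same centering term then yields Cpk versus Ppk.

```python
def within_and_overall_sigma(values):
    """Two sigma estimates behind the two capability reports:
    within sigma from the average moving range divided by d2 = 1.128
    (short-term variation, feeding Cp/Cpk), overall sigma from the
    sample standard deviation (long-term variation, feeding Pp/Ppk)."""
    n = len(values)
    mr = [abs(values[i] - values[i - 1]) for i in range(1, n)]
    sigma_within = (sum(mr) / len(mr)) / 1.128
    mean = sum(values) / n
    sigma_overall = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return sigma_within, sigma_overall

def cpk_ppk(values, lsl, usl):
    """Same capability formula, two sigma estimates."""
    sw, so = within_and_overall_sigma(values)
    mean = sum(values) / len(values)
    cpk = min(usl - mean, mean - lsl) / (3 * sw)
    ppk = min(usl - mean, mean - lsl) / (3 * so)
    return cpk, ppk
```

When a process shifts over time, overall sigma inflates while within sigma stays small, so Ppk falls below Cpk; watching both is what exposes the gap.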

Coming to the last important point of the tool, we developed the possibility of deeper investigation. Starting from the laboratory data, we wanted to find correlations by going directly to the online production data, with all the process parameters we have available, in order to understand how the capability of our quality characteristics is influenced by the most significant process parameters. To do this, we go deeper with tools like the prediction profiler. In this part we are also careful to always activate the extrapolation control, since there are correlations between our X factors, the process variables.
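The profiler work happens in JMP; as a rough Python sketch of the underlying idea, one can fit the quality characteristic on the process parameters to rank influential variables, plus a crude range check standing in for extrapolation control (JMP's actual control is more refined, using the correlation structure of the X's; the variable names here are invented for illustration).

```python
import numpy as np

def rank_effects(X, y, names):
    """Least-squares fit of a quality characteristic on standardized
    process parameters; the absolute standardized coefficients give a
    rough ranking of the most significant variables."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)      # put factors on one scale
    A = np.column_stack([np.ones(len(y)), Xs])     # add intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    effects = dict(zip(names, np.abs(beta[1:])))
    return dict(sorted(effects.items(), key=lambda kv: -kv[1]))

def inside_observed_range(X, x_new):
    """Crude extrapolation check: is the query point inside the
    hypercube of observed factor values?"""
    return bool(np.all((x_new >= X.min(axis=0)) & (x_new <= X.max(axis=0))))
```

A prediction that fails the range check should be treated with suspicion: the model is being asked about factor combinations the process has never actually run.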

This is the last slide: we have given a first overview, and we want to share the next steps with you. One is the integration with Python; the other is the extension of these tools to the other areas that are our priorities. Thank you for your attention, and see you at the Discovery Summit.

Thank you for your attention. See you soon.