In today's fast-evolving data analytics landscape, efficient reporting tools are key to driving informed decision making. This presentation introduces the complete suite of JMP tools developed to simplify manufacturing execution system (MES) and statistical quality control (SQC) data extraction and analysis for users. Of particular interest is a JMP add-in, the result of a collaboration between Syensqo and Ippon Innovation, designed to streamline the creation of SQC PowerPoint reports, fully aligned with the company's established templates.
Key features of this innovative tool include:
On-demand SQC report creation: Users can quickly generate SQC reports tailored for review and customer distribution.
Interactive report review: Users can interact with the underlying data, hide or exclude rows, and regenerate reports based on these adjustments.
Automated report generation: The tool automates the creation of multiple reports, notifying users via email when they are ready, thus eliminating manual processes.
Seamless add-in deployment: The various add-ins are deployed and updated on multiple servers, managed through the Add-in Manager for easy access by users.
We showcase the capabilities of our tools within the context of our quality management process workflow. We also share insights from our implementation journey and explore how automation is transforming reporting practices.
Hello. My name is Nicola Brammer and I work for Syensqo. We're a manufacturer of carbon fiber prepregs for the automotive and aerospace industry. I'm a Global Six Sigma Lead and Global Process Engineering Expert.
Hello. My name is Sophie Cuvillier. I'm a Data Scientist and Statistician from Ippon Innovation. Ippon Innovation is a statistical consulting company based in Toulouse. I'm also a black belt in Six Sigma.
We're going to present our poster on extracting data from SQC and MES databases. Syensqo, like a lot of manufacturing companies, has historically spent a lot of time and money on methods for collecting and storing data. Data is collected from production equipment via data historians such as IP21, Wonderware, and PI, and test data is collected and stored in laboratory information management systems such as LIMS.
Traditionally, very little thought has been put into how to access this data and use it in a meaningful way, let alone how to merge information relating to the inputs of a process, such as raw material properties and process settings, with the Y's, the key customer characteristics.
Syensqo is now working very hard to find ways to extract more value from these data sources. The first step has been to develop add-ins that collate and tidy the data with a minimal amount of effort, with the additional requirements of being able to automatically produce reports from any of our 12 composite manufacturing sites, using a standardized methodology for analyzing the data and a consistent format for presenting the information to the customer. In 2023, Syensqo engaged Ippon, a third-party JMP integrator, to write JMP add-ins to assist us in this effort.
Sophie is now going to go through the methods of how she developed the add-ins.
Our objectives are mainly to streamline the extraction and filtering of MES and SQC data. We have also developed a toolbox for data manipulation so that even JMP beginners can do some data manipulation easily. Finally, we want to automate statistical quality control (SQC) report generation.
The diagram at the right presents the workflow we used to develop our add-ins, notably the last one, the generation of SQC reports. As you will see in the demonstration later on, all of our user interfaces allow the user to easily specify what data they want to extract, for instance over which period. You can also very easily add filters if you want to see the data for one specific product grade, lot, or whatever else you need.
Once that is done, the add-in translates the user's selections into a real SQL query against the database, which can be a LIMS or an MES database; we handle both. Finally, the add-in retrieves the data from the database, and from this we can produce different outputs depending on the add-in and the needs.
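As a language-neutral sketch of that translation step (the add-ins themselves are written in JSL, and the table and column names below are hypothetical), turning the user's period and filter selections into a parameterized SQL query might look like this:

```python
from datetime import date

def build_query(table, filters, start, end):
    """Translate user selections into a parameterized SQL query.

    `table`, the column names, and the schema are illustrative only;
    the real add-in's LIMS/MES schemas are not shown in the talk.
    """
    clauses = ["sample_date BETWEEN ? AND ?"]
    params = [start, end]
    for column, value in filters.items():
        clauses.append(f"{column} = ?")   # each filter becomes a WHERE clause
        params.append(value)
    sql = f"SELECT * FROM {table} WHERE " + " AND ".join(clauses)
    return sql, params

sql, params = build_query(
    "lims_results",
    {"product_grade": "PG-100", "material_description": "Resin A"},
    date(2022, 1, 1), date(2023, 12, 31),
)
```

Using `?` placeholders keeps the filter values out of the SQL string itself, which matters when those values come straight from a user interface.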
For instance, in the context of the SQC report generation add-in, one of the key outputs is the PowerPoint file containing the SQC report, which we will show later on. The issue is that the PowerPoint generated by JMP alone is not very pretty, so on top of JSL we use other scripts, such as VBS or Python, to format the PowerPoint we obtain: for instance, aligning images and tables or changing text styles. This is the formatting step.
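As a small illustration of the kind of arithmetic such a formatting script performs (a sketch only; the actual VBS/Python post-processing is not shown in the talk), horizontally centering an image on a slide is a calculation on PowerPoint's internal EMU coordinates:

```python
EMU_PER_INCH = 914400       # PowerPoint's internal unit (English Metric Units)
SLIDE_WIDTH = 12192000      # standard 16:9 slide width in EMU (13.333 in)

def centered_left(shape_width_emu, slide_width_emu=SLIDE_WIDTH):
    """Left offset that horizontally centers a shape on the slide."""
    return (slide_width_emu - shape_width_emu) // 2

# Center a 6-inch-wide control chart image on a widescreen slide:
left = centered_left(6 * EMU_PER_INCH)
```

The real formatting scripts apply offsets like this to the shapes JMP placed on each slide, rather than computing them by hand.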
The last step, which is very important in our add-ins, is the reviewing step, because the user gets back all the outputs from the analysis. As you will see, the user can review the outputs at any moment. For instance, if you want to modify the data or exclude some rows, you can do it very easily, and a single button regenerates the whole report, taking the changes into account, instead of you making all the changes manually. This is a key point.
Now we're going to demonstrate some of the user interfaces. I will quickly show you an example. The user interfaces let the user select parameters so that only the information they want is extracted. The interfaces for the different queries, whether for process data or laboratory information management system data, all look very similar. That makes training our different operators easier, particularly operators who don't use JMP a lot, so we try to standardize what they see.
Once they've selected the settings they want, they can easily save the report with all the parameter settings and then reload them at a future date. Those parameter settings are saved in a data table that looks like this. You can see that there are tags in here, and these tags include both continuous data, such as machine speeds and temperatures, and contextual data, such as batch numbers and recipe numbers.
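The add-in stores those selections in a JMP data table; the same save-and-reload idea can be sketched in Python with JSON (the setting names here are hypothetical, not the add-in's actual schema):

```python
import json
import tempfile
from pathlib import Path

def save_settings(path, settings):
    """Persist the user's query settings so they can be reloaded later."""
    Path(path).write_text(json.dumps(settings, indent=2))

def load_settings(path):
    return json.loads(Path(path).read_text())

path = Path(tempfile.gettempdir()) / "query_settings.json"
save_settings(path, {
    "period": {"value": 24, "unit": "months"},              # hypothetical keys
    "tags": ["machine_speed", "oven_temp", "batch_number"],
})
settings = load_settings(path)
```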
The output table from running the script also contains a number of table scripts that help us examine the data or add additional tags or rows of data at a later date.
Sophie is now going to talk about the automated PowerPoint presentation in particular.
For this add-in, we are generating reports. To generate reports, we first need to define what goes into them, and the user can add reports at any time. The add-in generates a table that we call the reports config; by default it is empty. In this table, the user manually enters the information for the different reports they want to be able to run with the add-in. The table is read later on by the add-in.
For instance, here is such a table. All the columns are already created by the add-in; the user just has to fill in the information. Take the location: the location is the server from which we're extracting data. Then we have three columns, product grade, spec detail name, and material description. These are filters we are going to apply: in this report we are only going to extract this product grade, this particular name, and this material description, and these filters are added to the query. Of course, this reflects Syensqo's needs; you can adapt the idea to your own company and your own needs.
Then we have two columns about the period: a period value and a period unit. This is how much data you want to extract in the report; typically, here we are going to extract 24 months of data. Then we have a frequency value and a frequency unit. You will see later on that there is a notion of frequency, in the sense that these reports will be generated, for example, every 13 weeks, and you can adjust that frequency. We have other columns that we use for our own needs, so you can add anything you want in the context of your company.
Once the user has added or modified the reports they want to generate, they can run the add-in. In the user interface, the user sees the same reports table and can select one or several reports to be run. For each report run there are in total six outputs, which you can see here.
First, for every report we have five generated tables. The most important one is the raw table. If we open the raw table, it looks, as we said, very much like the others, because we try to standardize every output across the different add-ins so that the user is not lost. It is just the raw data from the database. At the left you can again see table variables that hold the report information, such as the extraction period and the filters.
You can also see the additional table scripts, which are very useful if the user wants to do some basic data manipulation, even as a JMP beginner: color coding, showing the limits table, or displaying control charts very easily. This is the raw table, and of course you can interact with it.
We also have the unstack table. Basically, the unstack table holds the same data as the raw table, but instead of having the parameters in rows, we have them in columns, which is more practical if you want to use control charts.
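The unstack step is analogous to JMP's Tables > Split: long-format rows of (batch, parameter, value) become one row per batch with one column per parameter. In plain Python terms, with made-up parameter names:

```python
from collections import defaultdict

def unstack(rows):
    """Long format -> wide format: one dict per batch, parameters as keys."""
    by_batch = defaultdict(dict)
    for batch, parameter, value in rows:
        by_batch[batch][parameter] = value
    return [{"batch": b, **params} for b, params in by_batch.items()]

long_rows = [
    ("B001", "viscosity", 41.2),
    ("B001", "tack", 3.1),
    ("B002", "viscosity", 40.7),
    ("B002", "tack", 2.9),
]
wide = unstack(long_rows)  # two rows; columns: batch, viscosity, tack
```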
Another table we can show is the limits table. The limits table simply lists the spec and control limits for each parameter. It is already formatted by the add-in, so you don't need to search for these limits yourself. Of course, the most important output of this add-in is the SQC report, which is a PowerPoint file. What we show here is the very raw output from the add-in; we haven't changed or modified anything in the PowerPoint. It is exactly what the add-in produced.
Of course, here we have the Syensqo template behind it, but you could adjust this and use your own company template. First, you have the cover page, where the information is already filled in automatically from the report, so the user doesn't have to do anything.
Then on page 3, for instance, we have the parameters and limits tables, the same ones you get in the output tables. We also have the list of batches and their release dates, along with the part number and description.
For each parameter, we have plotted the control chart, combined with a box plot and with the spec limits and control limits. At the very end, we also have the capability analysis for each of our parameters.
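The capability figures follow the standard formulas; here is a minimal sketch (not the add-in's JSL, and note it uses the overall sample standard deviation, which JMP strictly labels Pp/Ppk rather than Cp/Cpk):

```python
from statistics import mean, stdev

def capability(values, lsl, usl):
    """Cp and Cpk from the sample mean and overall standard deviation."""
    mu, sigma = mean(values), stdev(values)
    cp = (usl - lsl) / (6 * sigma)               # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # penalizes off-center processes
    return cp, cpk

cp, cpk = capability([10.1, 9.9, 10.2, 9.8, 10.0], lsl=9.0, usl=11.0)
```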
At this point, the user just has to fill in the conclusion and recommendation page; everything else has already been automated by the add-in, so they don't need to do anything. As I told you, the add-in has a reviewing option: in the table I showed you earlier, the user can modify the data, and with one button we regenerate the PowerPoint, taking into account the modifications they have made.
That was all for the on-demand mode. When we saw how heavily the on-demand mode was used, we set out to automate it on servers, shown in the bottom-left part of the poster. We have JMP scripts on several different Windows servers around the world, each with different data and different reports that need to be generated.
On each of these servers, a scheduled task runs a plain JMP script every day. The script looks at the reports on that server and checks which ones need to be generated using the frequency: for instance, if a report is supposed to be generated every 13 weeks and it has not been updated for 14 weeks, the script will generate it. Once the reports have been generated, an email notification warns the quality engineers that reports were generated so that they can check and review them if needed.
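The due-date check the scheduled script performs amounts to comparing the time since the last generation with the configured frequency. A sketch, with hypothetical field names matching the reports-config columns:

```python
from datetime import date, timedelta

def is_due(last_generated, freq_value, freq_unit, today):
    """True when the configured frequency has elapsed since the last run."""
    unit_days = {"days": 1, "weeks": 7, "months": 30}[freq_unit]  # months approximated
    return today - last_generated >= timedelta(days=freq_value * unit_days)

# 14 weeks since the last run with a 13-week frequency -> generate it:
due = is_due(date(2024, 1, 1), 13, "weeks", today=date(2024, 4, 8))
```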
To show you, this is the email output; it looks like this. It is sent from a generic email address. The content of the email is basically which reports have been generated, and whether each one was generated successfully or not. For instance, here we see that report five has been generated, and we have the location of the output. Here the report outputs are written to shared drives; of course, the user needs access to the shared drive, otherwise they cannot reach them. In this location you will find the five generated tables and the PowerPoint I showed you earlier. The log is also attached in case a report fails to generate. That was the whole idea of the SQC automated report add-in.
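The notification message itself can be assembled with a standard email library; here is a sketch in Python (the addresses and paths are placeholders, and the SMTP send is omitted):

```python
from email.message import EmailMessage

def build_notification(report_names, output_dir, log_text=None):
    """Assemble the status email; actually sending it via SMTP is omitted."""
    msg = EmailMessage()
    msg["From"] = "sqc-reports@example.com"        # placeholder generic address
    msg["To"] = "quality-engineers@example.com"    # placeholder
    msg["Subject"] = "SQC reports generated"
    lines = [f"Report {name}: generated successfully" for name in report_names]
    lines.append(f"Outputs written to: {output_dir}")
    msg.set_content("\n".join(lines))
    if log_text is not None:                       # attach the log on failure
        msg.add_attachment(log_text.encode(), maintype="text",
                           subtype="plain", filename="report.log")
    return msg

msg = build_notification(["5"], r"\\shared\sqc\reports")
```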
The aim of these add-ins has been to reduce the data gathering and tidying burden on the quality and production engineers, to ensure that we have a standard methodology for reviewing the data, and to produce consistent customer outputs from all our sites, while still allowing full access to the data set for any additional problem-solving or more in-depth exploratory data analysis. The feedback from our users has been extremely positive, with many engineers saying that it has reduced their workload by between half an hour and 8 hours per query, and they are using these tools daily.
The greater the level of instrumentation on process equipment and the more recipe information that is available, the greater the rewards, as this allows data tables for inputs and outputs to be easily combined. This work is built on initial add-ins that were developed by the Syensqo data analytics group for extracting process data, but it has been taken to another level by the ability to combine both continuous and contextual information in the same database. The original Syensqo data analytics add-ins can be found on the JMP community website.
Thank you, and we hope you've enjoyed our presentation.
Thank you.
Skill level
- Beginner
- Intermediate
- Advanced