Hello. My name is Astrid Ruck. I'm working as a Senior Specialist for Autoliv. Autoliv is a worldwide leading manufacturer of automotive safety components such as airbags, seatbelts, and active safety systems. Today, I would like to show you an automated process for controlling product test data and creating alarm reports for root cause analysis.
We will start our presentation with a video on the working method of Autoliv's most important product test machine, called ATOS. These machines perform a 100% inspection so that no defective part will be delivered. The test results are written into a log file, including additional information, and automatically sent to a server in Amsterdam.
In the blue circle, you see our usage of JMP. In the first step, the log files are transferred into a database and daily reports are created, which are saved on the server.
If and only if there is an alarm, a second table is used from Autoliv's traceability system, called Atraq, which includes component information for every retractor. This is used for predictor screening for root cause analysis in our alarm report. The alarm report is saved to the server, and we use an HTTP POST to send its link to Autoliv's dispatch system, which is called Leading2Lean. Leading2Lean sends an automated email to the corresponding machine owner.
Here we see the retractor. It has an orange webbing and a clear cover. Let us start with the video. So this is the ATOS machine. Here you see the retractor, but now we have a black cover instead of a clear cover. Here you see the webbing, and sometimes you will see a little marker on the webbing, because then you can see if there is a webbing extraction or retraction.
We will start with the tilt lock testing; tilt lock testing ensures the blocking of your seatbelt in the case of a roll-over scenario. We start with the tilt lock right testing, and here in this little display you see the corresponding tilt lock angle, which should be between 15 and 27. So let us run it. It tilts to the right, it tilts to the left, it tilts forward, and it tilts backward.
The next step will be the measurement of the webbing length, because this also belongs to the blocking system. Take a look here at the right-hand side: now the webbing measurement starts, and already here, very briefly (I might go a little bit back), Web Sense Lock and Web Sense No Lock are tested in this little box with a sensor. Web Sense Lock ensures blocking of webbing extraction in the case of a crash, whereas Web Sense No Lock ensures freewheeling when you're in a parking position.
All of this information is written into the log files, including machine parameters and an internal barcode. This internal barcode is unique per retractor. It includes the retractor number, its global line ID, its production day, and the key index.
These log files are transferred once per day to a server. This is not SPC; it is used for root cause analysis, and we don't want to disturb the testing of the products. Therefore, the transfer time is between two shifts, and it is synchronized within each Autoliv facility but differs between Autoliv facilities.
For example, in Hungary, the log files will be transferred at 6:05, and in Romania, the log files will be transferred at 7:15. And here you see the folder structure on the server. It starts with the directory of the plant, so Autoliv Hungary or Autoliv Romania; then in each plant folder you find separate folders for the machines. In each machine folder, you will find folders for the year and the months, and at the last stage, you will find the daily log files.
Since JMP 16, each action in JMP is recorded in the Enhanced Log. Point-and-click data work can now be saved as a playable script.
Jordan Hiller from JMP says, "JMP writes 90% of your code, the skeleton." The consequence is that the other 10% is learning by doing, and this presentation should give you a small idea of how you can write your own scripts for automated data analysis and root cause finding. So I will give you some short scripts which you can copy and paste into your own scripts.
In the beginning, we start with the Multiple File Import; then we create the relevant columns. We select the relevant columns and delete the irrelevant ones. This procedure is independent of the order of the columns, because sometimes columns are added to the log file; since the procedure works with the names of the columns, we are quite independent of the column order. Then, of course, we clean the data in the first step.
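The column-selection step described here could be sketched in JSL roughly like this; the table reference and the column names are hypothetical examples, not the actual production script:

```jsl
// Sketch: keep only the relevant columns by name, independent of column order.
// The list of names below is a hypothetical example.
dt = Current Data Table();
keep = {"Machine ID", "Start Date", "Start Time", "Tilt Right Result", "Tilt Right Angle"};

// Walk backwards so that deleting a column does not shift the remaining indices.
For( i = N Cols( dt ), i >= 1, i--,
	If( !Contains( keep, Column( dt, i ) << Get Name ),
		dt << Delete Columns( Column( dt, i ) )
	)
);
```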
Here we have an example. We have a product family, and there is an empty space in "Retractor XXX", but we would like to have "Retractor XXX" without this empty space, and here you see the corresponding script. Then we transfer the data into the database. We use the JMP command New SQL Query, we say what kind of connection string we take, then we use the JMP function Custom SQL, write our SQL command, and run it in the foreground, because Run Foreground ensures that the transfer into the database is complete before the next procedure runs. And don't forget to Close All and to Exit.
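As a rough JSL sketch of these two steps (the cleanup rule, connection string, database name, and SQL are placeholders, not Autoliv's real ones):

```jsl
// Sketch: clean a text value, then push it to the database.
family = "Retractor  XXX";
family = Substitute( family, "  ", " " );   // hypothetical cleanup of the extra space

New SQL Query(
	Connection( "ODBC:DSN=MyServer;UID=user;PWD=***;" ),  // placeholder connection string
	Custom SQL( "INSERT INTO mydb.SPC (ProductFamily) VALUES ('Retractor XXX')" )
) << Run Foreground;   // wait until the insert is complete before continuing

Close All( Data Tables, NoSave );
// Exit(); as the very last statement of the scheduled script
```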
So here we start with the Multiple File Import, and here you see once again the folder structure, and at the beginning you see what kind of folder you select. One of the best things is that you can choose Include subfolders, because our daily log files sit several subfolders deep, and this helps us a lot. We are interested in log files, and we are interested in data from 6:15 to 6:15, and it was the 21st of December last year when we uploaded the log files.
Here you can see the relevant files found in this time slot, with similar file aspects, and you see that a tabulator is my field separator. This is the result of the Multiple File Import: the worksheet with the machine IDs, the start date, start time, seatbelt, and here you see the results of the tilt lock testing. Tilt right result is pass or fail; tilt right angle, here you have the angle, and here you see the other things.
If you edit your Source, then you get your script, and now you can copy and paste it into your own script, and that will be the first script you can run.
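A minimal sketch of such a generated Multiple File Import script might look like this; the folder path is a placeholder, and the exact message arguments are best taken from your own generated Source script:

```jsl
// Sketch of a Multiple File Import; the folder path is hypothetical.
mfi = Multiple File Import(
	<<Set Folder( "\\server\ALH\" ),    // plant folder on the server
	<<Set Subfolders( 1 ),              // include the nested daily folders
	<<Set Name Filter( "*.log" ),       // we are only interested in log files
	<<Set Name Enable( 1 ),
	<<Set Import Mode( "CSVData" ),
	<<Set CSV Has Headers( 1 )
);
dt = mfi << Import Data;
```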
Now we would like to transfer this data into the database. We transfer 1,000 rows per loop. So here you see, from the worksheet, the first row up to row number 1,000. We say get the rows from the first row up to row number 1,000, and call it myList. So dt is my data table, R here is the number of rows, and this option tells me, "I don't want the column names. I would like to have the values."
And this is what the list looks like. Here you see that the upload date is in brackets and in quotation marks. And here the start date and the start time are also of type character, because we had some difficulties transferring the data, and this was a nice script and it works.
Here at the end, if you look at this table, you see that the left result is empty as a character value, and here left is also empty, which can be seen here by double quotation marks and here by a dot. But SQL doesn't accept empty numeric cells, and therefore we use the next trick to make a substitution.
First of all, we would like to get rid of the double quotation mark. Therefore, we say Substitute, and because the double quotation mark is a very specific character, we have to use a backslash and an exclamation mark in front, and then we replace it. Then we would like to get rid of the first and the last character, so we remove the first and the last character. And SQL doesn't know curly brackets, therefore we replace them with round brackets.
Here, in the case of the dot, we cannot directly remove it, because it is also included in real numeric values, so we use a little trick: we replace a dot followed by a comma with NULL followed by a comma.
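Put together, the cleanup steps described here could be sketched like this; the value string is an invented fragment just to show the substitutions:

```jsl
// Sketch: clean a JSL value string so SQL will accept it.
// "vals" is an invented illustrative fragment.
vals = "{(\!"2021-12-21\!", \!"06:15\!", ., \!"PASS\!")}";

vals = Remove( vals, 1, 1 );                 // remove the first character
vals = Remove( vals, Length( vals ), 1 );    // ...and the last one

vals = Substitute( vals,
	"\!"", "'",        // double quotation marks -> SQL single quotes
	"{", "(",          // SQL doesn't know curly brackets
	"}", ")",
	".,", "NULL,"      // a lone dot (empty numeric cell) followed by a comma -> NULL,
);
```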
Here in green below, you see the resulting SQL value list. This is the way it should look in SQL. So we have a query once again, and the corresponding SQL command used in Custom SQL is nothing else than a plain string, and that goes directly into the database.
The form is here once again. We use an SQL template, and then we say, okay, INSERT. Here comes the name of the database; SPC is the name of the table in our database, and here in brackets are the column names in the database, then VALUES and the placeholder "table".
And now we use the same trick as before. We substitute "table" with x, and x is my value list. Here it is, and this is called sql. Then we say New SQL Query with your connection string, then we use the JMP function Custom SQL with sql, and if you would like to see what the SQL looks like, it looks like this. One main trick I learned from the staff at JMP was to use this substitution. It's a very good tool to build such kinds of commands.
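The template trick might look roughly like this in JSL; the database, table, and column names are hypothetical:

```jsl
// Sketch: substitute the placeholder "table" in an SQL template with the value list.
sqlTemplate = "INSERT INTO mydb.SPC (UploadDate, StartTime, TiltRightAngle) VALUES table";
x = "('2021-12-21', '06:15', 21.5)";          // the cleaned value list
sql = Substitute( sqlTemplate, "table", x );

New SQL Query(
	Connection( "ODBC:DSN=MyServer;UID=user;PWD=***;" ),  // placeholder connection string
	Custom SQL( sql )
) << Run Foreground;   // complete this insert before the next loop iteration
```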
Every program is started via the Task Scheduler. So here is a display of the Task Scheduler. On the General page, you can see me as the author, and here, "Run whether user is logged on or not", because it will also run at the weekend and on holidays. It'll run all the time.
Here it is quite necessary to choose the Windows Server version that belongs to the server on which you have installed your JMP. If you choose the wrong server here, you could end up with background processes.
Here we trigger our transfer scripts daily at 6:15. And if you check your history, then it should look like this: your task should be completed. It shouldn't look like this, "Task stopping due to timeout reached", because that means you have some background processes, and that's not good.
Here we have specified the action in the Task Scheduler, and here we browse to the location of the batch file. The batch file is nothing else than a text file created in Notepad. Here you have the location where JMP is installed, and here is the location of your JMP script. And don't forget to say exit at the end.
If you use a batch, then you have to put slash-slash-exclamation-mark in the first line of your JSL script. Not in the second, not in the third line; it must be in the first line.
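A minimal batch file of the kind described might look like this; both paths are placeholders for your own JMP installation and script location:

```bat
rem Hypothetical batch file for the Task Scheduler action.
"C:\Program Files\SAS\JMP\16\jmp.exe" "D:\Scripts\TransferALH.jsl"
exit
```

The JSL script that the batch file opens then carries `//!` as its very first line, so that JMP runs it automatically on opening.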
And the key idea of every program we use: we have the same program, but at the beginning we say what kind of plant it is. So for Romania, we'll have the same program, but here, instead of ALH, there will be ARO, for Autoliv Romania. And if we use the Multiple File Import at the beginning, then we say evaluate your plant. So the only thing you have to change at the beginning is the plant name. That's all.
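In JSL, this per-plant switch could be sketched with Eval Insert; the server path is a placeholder:

```jsl
// Sketch: the only per-plant change is this one line at the top.
plant = "ALH";   // "ARO" for Autoliv Romania, and so on

// Eval Insert replaces ^plant^ inside the string with the variable's value,
// giving "\\server\ALH\" here.
folder = Eval Insert( "\\server\^plant^\" );
```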
Here you see our daily report. The structure is always the same: it has two tables at the top, followed by two graphs. And here we see all tests over all machines. You see we have had nine not-okay values for tilt lock overall and a lot of okay values, and the corresponding percentages are 0.33% and 99.67%.
Here we have the number of not okay, the percentage of failure, and the same for pass. And here you see the absolute numbers of the test results, pass and fail, for the Local Confection Line. Here we have three ATOS machines, and we can see that five times tilt right was not okay, three times tilt left was not okay, and one time tilt backward was not okay. Tilt lock overall is the summary of all four tilt lock angles, so here we have nine not okay.
On the right-hand side, you see the same bar chart, but now the scale is different: here we have a percentage scale. And as you can see, for tilt right and tilt left our scrap rate is larger than 1%. Therefore, an alarm must be created.
But first of all, I would like to describe how the daily reports are created, because the same idea is used to create the alarm reports. First, we create a new window, which is a vertical box, and it is called Report. Then we create a horizontal box, and that is called Table. In the third step, we create a table, call it tab1, make a report out of it, and append it to the horizontal box.
Then we do the same thing once again, so we have the second report, which is also appended to the horizontal box. And at the end, the horizontal box is appended to the vertical box. And this is how it looks.
If you would like to add some graphs, then you have to create one more horizontal box, which is also appended to the vertical box, and this is it. Then you will have the graphs below the tables. Here, once again, are some ideas and some scripts. I hope they will help you.
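The box construction just described might be sketched like this; the Tabulate and Graph Builder settings are placeholder examples:

```jsl
// Sketch: two tables side by side, graphs in a second row below them.
dt = Current Data Table();
nw = New Window( "Daily Report", vb = V List Box() );   // the vertical box "Report"

tb = H List Box();                                      // the horizontal box "Table"
tab1 = dt << Tabulate( Add Table( Row Table( Grouping Columns( :Test ) ) ) );
tb << Append( Report( tab1 ) );                         // first report into the row
tab2 = dt << Tabulate( Add Table( Row Table( Grouping Columns( :Machine ) ) ) );
tb << Append( Report( tab2 ) );                         // second report into the row
vb << Append( tb );                                     // row of tables into "Report"

gb = H List Box();                                      // one more row for the graphs
gb << Append( Report(
	dt << Graph Builder( Variables( X( :Test ), Y( :Count ) ), Elements( Bar( X, Y ) ) )
) );
vb << Append( gb );                                     // graphs below the tables
```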
We save our daily reports as a picture. We don't save them as a PDF, because we are not interested in all the page breaks; we would like to have high flexibility and no additional software. If we had used a PDF, then we would have had four pages: Table 1, Table 2, and two further pages for the graphs.
And how do we store the string? We create a variable, and this is nothing else than the path where we would like to save our report. So here it is "Report"; here it is ALH. As I said before, we evaluate the plant, and we will have the right plant there. Then here comes "daily test result", then the timestamp, and then ".png", and these vertical lines mean we concatenate everything. Then we save the picture with this variable name, and that's it.
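As a sketch (server path and naming are placeholders), the string and the save step could look like:

```jsl
// Sketch: build the picture path, then save the report window as a PNG.
nw = New Window( "Daily Report", V List Box() );        // stands in for the real report window
plant = "ALH";
timestamp = Format( Today(), "yyyymmdd" );

// "||" concatenates the pieces into one string.
picName = Eval Insert( "\\server\Reports\^plant^\DailyTestResult_" ) || timestamp || ".png";
nw << Save Picture( picName, "png" );
```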
Here I would like to show you the rules for an alarm at every level. So we have several levels. If we have more than 200 parts, then we will have an alarm if the scrap rate is larger than 1%.
If we have only a small number, so fewer than 200 parts, then we will have an alarm if we have more than five not-okay parts, which means a scrap rate above 2%. But we can also have a potential alarm if three parts are not okay.
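These rules can be written down as a small JSL condition; the numbers follow the levels described above, and the example values are invented:

```jsl
// Sketch: alarm rules. n = parts tested, nok = not-okay parts.
n = 250;                                   // example numbers
nok = 4;
scrapRate = nok / n;

alarm = If( n > 200,
	scrapRate > 0.01,                      // large sample: scrap rate above 1%
	nok > 5                                // small sample: more than five NOK parts
);
potentialAlarm = n <= 200 & nok >= 3;      // small sample: potential alarm at three NOK
```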
On the first level, we take this table from the daily report over all machines. And if you take a look, then tilt lock overall, tilt right, and tilt left have more than three not-okays. Therefore, we have potential alarms here.
If we have potential alarms, we dive deeper. Now we take the machine into account. So here you see machine 123xx, here comes machine 124xx, and so on. And you can see here that if we take the machine into account, then the scrap rate for tilt right and tilt left is larger than 1%. Therefore, we have an alarm.
For tilt lock overall, the scrap rate is low, but we have a potential alarm, so we will create an alarm for this machine, and now we use Atraq. Atraq is the traceability system of Autoliv, so it has the information about which components are included in which retractor; for each retractor we have the complete information on every part.
Here you see the display of using the database in JMP. You see two tables. The first table is the table which we transferred into the database, based on our test results and retractor information.
Then we have the second table, which comes from the traceability system. And if you press this little [inaudible 00:23:58], then you come to this picture, and we make a left outer join. And how do we make our join? We use the internal barcode. In the beginning, I told you that every retractor has a unique internal barcode, and this unique internal barcode is called serial. If they match, then I have all the information, and therefore I make a left outer join.
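The join itself could be sketched as a Custom SQL query; the connection string, table names, and column names are hypothetical:

```jsl
// Sketch: left outer join of test results with Atraq component data
// on the unique internal barcode ("serial").
New SQL Query(
	Connection( "ODBC:DSN=MyServer;UID=user;PWD=***;" ),  // placeholder connection string
	Custom SQL(
		"SELECT t.*, a.PartNumber, a.LotNumber, a.BoxNumber, a.Supplier
		 FROM mydb.SPC t
		 LEFT OUTER JOIN mydb.Atraq a ON t.Serial = a.Serial"
	)
) << Run Foreground;
```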
This is how an alarm report looks. It starts once again with a table. It tells us the upload date, the location, and which machine is in fact affected by this alarm. Here is the test, and here once again the information about the number and percentage of okay and not okay. And the same information is given here in the graphs: absolute numbers and percentages.
Now we take the information from our matching of the ATOS data. The first table is the ATOS data table, and here we consider tilt lock overall, and now you see which seatbelt is affected. We also consider the machine parameters. And what does the 15 mean? Here is a translation: it means lower specification limit. The upper specification limit is 27, and so on. So every value here has this kind of label.
Here below are the component data given by the traceability system Atraq. We have the CS-Ball, the CS-Sensor, and every component has four columns: part number, lot number, box number, and supplier. Part, lot, box, supplier. So this is the information we've got.
And now we start our predictor screening. First of all, we try to find out what was constant. If something is constant, then it will not have an effect on your okay and not-okay values, and the remaining predictors are used in a predictor screening. Here you directly see the results of the combinations, and you see that the shift itself has a very high impact. So we like this predictor screening because it is easy to read for non-statisticians, and it identifies predictors which might be weak alone but strong when used in combination with other predictors.
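A rough sketch of this step; the response column name is a placeholder, not the real one:

```jsl
// Sketch: drop constant columns, then run Predictor Screening on the rest.
dt = Current Data Table();
yName = "Tilt Lock Overall Result";        // hypothetical response column
predictors = {};

For( i = 1, i <= N Cols( dt ), i++,
	col = Column( dt, i );
	Summarize( dt, levels = By( col ) );   // unique levels of this column
	// keep columns that vary and are not the response itself
	If( N Items( levels ) > 1 & (col << Get Name) != yName,
		Insert Into( predictors, col )
	)
);

dt << Predictor Screening( Y( Column( dt, yName ) ), X( Eval( predictors ) ) );
```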
And then, based on this predictor screening, we append graphs to the alarm report, and we colour the graphs according to the predictor screening. First, we found the shift to be relevant. Here you can see in blue the afternoon shift. You can see directly that there was no failure for the morning shift, and the red lines here are specification limits, but it starts in the afternoon.
Box Serial was also a significant predictor, and now you can see that the purple and the blue Box Serials also have an effect, and this is our root cause analysis. So test these Box Serials; they behave differently from the others.
Now we save the alarm report, and we would like to send the link of the alarm report to Autoliv's dispatch system, called Leading2Lean. Leading2Lean is configured to automatically send notifications to the correct owner. Usually, you can send an email, but sending a notification via Leading2Lean includes a dispatch process, and it must be closed.
Here, this is the way we do it. First of all, we define a variable called alarm. It gives me the path and the location of the corresponding alarm report. Then we use an associative array, so we also include the site, and here, as the description, we include the alarm. This is sent via an HTTP request. Here we have the fields array, which we defined before, and then we send it. This is the way, so copy and paste the skeleton into your JMP script.
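The HTTP POST might be sketched like this; the URL, paths, and field names are placeholders, not the real Leading2Lean API:

```jsl
// Sketch: send the alarm-report link to the dispatch system via HTTP POST.
alarm = "\\server\Reports\ALH\AlarmReport_20211221.png";  // hypothetical report path

fields = Associative Array();
fields["site"] = "ALH";              // the site
fields["description"] = alarm;       // the alarm link as the description

request = New HTTP Request(
	URL( "https://example.leading2lean.com/api/dispatch" ),  // placeholder URL
	Method( "POST" ),
	Form( Fields( fields ) )
);
data = request << Send;
```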
This is what Leading2Lean, the dispatch system, looks like. Here we have a dispatch number, the name, and the date when it was created. Here we have the link which we sent via the HTTP request, and if you press it, or if you open it via email, then you will get the alarm report.
This total process which I have described, using queries, running predictor screening, and making HTTP requests, everything can be realized with JMP. And in the same way, I would like to make such analyses for components based on subassemblies.
As Jordan Hiller said, JMP writes 90% of your code, the skeleton. I hope that I could give you some more percentage for your own scripts. I hope that this presentation helped you a lot, and that you like working with JMP as much as I do. Thank you.