Any production environment can generate vast amounts of process logs, whether in manufacturing, software development, sales, or finance. Buried in these plain-text logs is valuable information that can be used to monitor system health, identify failure points, and drive continuous improvement. This presentation describes how the JMP DevOps team built an automated pipeline to convert unstructured console logs into JMP data tables and publish daily diagnostic visualizations to JMP Live.

Using a JSL script with HTTP Requests and JSON parsing, we collect log data from our Jenkins task scheduling environment and use JMP’s regex capabilities to extract key details such as timestamps, progress messages, and error indicators. We use column tagging – a new data table feature in JMP 19 – to streamline the scripting and publishing process, enabling dynamic handling of new errors without manual intervention. The automated system runs each morning, parsing the previous night’s unit test logs and updating interactive graphs on JMP Live. This process has significantly improved our ability to detect, diagnose, and resolve recurring issues, ultimately leading to a more stable and transparent development pipeline.

 

 

Hi, I'm Melanie Drake, a Systems Engineer in JMP Development Operations. Our group provides automated processes that build JMP executables and installers, run automated tests, and perform many other tasks to support JMP R&D.

Today, I'm going to talk about how we are collecting information from process logs and using JMP to find problems and monitor the general health of one of these automated processes. Processes create lots of data: instrument readings, measurements from a manufacturing production line, or status logs. That data can be saved as plain text or sometimes as structured text, such as XML, JSON, or CSV. All of this data creates lots of opportunities for finding and fixing problems.

However, this data is often written to lots of separate files that are constantly being created and updated. It can be very hard to notice and track errors. I'll be talking about the nightly JSL unit test stream. This process produces many log files every day. We found that it was very hard to track and troubleshoot specific errors. We were swimming in data, but using it was very hard.

What is the JSL unit test stream? It's a suite of JSL regression tests that we run using each day's build of JMP. It runs every night on all versions of JMP that are currently under development, and it runs across a variety of Windows and Macintosh operating system versions. Testers use this nightly run to find errors in JMP so they can be quickly addressed by developers. If any particular test stream doesn't run successfully, they don't have those test results.

For a few weeks in the summer, we had four versions of JMP under development at the same time. This meant we were running 18 test streams every night across combinations of JMP and operating system versions. The entire stream can take 8-12 hours to run. We split each Windows test stream into three streams that run concurrently so that all Windows test results finish much more quickly. This lets our testing team in Beijing use those test results the same day.

Putting it all together, we were looking at 36 separate console logs every day. We could and did spend a lot of time trying to find out where and when any particular error happened. The solution to managing disparate collections of data is to consolidate it all in one place. A JMP data table, of course.

Once in JMP, you can visualize and analyze your data and share your findings with others using JMP Live. In our case, we collected our logs into JMP data tables, used graphs to track and troubleshoot our problems, and continuously monitor health by publishing to JMP Live. We ended up with two different data tables because our log output differed between Macintosh and Windows. Although some errors were common, many others were specific to one operating system.

I'm going to outline the general workflow first, and then I'll walk through the script and demo our process. The first step is to find out where your data is and how you're going to read it into JMP. Log files can be saved to disk or to a server. They can be accessed from the web. Maybe you retrieve them through an API or even a database query. In our case, we access the text files through the web-based UI of a Jenkins server, which is where we run all of our automated processes. We used HTTP Request to get the information, and then we used Parse JSON to get it into a format that we could use.
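
As a rough illustration, here is a minimal JSL sketch of that first step. The Jenkins URL and the "builds" JSON key are assumptions for this example; the real script also handles authentication and errors.

    // Hypothetical Jenkins project URL; /api/json asks Jenkins for JSON output
    jenkinsURL = "https://jenkins.example.com/job/JMP19-Sonoma/api/json";

    // Send a GET request and capture the response body as text
    request = New HTTP Request(
        URL( jenkinsURL ),
        Method( "GET" )
    );
    jsonText = request << Send;

    // Parse JSON turns the response into nested lists and associative arrays;
    // "builds" is the assumed key that lists every run of this project
    projectInfo = Parse JSON( jsonText );
    buildList = projectInfo["builds"];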

Step two is to import your data into JMP. This can start out with your log files, and you might have other metadata to add to that. You can put your entire log in a single cell, or you can spread your logs out across multiple cells or columns or even rows. It all depends on what your data is and how you need to look at it. In our case, we have one log per stream run. We put one stream run per row and one entire log in a single cell. In addition, we collect lots of other metadata as well.

Step three is to parse your text. You might use Parse XML or Parse JSON to get the raw information. Once you have that, maybe you use Regex, JSL pattern matching, or JSL character functions to parse out the information. You might use formula columns, indicator columns, or multiple response columns to collect that information into formats that you can use in your graphs.

In our case, we used almost all of the above. We used formula columns to gather specific information into metadata columns, and then we used Regex and JSL character functions to parse the actual logs. We also used indicator and multiple response columns to gather information so we could make pretty graphs from it.
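
For instance, pulling a timestamp out of a single log line with JSL's Regex function looks roughly like this sketch; the log line format here is made up.

    // A made-up log line with a leading ISO-style timestamp
    line = "2025-07-08T02:13:45 Starting unit test group: Graph Builder";

    // Regex returns the first backreference, or missing if there is no match
    tsText = Regex( line, "^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})", "\1" );

    // Convert the matched text to a JMP datetime value for storage in a column
    If( !Is Missing( tsText ),
        ts = Informat( tsText, "yyyy-mm-ddThh:mm:ss" )
    );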

Step four is to explore and graph your data. JMP provides Graph Builder and many other graphing platforms. You can use these to find patterns in your errors, to find out where your errors correlate with each other, and to track them all through time. In our case, we used Graph Builder extensively. We created heat maps to see our correlations. We used bar charts and scatter plots to track errors across time, JMP versions, and operating system versions. We also used stacked bar charts and heat maps to see stream status and missing runs through time. I'll show some examples of these later.
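
As one small example of the kind of graph script involved, a heat map of build status by date and stream can be built with Graph Builder JSL like this; the table and column names here are illustrative.

    // Assumes a summary table with Run Date, Stream Name, and Build Status columns
    dt = Current Data Table();
    dt << Graph Builder(
        Size( 800, 400 ),
        Variables( X( :Run Date ), Y( :Stream Name ), Color( :Build Status ) ),
        Elements( Heatmap( X, Y, Legend( 3 ) ) )
    );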

Finally, in step five, you're going to want to share your graphs with your coworkers. You can present specific findings to specific audiences. You can share for visibility, transparency, and teamwork. You can also hide private, personal, or confidential data. In our case, we publish a subset of the data to an internal JMP Live server. We publish only what's needed for the graphs. Once those graphs are published, they are very easy to discuss in meetings, and it's very easy to monitor the ongoing general health of our entire process.

JMP 19 has a new feature called Column Tags. We found these extremely useful for this project. We're using three column tags, one to tag timestamps for certain milestones, one to tag errors that we find, and one to tag the columns that we use in graphs. I'll show you how these all work a little later.

Why did we do this? For one thing, it greatly streamlines our JSL code. We can refer to columns with a certain tag instead of hard coding every column name. This also means that we don't need to update column names when we update errors that we're finding, for example. We just tag the new column, and it comes in naturally. This makes it all very easy to subset and publish only the necessary data.

Now we'll go through a demo, and I'll show you our JSL script. Before we even start work, we run a few lines to set everything up. We include a couple of other scripts with functions and variable definitions. We set up the folder path where our main tables are saved. These hold all of the data we've collected so far. Then we open a CSV file with configuration information.

Our real CSV file has more columns than this, but they contain internal information that I deleted for this presentation. The basic information we need for each project is the name of each test stream and the operating system-specific main table that holds all of the data we've gathered so far for test streams on that operating system.

Now we need to read in our data. Each JMP version and OS version combination is a separate Jenkins project with its own page. Each test stream run is a subpage of the main page. We'll walk through this first project, JMP 19 on Sonoma. I'll set my iteration to 1. Then I'll read in the name of each job and the target table we'll bring its data into.

We start by using HTTP Request to get the main page's JSON, which contains a list of the URLs for each stream run. We send that request to Jenkins, get the JSON, and parse it into our build list. Since I can't run this live here, I'll just create a list. You can see it's very long. This list contains the build ID of every single test stream run so far for JMP 19 on Macintosh Sonoma.

But we already have much of this data in the main table, and we don't want to waste a lot of time reading and processing data that we already have. To get around this, we open our main table. This has all of the Macintosh results, not just results for this project. Then we get a list of the build IDs we already have for this project, in other words, the IDs from the rows in that main table that match JMP 19 on Sonoma. We turn both of those lists of build IDs into associative arrays so that we can subtract the current IDs from our new list. Associative array math magic gives us a short list of just the runs that we haven't processed yet. It's a lot easier and faster to process 3 rows than to process 395 and then throw most of them away.
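
Here is the essence of that associative array trick as a self-contained sketch with made-up build IDs.

    // All build IDs that Jenkins reports for this project
    allIDs = {"393", "394", "395", "396", "397", "398"};

    // Build IDs already present in the main table for this project
    haveIDs = {"393", "394", "395"};

    // Treat both lists as sets; Remove performs a set difference in place
    aaNew = Associative Array( allIDs );
    aaNew << Remove( Associative Array( haveIDs ) );

    // Only the runs we have not processed yet are left
    newIDs = aaNew << Get Keys;  // {"396", "397", "398"}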

Step two is to actually import that data into JMP. If that list isn't empty, we use a function that we wrote that gets the data we want from each of the stream runs in our list and imports it into a data table. That function uses Parse JSON again to extract the data we want and add it to the new data table, with a column for each item and a row for each test stream run.

I'll open a sample table so you can see what it looks like. You can see we've got some of the metadata already. We get the duration of any particular stream, its name, its ID number, the status of the build, whether it succeeded or not, and the date-time stamp for when it began. Most importantly, we have the URL that goes to the text log where we'll get all of our information. The next thing we do is add a column to this table to contain the log file contents, and we use Load Text File to read the log from that URL and insert it into the table.
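
In sketch form, adding that column and filling it looks something like this; the column names are illustrative, and Load Text File is simply reading each run's raw console log from its Jenkins URL.

    // dt is the new table built from the Jenkins JSON for the unprocessed runs
    dt = Current Data Table();
    dt << New Column( "Console Log", Character );

    // Load Text File accepts a URL as well as a file path; Jenkins serves the
    // raw log at each run's consoleText address, stored here in a Log URL column
    For Each Row( :Console Log = Load Text File( :Log URL ) );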

Step three is to parse the metadata and the console logs for these rows. We add some new derived columns. For example, we take that duration, and we turn it into a run time measured in minutes, which is easier to understand. We also add a date column that's just the date without the timestamp. This makes some of our graphs much easier. Finally, another column we add is the day of the week. We also use that in some of our graphs.
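
Those derived columns are ordinary formula columns. A sketch, assuming the Jenkins duration comes in as milliseconds and the start time is already a JMP datetime column (both column names are illustrative):

    dt = Current Data Table();

    // Jenkins reports duration in milliseconds; convert to minutes
    dt << New Column( "Run Minutes", Numeric, Continuous,
        Formula( :Duration / 60000 )
    );

    // Strip the time portion so runs group cleanly by calendar day
    dt << New Column( "Run Date", Numeric, Continuous, Format( "m/d/y" ),
        Formula( Date MDY( Month( :Start Time ), Day( :Start Time ), Year( :Start Time ) ) )
    );

    // Day Of Week returns 1 (Sunday) through 7 (Saturday)
    dt << New Column( "Day of Week", Numeric, Ordinal,
        Formula( Day Of Week( :Start Time ) )
    );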

The next thing is to set the stage for parsing the log. We add columns to collect start times for certain milestones. We define different start columns for Windows versus Macintosh, and when we add these columns, we tag them as starts. You can see these green tags here that show all of these columns are tagged with the same tag. You can see them down here.

The next column that we add is last thing started. If the test stream stops running because of an error, we can get the last milestone that was successful. It will be the last column with a starts tag for this row that has a date-time stamp. If the process loses its connection to the VM or to the server, we'll get Java errors in the log. We definitely collect those.

Finally, we add a multiple response column to collect error information. I'll get back to this shortly. The next step is to actually start parsing that log. We use one loop to parse the entire console log for each row, grab the information we want, and add it to the new columns for that row. The console log is in a single cell for each row. We turn it into a list of strings where each line of the log is a string.

Then we loop through that list once and compare each line to the specific items we're looking for: start times, errors, et cetera. This is much more efficient than parsing the log once for each item we want to extract. The first thing we look for is a Java error. If we find one, we get it, put it into the data table, and then move on to the next row's console log. Once the job has been halted in this way, there won't be any more information to get.
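
Reduced to a sketch, that single pass over the log lines looks like this. The patterns and the sample log text are made up, and the real script records results into the row's columns rather than into local variables.

    // logText would normally come from one row's Console Log cell
    logText = "2025-07-08T02:10:01 Starting JSL unit tests\!n2025-07-08T02:42:17 java.io.IOException: connection reset";

    // Split the log into a list of lines, then walk it exactly once
    lines = Words( logText, "\!n" );
    javaError = "";
    unitTestStart = "";

    For( i = 1, i <= N Items( lines ), i++,
        line = lines[i];

        // A lost VM or server connection shows up as a Java exception;
        // once we see one, nothing useful follows, so we stop
        If( Contains( line, "java.io" ) | Contains( line, "Exception" ),
            javaError = line;
            Break();
        );

        // Milestone lines (the message text is hypothetical) give us start times
        If( Contains( line, "Starting JSL unit tests" ),
            unitTestStart = Regex( line, "^(\S+)", "\1" )
        );
    );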

In our Jenkins projects, we look for specific errors, and the results of those searches, which are called text finders, are printed at the end of the log. When we reach the first instance of a text finder, we collect everything from there to the end of the log and stop looping through the lines right there, because the rest of the log is just those text finder results. We feed that chunk into a function, which I'll show you in a moment.

Other than those two things, we look for the milestones and insert those date time stamps into the table for each one. Finally, at the end of the loop, we collect those text finders. Each text finder that we find is added to the table as an indicator column with a zero if the error isn't found or a one if it is found.

That leaves the cell missing where we didn't look for that error, for example, if it's a new one that we haven't looked for previously. Each text finder column is tagged as text finder. Once those indicator columns are filled in, we get the names of each column tagged as text finder that has a one in that row and add them to the multiple response column, which is used later for graphing.
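
Conceptually, filling the multiple response cell from the indicator columns works like this sketch. The error names, table, and column names are all made up, and the real script gets the list of columns from the text finder tag rather than hard-coding it.

    // A tiny stand-in for the real results table
    dt = New Table( "Example",
        New Column( "Out of Memory", Numeric, Set Values( [0, 1] ) ),
        New Column( "License Check Failed", Numeric, Set Values( [1, 1] ) ),
        New Column( "Failures Found", Character, Set Values( {"", ""} ) )
    );
    finderCols = {"Out of Memory", "License Check Failed"};

    For( r = 1, r <= N Rows( dt ), r++,
        failures = {};
        For( i = 1, i <= N Items( finderCols ), i++,
            // A one means this error's text finder matched in this row's log
            If( Column( dt, finderCols[i] )[r] == 1,
                Insert Into( failures, finderCols[i] )
            );
        );
        // The semicolon-delimited string feeds the multiple response column
        dt:Failures Found[r] = Concat Items( failures, ";" );
    );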

Finally, we get the last thing started and add it to the table for this row. Loops for subsequent rows compare the text finders they find to the columns we already have and only add the ones we don't have yet. Here I'll open an example of what this table looks like after we run all of this. We still have all of the metadata that we collected initially. We have our log text. We have all of our derived data. Now all of our start columns have those date-time stamps in them, so we know exactly when each one started.

Our last thing started has collected the last item that was successful. We didn't happen to have any Java disconnect errors, but if we did, they would be there.

Finally, we've added all of our text finder columns. You'll see that they are tagged in red. All of these runs were actually successful, but I inserted a few positive error findings so I could show you what they look like in our multiple response column. You can see this one had two failures, and it's semicolon delimited. This one had only one.

The last piece of this step is to add all of the new information to our main data table. It's just a simple concatenation; we add our three new rows to the main table. Then we save the table. We compress the table when we save it, because those logs are sometimes hundreds or even thousands of lines long. With 1,333 rows, that's a lot of data, and these tables can get very large. That's the meat of the process and what took the longest for us to develop. Deciding exactly what information we needed was an iterative process.
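
The concatenation and save are only a few lines of JSL; here is a self-contained sketch with stand-in tables and a hypothetical save path.

    // Stand-ins for the real tables: mainDt is the OS-specific main table,
    // newDt holds the rows we just processed
    mainDt = New Table( "Main", New Column( "Build ID", Character, Set Values( {"393", "394"} ) ) );
    newDt = New Table( "New", New Column( "Build ID", Character, Set Values( {"396"} ) ) );

    // Append the new rows onto the main table in place
    mainDt << Concatenate( newDt, Append to first table );

    // Save compressed to keep file size down despite the giant log cells
    mainDt << Compress File When Saved( 1 );
    mainDt << Save( "$DOCUMENTS/MacResults.jmp" );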

Step four is to explore and graph your data. We've saved many graphs we've designed to help us diagnose problems. I'll show you a few of them. This one shows the duration of a stream's run by date and operating system.

Short-duration streams are usually ones that errored out quickly and simply stopped. We very rarely have any results from them. A stream that took a very long time probably got hung up somewhere and will probably eventually time out. Those are all problems we need to look into further. This stream's runs are really consistent. That's always great to see.

Another useful graph is this one that shows correlations between Java errors and the last thing that started successfully. That can be very useful in diagnosing problems. This final one is probably my favorite. It's a heat map that shows build status and missing runs.

You can see immediately that we run these test streams five days a week. We don't run them on the weekends. You can see this one has a lot of missing runs where we simply didn't run any test streams at all. These ran but had errors. All of these green runs were perfectly fine. Here are the three rows that we just added to this table.

Step five is to share your graphs. Since all of these are already published in JMP Live, all we need to do is update the table and the graphs in JMP Live. We create a connection to our JMP Live instance; this is defined under File > Publish > Manage Connections. We open the data table, and then we run the subset script.

You might wonder why we even bother. You'll notice that our original main table has 39 columns. Our subset table has only nine, and those nine columns do not include the giant console logs. This makes these subset tables much smaller, which enhances JMP Live performance. It's also nice that we can very easily not publish private or proprietary data. We're only publishing the data that's strictly necessary for the graphs we're using.
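
The subset script itself is short. Here is a sketch with illustrative column names; the real script builds the column list from the graph column tag instead of listing names.

    dt = Current Data Table();

    // Keep only the columns the published graphs need; the console log and
    // other internal columns stay behind
    subsetDt = dt << Subset(
        All Rows,
        Columns( :Run Date, :Stream Name, :Build Status, :Run Minutes ),
        Output Table Name( "Mac Results for JMP Live" )
    );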

We send that subset table, along with other information like the JMP Live connection, to a function that we wrote. It loops through every graph in the table and updates the subset table in every graph on JMP Live, and then it moves on to the next table, in this case, Windows. That finishes it. Every day this runs, we have brand-new results.

Here's a sample of some of the graphs that we've published on JMP Live. From our dashboard, these and other graphs give us a very quick at-a-glance view, so we can see general trends and trouble spots very easily. Since this is JMP Live, we can look at each graph individually and drill down for extra information. We publish all of our graphs with local data filters. This lets us look at only certain test stream runs during a particular time frame, for example, only JMP 20 test stream runs during the last few weeks.

In the end, what did we accomplish? We have automated our insight. All we have to do is look at the graphs to see what's going on. They are self-updating. We don't have to micromanage specific errors. It's self-publishing. With everything predefined, the process just works every day. As we worked through our data, we made decisions on what we needed to track. We changed what we printed to our console logs, like adding those starting times for milestones. We decided what graphs worked best to show the problems that we needed to solve.

By the time we had this script finished and running, we had the big win. We had solved all of our problems so far, and our graphs are not terribly exciting anymore, since the test streams usually run successfully now. Thank you.

Presented At Discovery Summit 2025

Skill level: Intermediate



