For manufacturing and engineering operations in the semiconductor industry, standardized experiments must be conducted and reports generated on a regular basis. These evaluations are required for change management assessments, technology transfer activities, audit compliance, etc. The data used in these assessments are analyzed using JMP, Excel, and other software, then assembled into PowerPoint and/or Word for formal documentation.
Because analysts have varying proficiency levels with analytical and reporting tools, it is difficult to ensure efficiency, repeatability, completeness, and compliance. Currently, manual analysis can take anywhere from hours to weeks and does not always follow precise analysis procedures or reporting formats. It usually focuses on known impacts, ignoring unexpected ones, especially when there are differences in data repositories, analysis tools, success criteria, or organizational culture.
This presentation demonstrates the effectiveness of JMP Workflow Builder in technology transfer processes, providing a way to combine individual tasks – such as formatting, importing, data cleaning, statistical analysis, and reporting – into a single workflow. The result is more robust and efficient comparison reports across multiple platform types, produced in minutes. This workflow not only improves repeatability, but also helps the user complete tasks correctly and in full, enhancing compliance while reducing costs and improving team efficiency!
This live demo shows how easy it is to create a workflow prototype using JMP Pro 18. Knowledge of SQL or Python is not necessary to benefit from this JMP automation. The result is a report that follows layout and evaluation standards, while keeping data integrity intact!

Hello. Welcome, everyone. Today, we will explore how JMP scripting and workflow automation can improve efficiency in the semiconductor industry. A quick introduction of the presenters: I'm Su-Heng Lin, Senior Principal Engineer at NXP Semiconductor, and Martin Demel is a Principal Systems Engineer at JMP. Let me walk you through today's presentation. We will begin with the motivation: why workflow automation is critical in semiconductor manufacturing. Then, we will look at the wafer fabrication process to understand the complexity we are dealing with. Next, we will compare process change and technology transfer, highlighting their differences and business flows. We will dive into how data-driven frameworks like DOE and Cpk analysis support evaluation and decision-making. After that, we will review performance benchmarking and electrical parameter distributions. Then comes the highlight, a live demo of JMP Workflow Builder by Martin. Finally, we will wrap up with key takeaways and open the floor for Q&A.
In semiconductor engineering, engineers routinely conduct standardized experiments and generate reports for change management, technology transfer, and audit compliance. These reports are often built manually using tools like JMP and Excel, then compiled into PowerPoint or Word documents. This process is time-consuming, inconsistent, and sometimes incomplete. Analysts may vary in skill, leading to inefficiency and gaps in repeatability and compliance, especially during technology transfer across sites.
Today, we will demonstrate how JMP Workflow Builder can streamline this entire process by automating tasks like data cleaning, statistical analysis, and reporting. We can produce robust, standardized output in minutes. No coding is required, and the workflow ensures data integrity while improving team efficiency. Later in the session, Martin will show how easy it is to build one using JMP 19 Pro.
Semiconductor manufacturing involves hundreds of steps and a long cycle time. Let's take a moment to appreciate the complexity of wafer fabrication. We begin with a wafer lot, typically 25 bare silicon wafers. These wafers undergo hundreds of intricate process steps, spending 9–16 weeks in the fab at a minimum. The journey starts with oxidation, photolithography, and etching to define the circuit pattern. Cleaning and ion implantation follow, modifying electrical properties. Then, metal and dielectric deposition build the layers needed for interconnects and insulation. Each step must be precisely controlled to ensure device performance and reliability. Once fabrication is complete, the wafers go through a wafer acceptance test, also called Class Probe, where individual devices, like transistors and capacitors, are sampled to verify quality.
If they pass, the wafers move to Unit Probe, where every die is tested at the circuit level. This rigorous process ensures that only high-quality wafers proceed to packaging and final testing. Understanding this flow is essential because any process change or technology transfer must maintain integrity across all these steps. That is why robust evaluation and automation are so critical in our industry.
Let's compare process change and technology transfer. A process change aims to improve specific steps within a fab, is triggered by internal issues, and involves specific validation methods and approval flows. In contrast, a technology transfer aims to expand capacity or move production to another fab, is driven by strategic decisions, and involves broader process flows and more complex approval processes. Despite these differences in scope and intent, both flows share a critical similarity: they rely on the same experimental and analytical backbone. Design of experiments, statistical analysis, and qualification gates are central to both. Whether you are improving a process locally or transferring it globally, the rigor of data-driven validation remains the same. This shared foundation is what makes workflow automation so powerful: it standardizes and accelerates these common steps across both scenarios.
Change evaluation involves optimizing key parameters and ensuring they meet qualification criteria. Standardized formats and workflows can improve efficiency. This slide outlines the structured flow we use to evaluate changes in semiconductor manufacturing. It starts with DOE analysis, where we optimize models based on target key parameters to improve end-of-line performance. Each design may require a unique approach, but the goal is consistent: identify the most impactful variables. Next, we compare electrical parameter distributions to ensure all key metrics meet the Cpk and Delta Sigma criteria. This step benefits greatly from standardized formats and automated workflows, which improve both efficiency and repeatability. Then, we generate Cpk and yield reports to assess how well the optimized parameters align with the qualification gate criteria. This is where we validate whether the proposed changes are statistically sound and production-ready.
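To make the two metrics concrete, here is a minimal Python sketch. It assumes the standard Cpk definition; the talk does not give an exact formula for Delta Sigma, so the "mean shift in units of reference sigma" definition used below is an assumption for illustration only.

```python
from statistics import mean, stdev

def cpk(values, lsl, usl):
    # Standard capability index: distance from the mean to the
    # nearest spec limit, in units of three standard deviations.
    m, s = mean(values), stdev(values)
    return min(usl - m, m - lsl) / (3 * s)

def delta_sigma(ref_values, xfer_values):
    # Assumed definition: mean shift between the reference and
    # transferred fab, in units of the reference sigma.
    return abs(mean(xfer_values) - mean(ref_values)) / stdev(ref_values)
```

For a well-centered parameter, for example, `cpk([9, 10, 11, 10, 10], 7, 13)` returns about 1.41.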
Finally, based on all the data, the team formulates a recommendation for the next cycle of learning. This flow not only ensures robust evaluation but also supports continuous improvement, making it applicable to both process changes and technology transfers. Here, the continuous improvement loop involves experimental production, focused data analysis, and performance review. If performance improves, we iterate with the next experiment. If not, we inspect and eliminate outliers by finding the root cause. This process helps us manage and improve our manufacturing operations effectively. Now, it's important to understand the operational context. Process changes are typically governed by the Change Action Board, or CAB, and often receive support from manufacturing IT to build a standardized analytical platform. In contrast, technology transfers are more ad hoc. They may involve evaluating more than just control versus proposed cells, or handling multiple changes on a single wafer lot. This complexity often doesn't meet the return-on-investment threshold for the IT department to invest in a universal solution. That's where engineers can take the initiative by using JMP scripts, automation, and workflow tools to build their own efficient, repeatable analysis platforms. This empowers teams to maintain rigor and speed even in non-standard scenarios. As an example, we will look at a technology transfer to show how a workflow can improve a device engineer's efficiency.
Moving on to slide 9, we will outline the milestones and gate reviews for a technology transfer process. It begins with project initiation and planning, where we define the scope and objectives and secure approvals for concept, definition, and planning. Once the project is ready, we move into the validation phase. This includes the readiness check, process freeze approval, and the clearance for risk production. These gates ensure the process is stable and the risks are understood before scaling.
Finally, we reach the release and closure phase, where the project is formally released into production and exits from the development cycle. Each gate serves as a checkpoint to maintain quality, alignment, and accountability across teams. Understanding this framework helps us position the detailed performance review steps that follow in the next slide.
Here is the comprehensive performance review. It begins with site-level PC data extraction, followed by data cleaning and processing. Then we move to descriptive statistical analysis, report generation, distribution comparison, and finally, capability and yield analysis. Each of these steps is critical to ensuring that the transferred process meets quality and reliability standards. Now, while every block in this flow can benefit from JMP automation, due to time constraints we will focus today's demo on just the two blue-highlighted blocks.
Descriptive statistical analysis and distribution comparison of electrical parameters. In the first, JMP allows us to quickly summarize key metrics, such as mean, standard deviation, and yield, across lots and wafers. This helps establish a baseline and identify early trends. In the second, we use JMP to compare parameter distributions between sites or conditions, using statistical tests to detect significant shifts. These steps are foundational for validating alignment between the source and receiving fabs. By automating these tasks, engineers can generate consistent, accurate insights in minutes, freeing up time for deeper analysis and decision-making.
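As a rough illustration of that first summarization step, here is a minimal Python sketch using only the standard library. The record layout and the parameter name `X_344` are hypothetical, chosen just to mirror the lot/wafer grouping described above.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical flat records: one row per measurement site.
rows = [
    {"lot": "L01", "wafer": 1, "X_344": 9.8},
    {"lot": "L01", "wafer": 1, "X_344": 10.2},
    {"lot": "L01", "wafer": 2, "X_344": 10.0},
    {"lot": "L01", "wafer": 2, "X_344": 10.4},
]

def summarize(rows, param):
    # Group measurements by (lot, wafer) and report mean and
    # standard deviation, the baseline descriptive statistics.
    groups = defaultdict(list)
    for r in rows:
        groups[(r["lot"], r["wafer"])].append(r[param])
    return {key: (mean(vals), stdev(vals)) for key, vals in groups.items()}

summary = summarize(rows, "X_344")
```

In JMP itself this corresponds to a summary table or Tabulate report; the sketch only shows the shape of the computation.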
Let's now take a closer look at how this works in practice. In this slide, I will discuss the experimental analysis to identify key process factors and the corresponding wafers that match our goals. We will focus on the variability analysis for X_344, highlighting the target value and the switching points for different parts within 25-wafer lots. In this case of improving X_344, it took five months from design to complete the wafer processing and experiment. It took another month for myself, after all the end-of-line parametric data were collected, to generate a report and be ready for the next experiment discussion. This analysis helped us optimize our processes and ensure we meet our performance targets. The cycle time is just a little bit too long for the data analysis and reporting.
Here, I just want to show you that for all those analyses, we have listed the electrical parameters used in the evaluation. These are categorized into P1 reliability parameters and optional P2 parameters during production and qualification. The full W80 electrical list is divided into P1 and P2 categories. P1 includes reliability-critical parameters, further split into WAC and monitor lists, while P2 covers optional metrics for extended analysis. These are assessed at the PPA and R&E gates. Below the worksheet, a summary outlines the key deliverables: Pass/Fail results, Ppk reports, improvement plans, trend charts, and lot traceability. This framework ensures consistent evaluation and supports data-driven decisions throughout the technology transfer process.
Here is the Cpk and yield report for this parameter evaluation. It provides insight into process capability for all the other parameters that are not our focus parameters, and highlights any parameters with poor performance for further review. This slide presents the summary report required to pass any technology transfer gate based on company-defined criteria. This report is essential for gate reviews: it provides a quantitative foundation for decision-making and ensures that the transferred process meets performance and reliability standards before ramping to production.
This next slide presents a risk-based framework for evaluating electrical parameter distributions during technology transfer or process change. It uses two key metrics: Delta Sigma, which measures variability between sites, and process capability, Cpk, which reflects how well parameters stay within specs. Parameters are classified into five risk levels. Risk level one means no action is needed. Risk level two suggests discussion with the business line. Risk levels three and four indicate the need for action or deeper review. And risk level five flags a critical issue requiring mandatory action.
For this example, we are working on the APF gain evaluation. During the APF gain evaluation, both risk levels one and three are considered passing categories, and the others are failing. These metrics help teams prioritize resources, align decisions, and maintain quality standards during evaluation. Comparison of electrical parameter distributions between fabs helps identify variability and alignment issues. This is the report we will be demonstrating live today. It is a critical tool used by the transfer team to evaluate and improve end-of-line parametric performance matching between the mother and child fabs. The table compares electrical parameters across multiple lots, showing key metrics like lower spec limit, upper spec limit, mean, standard deviation, and Cpk for both the reference fab and the transferred fab. We also include Delta Sigma and an assigned risk level, from risk level one to risk level five, helping us quickly identify which parameters are well-aligned and which require further action.
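The classification logic can be sketched as follows in Python. The talk defines five levels and states which ones count as passing for the APF gain evaluation, but not the numeric cutoffs, so the Cpk and delta-sigma thresholds below are purely illustrative assumptions.

```python
def risk_level(cpk, delta_sigma):
    # Illustrative thresholds only; the real company-defined
    # cutoffs are not given in the talk.
    if cpk >= 1.67 and delta_sigma < 0.5:
        return 1  # no action needed
    if cpk >= 1.33 and delta_sigma < 1.0:
        return 2  # discuss with the business line
    if cpk >= 1.33:
        return 3  # action needed
    if cpk >= 1.0:
        return 4  # deeper review
    return 5      # critical: mandatory action

def passes(level):
    # In the APF gain evaluation, levels 1 and 3 are passing.
    return level in (1, 3)
```

The useful point is the structure: each parameter maps from its two metrics to a discrete level, and the pass/fail rule is a separate, evaluation-specific policy.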
This report is especially valuable because it provides a quantitative foundation for decision-making. It helps engineers pinpoint mismatches, understand variability, and prioritize corrective actions. By automating this analysis in JMP, we can generate these insights quickly and consistently, supporting faster learning cycles and more reliable technology transfers.
This slide presents the outcome of the previous distribution comparison: another view of the parameter distribution comparison, used to prioritize engineering resources. Here, I have listed the worst-performing parameters, those classified as risk level five. These parameters show poor alignment and poor process capability, making them critical for review. The chart visualizes their position in the high-risk zone, and the table highlights specific metrics like Cpk and delta sigma. In the next slide, we will dive deeper into these risk level five parameters by examining their trend charts. This will help the team assess stability over time and guide the next steps for improvement.
This slide presents trend charts for two of the risk level five parameters listed. We use X_30 and X_349, which were highlighted in the previous distribution comparison, as examples. This helps visualize performance shifts and identify critical concerns. It can be a wider distribution plus a mean shift, or just a mean shift itself. We have added the baseline data from the mother fab to enable visual comparison with the receiving fab. These charts help us assess how well the transferred process is aligning over time by tracking variability and group means, so we can identify persistent mismatches or improvements. This visualization is essential for guiding corrective actions and validating whether adjustments are moving us closer to the target.
This is the last chart we would like to share. The delta shift plot shows changes between the current integration flow, WD, and the previous one, WC, focusing on parameters moving between good and bad zones. While earlier we focused on the high-risk level five parameters and their alignment with the mother fab baseline, here we take a broader view. Management and the transfer team also want to understand how the newly identified process factors in WD perform compared to WC. This helps assess whether the integration is truly improving overall parametric behavior. We are looking at shifts in Cpk and delta sigma across wafers, identifying which parameters are moving from poor zones to good ones, or vice versa. This comparative analysis is essential for guiding future integration strategies and validating the effectiveness of process changes.
Now we will move into the live demo. Martin, you will showcase how to build a workflow using JMP 19 Pro. This demo will take about 10 minutes.
Thank you, Su-Heng.
No problem.
Let's just take a look into that. As you saw in Su-Heng's presentation, there were several steps to be done. In the past, this was done in Excel sheets, and for the risk level reports you got the two plots next to each other and had to understand those shifts in addition. There were several steps involved in this whole process, and they have to be done repeatedly, over and over again. As Su-Heng mentioned, one month of analysis time is quite long. We tried to reduce that to a reasonably good amount of time to make it more efficient. The steps were, basically: when we create the mother fab summary data table, we need to get the mean and the standard deviation. We also need to get the Cpk data; we get that from reports. We have to do the same for the current design and the previous design. Then we need to combine all those data into one big data table so that we can create the risk level reports for the current design and the previous design, and the shift report. With the help of JMP 19, the Graph Builder allows us to visualize the shift directly, instead of having those two plots next to each other and trying to understand which point is which. Then, for the parameters of interest, we also create the trend charts afterwards.
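Conceptually, the combine-and-shift step described here looks like the following Python sketch. The dictionaries and Cpk values are made-up stand-ins for the mother fab, previous (WC), and current (WD) summary tables; in the demo this join is done inside JMP.

```python
# Hypothetical per-parameter Cpk summaries from the three reports.
mother  = {"X_30": 1.8, "X_349": 1.6}
wc_prev = {"X_30": 1.7, "X_349": 0.9}
wd_curr = {"X_30": 0.8, "X_349": 1.5}

def combine(mother, prev, curr):
    # Join the three summaries on parameter name and compute the
    # Cpk shift from the previous (WC) to the current (WD) design,
    # the quantity the shift report visualizes.
    return {
        p: {
            "mother_cpk": mother[p],
            "prev_cpk": prev[p],
            "curr_cpk": curr[p],
            "cpk_shift": curr[p] - prev[p],
        }
        for p in mother
    }

table = combine(mother, wc_prev, wd_curr)
```

One combined table per parameter is what makes the single-plot shift visualization possible, instead of eyeballing two separate reports.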
Let's just run this whole thing so that you get a sense of what it does, and of how all this together reduces steps that used to take several hours to just a few clicks, basically. We select the mother fab data first, and then it will take the summary analysis and create a data table. Then we need to collect the current design data. This, again, does the same thing and creates the whole table. And finally, we will get the previous design data. Then, after it has collected that information, it combines the data tables again and creates this risk level dashboard and the additional trend charts here for the parameters. I won't go into the trend charts right now. I just want to highlight the dashboard here, because you have on the right side the current design risk level report with your risk levels, and you see the passes for one and three, so everything which is in this zone here. We have the same in the upper field here for the previous design, and we can also see those shifts here across all those several ranges.
We have several ways to filter that down. We can go from no changes, where we say we are in the same risk level, but we want to see if something has moved at least toward a better one. We can take a look into those. But what's more interesting is, of course, the changes. If we think about the previous design at risk level one, which was a pass, so a good one: do we see any current design which moved to a worse one? We see some of them. This is the most critical one, probably, because it's in red, so risk level five. It moved from risk level one to risk level five. That is definitely something to look into from the previous to the current design, and some others as well. You may want to look into those. But we can also filter for a certain parameter value as well.
Or another way would be, if I just unhide that for a second, to go into our most critical ones from the current design and see what they did, how they moved, and where they went, and link through the several views. Depending on what view you want to take on the risk level changes, this helps you to really focus on the things you want to visualize, instead of looking at all the dots and comparing: this one has a risk level change, that one doesn't, and which variables are involved. You want to see which data point it is on the other side. You can go one by one, or take them all together and filter down. That was one of the big improvements here.
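The risk-level-change filter shown in the demo can be sketched like this in Python; the parameter names and levels are hypothetical examples.

```python
# Hypothetical combined records with previous and current risk levels.
params = [
    {"name": "X_30",  "prev_level": 1, "curr_level": 5},
    {"name": "X_344", "prev_level": 3, "curr_level": 3},
    {"name": "X_349", "prev_level": 5, "curr_level": 3},
]

def moved(params, from_level, to_level):
    # Select parameters that moved between two risk levels, e.g. the
    # critical pass-to-fail (1 -> 5) transitions highlighted in the demo.
    return [p["name"] for p in params
            if p["prev_level"] == from_level and p["curr_level"] == to_level]
```

For instance, `moved(params, 1, 5)` picks out the previously passing parameters that are now critical, which is exactly the subset the transfer team wants to review first.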
Finally, you get your final data table with all those risk level colors and additional information about what has changed, among other things. How do we get to that? Let's just open up a new Workflow Builder. I open up a new workflow; you find that under File, New Workflow. Then let's just start doing things. Before I do anything, I always clear the history, so I don't have anything in my log history. First, I open something up, and that's the mother fab data, because I want to do something with it. It asks me, should I record those steps already? I can say yes. I will talk a little bit about tips and tricks, what makes sense and what doesn't.
The next step would be, in this case, to create the report. For the sake of time, I just have the final reports here. This is the Cpk report. I can close it down, and you see it takes the report snapshot and calculates that already, and the tabulate as well. Already, we can take a look at what has happened in the background. Here's the open table. You can manipulate and change the description and notes as well. This is the script, if you want. You see it's a hard-coded file here. I want to go with the first tip here. If I want to open a data table, and it's not always the same one, I may just say, I don't want to have this step, because in the next step it will ask for this mother fab data table. If that's not there, it will ask you what to do.
Let's just run this part of the script. I open that up. It asks me what data table I want to use. That's a data table I don't have open yet, so I want to choose that. That already creates my two steps here. I see it all worked quite well. Now I have to combine those tables. I can go to my tabulate and say, make that into a data table. I do the same here, making it into a data table by, in this case, right-clicking and choosing Make Into Data Table. You see both have been added here.
Now, let's take a look at some additional things here. You see that it takes the column group PC here. Where did it get that? Basically, it took it from the data table, and this is a good way to structure the table: it puts all the analysis columns into a grouping which is called PC. That makes it easy, because if I don't do that, the tabulate takes all the variables individually as categories, and some platforms don't handle that well. What you can do is basically take this column group option, put it here, and instead of all those parameters, just say it should take the column group. Why is this helpful? Because it is independent of the naming of the columns, and that makes it very helpful. Grouping the columns in your table makes a big difference when working with any automation as well. That's something you can do here.
Now, let's compare it in addition with the original data workflow I have created, and take a look into that. You see that we have to create the mother fab summary data. Let me just switch from presentation mode to normal mode, so I can see them next to each other. Here I see I used the column group PC as well. I created the other one with the column group PC as well. I did some additional things here. I created an object here; let me just bring that in here as well. That's the tabulate, sorry, that was the right one. I added this object here, a reference, which is not here right now. This is basically because I want to use that to have a helper file, or helper data table, which I then give a specific name. That name I can reference later when I want to combine data tables with an identifier, or a unique naming in that sense. That's why I did that, and then I closed the report with that. You may want to combine it with some scripting; you may need to if you want to make very complex things. But for most things, I could just work with the workflow I created here.
A last comment before I hand back to Su-Heng: I personally don't use the recording if I have a complex workflow like you see on the left side, already with several steps. I keep doing the things, and they will be added to the log history. I only take those steps I am interested in and bring them up into the workflow. Then I can take that, add notes to it, and when I'm done, I may group the selected steps into something which is my first step, for example. I change the group name to "first step" and give it some additional wording here, so that it is clear what happens in this case. That's what I did on the left side. Those are just a few tips so far. With that, I hand back to Su-Heng for the wrap-up. This is the final slide.
Thank you so much, Martin. It only took you a few minutes to finish my one-month work. Wonderful. To wrap up today's presentation: we highlighted how JMP scripting and Workflow Builder empower engineers to perform repeatable and scalable analyses across sites and projects. By automating a workflow, we significantly reduce manual effort; what used to take days can now be done in minutes. Standardized formats and evaluation criteria not only improve efficiency but also ensure consistency and compliance, especially during technology transfers. This allows engineers to shift their focus from data crunching to designing better experiments and accelerating learning cycles. Importantly, automation supports both structured process changes and more ad hoc technology transfer scenarios. With improved data integrity and traceability, teams can make faster and more confident decisions. Thank you for your attention. Let's open the floor for questions and discussion.