
Fault or Anomaly Detection by Upgrading Control Charts with Changepoint Detection Function (2022-US-30MP-1121)

Anudeep Maripi, Magic Leap

 

This paper presents how Magic Leap’s Eyepiece Manufacturing team leverages the changepoint detection and correlation matrix functions in multivariate control charts to easily detect process drift, diagnose issues, and pinpoint the exact moment an issue occurred -- previously impossible functionality. Magic Leap’s lithography double-side imprinting process requires precise wafer placement, highly accurate photoresist dispensing, and master-template to wafer alignment. Process drifts such as robot wafer placement errors, wafer alignment variation, master-template alignment errors, and measurement metrology variability can cause excursions that result in significant yield loss. These process drifts are often subtle, gradual, and interdependent with other parameters, which makes them hard for traditional control charts to detect.

 

Our team used the changepoint detection and correlation matrix functions to create an interactive dashboard that collects changepoint time stamps for output parameters and creates phases in the existing control charts for the input parameters. Multiple changepoints are handled using phase columns and correlations with input parameters, template changeovers, and PM or hardware upgrade activities. The dashboard’s correlation coefficient matrix between inputs and outputs compares the correlation before and after a detected changepoint. This dashboard became our daily driver for quickly finding faults and process drifts and achieving high yield standards.

 

 

Hello,  everyone. This  is  Anudeep  Maripi.

I'm  a  senior  process  engineer for  optics  lithography  at  Magic Leap.

Today,  I'm  going  to  demonstrate

fault  detection by  upgrading  control  charts

with  change  point   detection  in  JMP,

and  show  you  how the  change  point  detection  integration

with  control  charts

is  highly  useful on  our  manufacturing  shop  floor.

I'll  start  the  presentation

with  a  brief  introduction about  our  company,  Magic Leap,

and  the  optics  manufacturing,

the DMAIC problem-solving approach

and  the  JMP  tools  that  we  use in  the  manufacturing,

our  challenges  that  we  face in  the  control  phase  of  DMAIC,

upgrading  existing  control  charts with  change  point,

and by doing so, detecting and diagnosing the faults in a process;

some case studies that we regularly use at Magic Leap and a JMP demonstration

on  how  to  create a  change  point  dashboard

and  integrate  change  point into  your  control  charts,

and a few takeaways.

To start with, at Magic Leap, we envision a world where the physical and the digital are one.

Our mission is to amplify human potential by delivering the most immersive AR wearable platform,

so people can intuitively see, hear, and touch digital content in the physical world.

Our wearable  device transforms  enterprise  productivity

by  providing  AR  solutions.

For example, collaboration and co-presence

between  remote  teams  to  work  together as  if  they  are  in  the  same  room,

3D  visualizations  to  optimize  processes,

augmented  workforce that  helps  train  and  upskill  workers

using see-what-I-see capability.

These  are  only  a  few  examples.

The device has the industry's leading optics with a large field of view,

best  image  quality,

high  color  uniformity,

and  dynamic  dimming

which helps display high-quality virtual content

in the real world.

The  optical  lens that  is  used  in  this  device

is  a  small  component

but  a  very  critical  component of  Magic  Leap  2,

enabling best-in-class image performance.

The optical lens goes through 25-plus complex manufacturing process and metrology stations during manufacturing.

It is tested on 100-plus critical-to-quality parameters.

Each individual process needs to reach high 99% yield targets to attain the 90% Rolled Throughput Yield target.

We cannot achieve these high yields and process capability without the help of JMP statistical tools.

One of these high-yielding processes is imprint lithography,

where I'm responsible for the quality and delivery of the tools.

This process has many steps, and that is visualized on the right-hand side of your…

The first step is where we align the master template to the wafer, then dispense the resist, lower the template, and UV cure to replicate the exact pattern from the master template to the wafer.

This imprint process alone has 22 critical-to-quality parameters that operate within tight control limits, such as just 20 nanometers of film thickness and 100 microns of drop accuracy.

As you can see, these tolerances are extremely small; for reference, a human hair is just 20 to 100 microns thick.

Thus, imprint lithography requires precision placement and alignment that are indistinguishable to the human eye.

Meaning  we  rely  a  lot  upon onboard  metrology  and  vision  systems

that  generate  tons  of  data from  large  numbers  of  input  parameters.

JMP  helps  us  to  easily  analyze these  large  data  sets

in  our  problem- solving  processes.

We use the DMAIC approach in our problem-solving process.

The following are the JMP tools that we use in every phase of our DMAIC process, respectively.

Like many manufacturing teams, we found the Control phase to be the most challenging and most overlooked phase, because at a startup like Magic Leap, we continuously evolve, make improvements, and iterate through several new designs and revisions.

The Control phase is overlooked because it takes a longer time for confirmation.

For example, we find a problem or an issue, say the process exceeded control limits and is giving low Cpk values during our Control phase.

We define the problem, we measure the historical data, and we analyze the problem with multivariate methods and correlation matrices.

We improve the problem by implementing corrective actions, by conducting DOEs and response screening.

But once this is resolved, we move on to our next most severe problem; we move on to the severity 3 problem once we resolve the severity 2 problem.

And the cycle continues, overlooking the most important phase of DMAIC, which is the Control phase.

This was the reason we came up with the change point detection dashboard, which is fast and efficient.

It pinpoints issues faster and makes it easier to visualize faults and changes.

Even operators who are not proficient with statistics can use this dashboard to diagnose issues.

For example, before introducing this change point dashboard on our manufacturing shop floor, the escalation process was like the left side:

the operator monitors the yield dashboard and identifies the yield losses and faults.

The operator reports to the technician to look into the faults.

If the technician is unable to resolve it, they escalate to the engineer.

The engineer analyzes the control chart data using control chart alarms and trends, then manually joins these outputs to the inputs, runs the correlation, and diagnoses the fault.

And then the engineer implements the corrective actions.

But after introducing our change point dashboard on the manufacturing floor, the operators, technicians, and engineers all monitor the change point dashboard on the shop floor.

If a change is found, they look into the correlation of the inputs before and after the change, easily diagnose the issue, and then easily implement the corrective actions.

This helped us easily transition into our TPM model, where operators, technicians, and engineers all come together to minimize the faults.

The change point dashboard makes our Control phase very efficient.

In the following slides, I'm going to demonstrate how this is efficient, as well as how to make these change point dashboards.

Before  showing  the  change  point  dashboard,

I  want  to  show  you  our  journey from  traditional  control  charts  monitoring

to  change  point  detection  dashboard.

While the traditional control charts are very helpful for monitoring excursions in our process using Westgard rules, showing the immediate out-of-control points, they often miss the subtle drifts in the process, which are very gradual but still significant.

These control charts are very helpful for time-based monitoring and also tell us which parts are impacted.

But as I mentioned before, we have 22 critical parameters in this process alone, and there are tons of input parameters that impact these 22 critical parameters, which means we need many control charts monitoring at the same time to have a stable process.

So then we moved on to model-driven multivariate control charts.

In these control charts, we monitor both inputs and outputs together, and we immediately get a correlation between them.

But unfortunately, these don't have any time series or port IDs to help us deploy this on the manufacturing shop floor.

The  multivariate  correlation is  very  important  for  us,

giving  the  relationship

between  our  outputs and  inputs  in  the  process.

And  this  again  doesn't  have the  time  series  data

like  the  control  charts to  identify  when  the  issue  has  started.

The change point detection, on the other hand, we found very helpful for easily detecting when a fault, a change, or a significant abrupt drift happened in our process.

But it only gives a value for where a change point occurred; it doesn't have the time series view and is not very valuable as a monitoring tool on our manufacturing shop floor by itself.

Each one has its own advantages and disadvantages.

So we combined all of these control charts into one dashboard, leveraging the change point detection function's values within them.

Our next logical step is regression, which is cause-and-effect analysis: find the cause and effect and interlock the tool inputs, stopping the failure on your CTQs, the critical-to-quality parameters, or on your outputs, and also move to predictive analysis.

This is one example to show you how the control chart looks different when you integrate the change point detection.

For example, on the left, I have a control chart without the change point detection.

You can see there are a few ups and downs between the control limits.

But when I take this value of 256, which the change point detection gives you when you run it on the same data, and integrate it into the same control chart, you can see the visualization gets better.

In this case, before the change point, you have a good amount of variation, you have a baseline, and the mean is below your target line.

After the change point, your mean is above the target line and your variation has reduced.

In this way, you can easily find and visualize what the change is in your existing control charts.
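For reference, here is a minimal JSL sketch of that idea, not the actual production script: take the row index reported by change point detection and turn it into a phase column so that Control Chart Builder computes separate limits before and after the change. The column name :Thickness and the row value 256 are placeholders.

```jsl
// Minimal sketch: turn a detected change-point row into a phase column
// and rebuild the control chart with separate limits per phase.
// :Thickness and the value 256 are placeholders.
dt = Current Data Table();
cpRow = 256; // row index reported by the change point detection

dt << New Column( "Phase", Character, Nominal );
For Each Row( :Phase = If( Row() <= cpRow, "Before CP", "After CP" ) );

// Phase() fills Control Chart Builder's Phase zone so each phase gets its own limits
Control Chart Builder( Variables( Y( :Thickness ), Phase( :Phase ) ) );
```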

Once we found out how valuable this change point is, quickly telling us where the change or the fault happened in the process, we started with some use cases, such as this one, where we introduced a change and wanted to validate the change.

In this example, we have a process that is giving a low Cpk value because the population is mostly distributed toward the positive side.

We identified that it needs a mean shift, and we introduced a known change, a mean shift.

When we introduced this mean shift, you could see the process became more capable, about 2.3 versus 1.5.

You could see that when we deploy the change point detection function on the same data, the change point gives you the value and the direction, telling you whether your change has taken place in your process or not.

This is highly visual for us to evaluate whether the known change I just introduced is impacting my process positively or negatively.

The second use case: as I mentioned, we have a lot of onboard metrology tools in our process, which give a lot of data.

We do periodic measurements and testing on these metrology tools, but we always rely upon the repeatability and reproducibility data, which is the Gauge R&R data, from these charts.

But the Gauge R&R data is not finding any faults in the process or telling us when the metrology tools started drifting in either the positive or the negative direction.

For the same data, when we deployed the change point function, it immediately tells us when the subtle changes and abrupt significant drifts happened in this metrology tool.

That is highly helpful data to identify and make sure our metrology tools are still stable and not drifting.

The third and most important one is to detect and diagnose the faults with ease.

This is a dashboard that we worked on to deploy on our manufacturing shop floor.

Our workflow would be: we monitor multiple response parameters, or critical-to-quality parameters, as outputs, and then we identify whether there are any changes using the change point dashboard.

At the same time, we also look into the model-driven multivariate control chart, which tells us what the excursions are and easily correlates those excursions to the input parameters.

But the true value came in when we took these change point values and integrated them into our existing control charts.

In this example, we have baseline data before the change and data after the change, which is when the fault occurred; you can see there is a slight mean shift and a huge increase in variation in our process.

At this point, we go to our correlation matrix, which correlates our output parameters to our input parameters in the process.

The correlation matrix will immediately tell you which factor has a strong positive or negative correlation during this phase compared to your normal phase.

That will immediately tell you which input parameter has gone wrong or might have contributed a lot to this fault.

That is how we are able to quickly find the faults and diagnose the issues.
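As a rough illustration of that step, the sketch below runs the Multivariate platform with the phase column as a By variable, giving one input-to-output correlation matrix before the change point and one after. The column names are placeholders, and a :Phase column like the one sketched earlier is assumed to exist.

```jsl
// Sketch: compare input-to-output correlations before vs. after the change point.
// :CTQ_Output, :Input_1, :Input_2, and :Phase are placeholder column names.
dt = Current Data Table();

Multivariate(
	Y( :CTQ_Output, :Input_1, :Input_2 ),
	By( :Phase )  // one correlation matrix per phase
);
```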

From here on, I'm going to demonstrate in JMP how to create these kinds of dashboards and, at the same time, how to detect multiple change points.

As you could see in the previous slides, the change point function gives only one value.

Let me switch over to JMP.

In this example, I'm using the Boston Housing data, which is from 1978.

This is a good beginner example for learning correlation and regression, and many tutorials use it.

So I thought I could use this example, which is readily available in everybody's JMP sample data sets, and see how the correlation looks, whether there are any multiple change points in the data, and, at the same time, correlate them with the factors.

The  sample  data  table  looks  like  this.

In our case, I want to see the median market value, or the median prices, of the owner-occupied homes in Boston and see how it varies with time.

At the same time, there are corresponding factors, like the crime rate in the town, the pollution rate in the town, or other factors such as the age of the homes or the distances in these regions.

You can read about these column names in the column notes that are presented over here.

I'm going to run the script, which gives us…

Before running the script, I also want to tell you that I'm attaching the entire script as part of my presentation, where you could go through the script or run it, and it will give you a nice dashboard about the Boston Housing values variation and also a correlation.

You could download the script, look into my code, and use the same methodology and logic for how I'm able to detect multiple change points from the dashboard using the JSL script.

When I run the script, it runs and I get a nice dashboard like this.

As I mentioned, our problem-solving framework would be to look at the change points in the median value of the homes in Boston.

You could see it gave us the median value, that there is a big change here, and it tells you the change point appears to be at row 373, and there are also several other, smaller changes.
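The launch behind that report could look roughly like the hedged sketch below, which runs the classic Multivariate Control Chart platform on the sample table; the Change Point Detection( 1 ) message is an assumption based on the red-triangle option of the same name, and :mvalue is the median-value column in the JMP sample data.

```jsl
// Sketch: open the sample table and run change point detection on the median
// home value. Change Point Detection( 1 ) is assumed to match the platform's
// red-triangle option of the same name.
dt = Open( "$SAMPLE_DATA/Boston Housing.jmp" );

cpObj = dt << Multivariate Control Chart(
	Y( :mvalue ),
	Change Point Detection( 1 )
);
```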

When you look at the model-driven control chart data, you could see there are high excursions in this median value of the housing market.

You could easily tell what caused this excursion by looking into your input parameters, such as the crime rate.

You could hover on any of these and find out what is impacting these huge excursions in your monitored variable, which is the median value of the houses.

As I mentioned, the real value comes out when we take these and integrate them into our control charts.

See this one: this control chart gives you these integrated change points from here, and it gives you a good visual.

You could see that now, in the Boston Housing market example, we have a baseline of housing median value changes, a baseline value at about 20 (the median values in this table are in thousands of dollars).

And then when the housing boom happened, or when change point 1 occurred, you could see there is a good amount of variation in the housing market, and at the same time, there is a mean shift, too.

But soon there is another change, and the housing market crashes, resulting in a median value below your baseline.

This easily gives us a visual, split into parts: what the baseline data is, when the first change occurred and how much that change is, and when the second change occurred and how much that change is.

You can compare with your previous two changes.

If you are a prospective homebuyer in Boston and you're looking into this kind of data, you want to know what might have happened that impacted these kinds of changes.

In  your  data  set, you  have  some  nice  factors

about  the  crime  zone,

the  crime  rate  in  that  same  area in  the  same  period  of  time,

and  the  pollution  rates,  et cetera.

If you look at our before-change phase, you could see the market value and how it correlates with your factors.

In this case, you can see there's a good amount of negative correlation with the crime rate, and a negative correlation with the industry area and also pollution.

And there is a strong positive correlation, about 0.8, with rooms in this phase.

Now, look at the change point 1 phase, where the housing boom occurred [inaudible 00:22:46] and the mean shifted upwards.

You could see that in this time frame, your crime rate has a low or no correlation with these changes, and your pollution is low again, but your rooms have the same strong positive correlation in these two phases.

If you ask me what would have happened at this time, I would say maybe the crime has increased in the area, or the pollution has increased in the area, dominant factors like those that are visible and that are not favorable for a prospective home.

That carries on to the housing crash when it happened.

Compare this phase to that phase at the change point to see how the correlation changed.

You can immediately tell, "Okay, in my comparison with the median market value from this phase, my correlation with the crime rate is pretty low, my rooms have a very positive correlation, and there are lstat and other factors."

But if you come to the change point 2 time frame, you could see the crime rate has increased again, and your pollution has increased again, which resulted in the negative linear correlation with this change.

And when you look at your rooms, which had been contributing positively to the change, that correlation has fallen off completely.

In this way, this is telling you a lot of information about these changes that occurred in 1978, and also giving you easy correlation data with the factors that might have impacted these changes.

I'll repeat.

Our framework, for any time series data, is: we first detect the number of changes, whether it is one change, two changes, or more changes, and then integrate these change points into our control chart to get a better visualization of the changes.

Is it a mean shift or a variation shift, or what is happening?

And then we look into the factors that might have contributed to these changes and see which factors have a strong contribution to the change, a weak contribution, no contribution, or a negative contribution.

Now that you know the median market value is impacted by the crime rate, pollution, and the rooms, let's look at a simple visualization.

When  you  go  here,  you  can  see this  is  a  simple  graph  plot.

I'm  taking  my  change  point and  visualizing  it  in  a  better  way.

I'm  comparing  my  median  value

with  the  nox,  the  pollution  value in  the  same  towns,

in  the  same  time  period and  the  rooms  and  the  crime  rate.

You could see that when my housing median market values increase, in the housing boom phase, the pollution is very low, the rooms per dwelling have increased, and your crime rate is at its lowest possible state.

As  a  prospective  homebuyer,

these  are  all  favorable  factors  for  me,

no  pollution  area, no  crime  area or safe  area,

and  then  I'm  getting  a  bigger  home with a  convenient  number  of  rooms.

But  let's  see,  in  this  phase, when  the  housing  market  crashed,

at  the  same  time, your  pollution  rate  has  increased.

The  number  of  rooms  for  dwelling has  decreased.

And  then  there's an  astronomically  high  crime  rate

which  is  impacting  the  same  thing.

This is a clear visual, but you landed on this kind of conclusion with the help of your correlation matrices: your correlation matrix, then the change point, and then a better visualization like this.

Let me go and show you how the JSL script is pulling multiple change points.

You could find the change point data in your control charts, the multivariate control charts.

And again, this is a multivariate control chart; you could use as many variables as you want as your multivariate control chart input parameters.

But in my case, I'm using the median market value, and I get a multivariate correlation and a change point.

You can clearly see it gives me 373 as the number.

How I am extracting this 373 number is with this kind of JSL: it can extract any of the text boxes from any of your reports and then put the value into your data table.

In this case, I collected this data point.

You could see that equation 1 holds the phrase from the text box call, "The change point appears to be at row 373."

This one simple piece of code will extract that value from the report.

And then I take that into my columns, into the Change Point 1 column, and I extract the numerical value out of it.

I wrote the JSL as simply as possible; I'm a real beginner with scripting and coding.

You could see I used simple formulas and simple methods to get this.
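A minimal sketch of that extraction is below, assuming the report wording and the Text Box position shown here; both vary by report, so treat the index as something to adjust.

```jsl
// Sketch: run the platform, pull the change-point phrase out of the report,
// and parse the row number. The Text Box index (2) and the exact phrase wording
// are assumptions to verify against your own report tree.
dt = Open( "$SAMPLE_DATA/Boston Housing.jmp" );
cpObj = dt << Multivariate Control Chart( Y( :mvalue ), Change Point Detection( 1 ) );

rep = cpObj << Report;
cpText = rep[Text Box( 2 )] << Get Text;  // e.g. "Change point appears to be at row 373"
cpRow = Num( Regex( cpText, "\d+" ) );    // keep only the numeric row index
Show( cpRow );
```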

You can see the change point report is only giving you one change point value.

But you could see that in your process there are other changes, too, that you might be interested in and want to find.

What I do is, once you get your first value, you hide and exclude everything after your change point.

After this row 373, you exclude everything, and now you're left with this phase.

You want to see what the change is in this phase, and the cycle continues.
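That hide-and-exclude loop can be scripted along these lines; this is only a rough sketch, assuming the first change point was found at row 373 and that the platform ignores excluded rows when it is rerun.

```jsl
// Sketch: exclude (and hide) everything after the first change point,
// then rerun the detection on the remaining rows to look for the next change.
dt = Current Data Table();
cpRow = 373;  // first change point found above

For( i = cpRow + 1, i <= N Rows( dt ), i++,
	Excluded( Row State( i ) ) = 1;
	Hidden( Row State( i ) ) = 1;
);

// Rerun on the remaining (pre-change) rows
cpObj2 = dt << Multivariate Control Chart( Y( :mvalue ), Change Point Detection( 1 ) );
```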

For example, in my equation 2 from my report 2, I'm pulling the text box phrase containing the change point.

Sorry, this is hard to show.

When you hover on this, you can see that the variable has the phrase [inaudible 00:29:49]; the variable holds the phrase that the change point appears at row 157.

You take the value and then put it in a column called CP Text, or something like that, where you could extract your second change point.

Once you get data like that, it is very easy to construct a dashboard like this: you create each individual component, such as the control charts and the model-driven multivariate control chart, and combine them using the new dashboard application.

Once you create a nice dashboard like this, you just go here and save everything into a script.
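Saved to a script, such a dashboard comes out roughly along these lines; the sketch below is hand-written rather than the actual saved script, and the Model Driven Multivariate Control Chart launch (its Process() role) and the :Phase column are assumptions to adapt.

```jsl
// Sketch: assemble the individual reports into one window, similar to what
// the dashboard application saves to a script. Column names are placeholders.
dt = Current Data Table();

New Window( "Change Point Dashboard",
	V List Box(
		H List Box(
			dt << Multivariate Control Chart( Y( :mvalue ), Change Point Detection( 1 ) ),
			dt << Model Driven Multivariate Control Chart( Process( :mvalue ) )
		),
		Control Chart Builder( Variables( Y( :mvalue ), Phase( :Phase ) ) )
	)
);
```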

I'm switching back to my presentation.

I hope the Boston Housing data example is very clear, because it shows how easily you can detect changes in any time series data and correlate those with your input parameters.

In a real-life or manufacturing scenario, you would have several input parameters like this.

In this case, in my dashboard, we look into around 50 input parameters compared with our output parameters.

We also monitor the multivariate control charts.

You  could  see   I'm  monitoring multiple  CTQs  together

instead  of  each  individual traditional  control  chart.

I'm  exporting  these  values into  my  existing  control  chart

and  then  immediately  diagnosing  my  faults.

This dashboard, as you can see, is pretty self-explanatory, easy to read, and easy to deploy on your manufacturing floor.

Even technicians who are not comfortable with statistical analysis and with building control charts and everything will just look at the process and say,

"Okay, I have two changes in my process, so let me see what input parameters are correlating with these changes and see if I can change them or manipulate them so that my process is back to a stable state like this."

Again,  the  correlation  coefficients are  pretty  easy  to  explain  to  anyone

on  the  manufacturing  shop floor.

For example, a strong positive correlation would give you a blue number closer to 1, while a strong negative correlation would be something like -0.7, where the trend looks something like this when you increase something: with a subtle bit of variation, you would see a negative correlation there.

And then a weak negative correlation would be something like this, with a lot of variation instead of a straight line like this.

And then there's no correlation, where the number would be pretty much close to 0.

You  would  see,

even  if  you  increase  one, the  other  might  be  decreasing.

It  is  very  easy  to  explain.

It  is  very  visual on  the  manufacturing  shop  floor

to  find  out  these  kind  of  numbers,

looking  to  see  what  input  has  changed and  what  input  has  a  strong  correlation.

There is an added bonus with the change point integration.

As you can see, these are all numerical data, which get a numerical correlation like this.

But the change point is telling you when the fault occurred, and when you know when the fault occurred, you can just look into your data, such as template changes or your load port IDs, and you will get a correlation with your non-numerical data [inaudible 00:34:10].

You would start a discussion on your manufacturing floor:

"Hey, when this happened, I did this changeover or PM, or the tool was sitting idle, or I did preventative maintenance at this exact time.

Maybe during that preventative maintenance, some issue occurred."

This is giving you a full picture.

You're looking into 50-plus input parameters, easily getting a correlation with your CTQs, and easily detecting the faults.

At the same time, you have non-numerical data such as this, and you find out and relate it to, "What was I doing when this change or fault occurred in my process?"

The final takeaway is that by integrating the change point function into our existing control charts, we're quickly finding faults and diagnosing them faster, saving millions of dollars in our fast-paced manufacturing.

With JSL scripting, we're able to find out that there are multiple change points in our process and integrate them into our control chart dashboards.

These change point and multivariate methods can be used on any time series, as I was mentioning.

For example, when I was preparing for this presentation, I gave a small set of our monthly savings data to my wife, who is not familiar with statistics and control charts.

Upon looking at this dashboard, my wife easily found there are change points in our savings.

But at the same time, she did not like the fact that the dashboard pointed out that our Amazon spending correlates well with our low-savings months.

As you can see, like this, you could use this change point on any time series data.

You could find the changes in your time series, when a change has occurred or when a significant change has occurred, and then you can correlate them to your input parameters.

In our case, we are just correlating Amazon spending, take-out dinners, and several other factors [inaudible 00:36:18].

That's pretty much it, and I hope you found this presentation very useful, and that you also learned how you can easily find multiple change points through JSL scripting and create easy dashboards where your operators can find faults and easily diagnose them.

Thank  you.