
Novel Coatings Development: The Importance of Including Auxiliary Responses - (2023-US-30MP-1457)

The development of innovative new products can be accelerated using statistically optimized DOE and regression modeling. With the goal of maximizing efficiency and reducing expense, it is often tempting to limit the collected data to key product attributes, such as customer specifications or internal quality metrics.  However, increasing the number of available responses by including a wider range of more fundamental measurements in the analysis can often be critical to success.

 

This talk covers examples from projects at PPG’s Coatings Innovation Center. We highlight the use of JMP from design through analysis to visualization of the results for a fractional factorial and a constrained mixture/process design. Using tools such as column switcher, multivariate plots, pairwise correlation and mixture profiler, we demonstrate how the inclusion of ancillary responses helped to develop a deeper understanding of the systems being studied and pinpoint the causes behind unexpected results.

 

 

My  name  is  David  Fenn.

I  work  for  PPG  Industries

at  their  Coatings  Innovation  Center just  outside  of  Pittsburgh.

Today  I'm  going  to  be  talking about  some  of  our  experiences

with  using  DOE for  novel  coatings  development,

particularly  focusing  on  the  importance of  using  auxiliary  responses.

The  agenda,  I'll  talk  a  little  bit about  research  methodology,

and  in  case  you're  wondering what  I  mean  by  auxiliary  responses,

I'll  define  that  in  this  section.

Then  we'll  go on  to  two  examples.

The  first  one,  a  new  resin  development for  architectural  coatings,

and  the  second  one, a  protective  coating,

then  we'll  finish  off with  a  few  general  observations.

There  are  various  frameworks that  can  be  used  to  describe

the  new  product  development  process.

One  that  I  particularly  like is  shown  here,  DMADV.

Here  we  have  five  stages.

The  first  stage  is  to  define the  goals  of  the  project.

What  are  we  trying  to  achieve?

Then  we  get  into  measurement.

What  are  the  critical  characteristics we  need  to  measure,

and  do  we  have  suitable  processes in  place  to  measure  them?

Then we think about analyze.

What  factors  can  we  change to  make  improvements?

Then  onto  the  design  stage,

where  we  deliberately  manipulate those  factors

and  the  levels  of  those  factors

to try and effect an improvement and lead to optimum performance.

Then  once  we  have  an  advanced  prototype, we  get  onto  verification,

thinking  about  will  our  solution  work in  the  real  world?

What  are  the  important  things we  need  to  think  about

when  we  apply  this  framework?

Well, in terms of the define stage, the goal needs to be clear

so  that  the  whole  organization  has the  same  understanding  of  the  goals,

and  it  needs  to  be  impactful.

If  we're  successful and  we  deliver  a  solution,

will  it  fill  a  real  unmet  need in  the  marketplace

and  be  a  successful  product?

If  we  skip  now  to  the  end,

any solution we apply needs to be cost-effective.

It  needs  to  be  robust.

Then  the  middle  of  this  process,

we  want  to  get  through  this  process as  quickly  and  as  efficiently  as  we  can.

We  want  to  deliver  the  product to  the  marketplace  as  soon  as  we  can,

and  we  want  to  expend the  minimum  amount  of  cash

and  the  minimum  amount of  resource  to  do  that.

Clearly,  DOEs  and  a  lot  of  the  tools that  are  available  in  JMP

are  well  set  up to  make  us  succeed  in  this  area.

One  of  the  tools  that  I  like  to  use

particularly  early  on  in  a  project is  a  process  map.

This  is  a  very  particular  type of  process  map.

It's  really  mapping

the  process  of  carrying  out the  research  and  development.

I'm  showing  here a  simplified  example  of  a  process  map

to  develop  an  automotive  base  coat.

We  have  all  the  steps that  are  involved  in  our  experiment.

We  make  a  resin,

we  use  that  resin to  make  a  base  coat  paint,

we  spray  apply  that  base  coat onto  a  substrate,

we  apply  a  top  coat onto  that  base  coat,

then  we  cure  those  coatings  together,

and  then  we  measure  the  properties that  we  get  from  all  of  that.

For all of these steps,

we  list  all  of  the  factors  that  might  play a  role  in  these  separate  steps.

This  is  useful  for  a  number  of  reasons.

First  of  all,

it  gives  everybody  in  the  team a  unified  understanding

of what the process is we're dealing with and how we are going to affect it.

It  also  allows  us  to  capture all  of  the  variables  we  can  think  of

that  might  play  a  role in  the  various  steps

so  we  don't  overlook  anything.

Then  it's  a  good  starting  point

for  thinking  about  which  of  these are  we  going  to  try  and  manipulate,

which  of  these  are  we  going  to  focus  on to  try  and  deliver  a  successful  project?

These  factors  are  further subdivided  and  categorized.

First,  we  have  our  Xs.

These  are  the  variables that  we  can  manipulate

to try and effect an improvement in our product or our process.

Then  we  have  our  big  Ys.

These  probably  appear in  the  specification  of  the  product.

These  are  what we're  really  trying  to  achieve.

This  is  what  the  customer really  cares  about,

what  the  customer  will  pay  for.

Next,  we  have  our  Ns, noise  variables.

These could be variables that we may not be controlling,

we're  not  deliberately  manipulating,

but  things  that  could  introduce noise  into  the  process,

either  during  the  experiments, during  the  new  product  development,

or  in  the  end  application,

in  the  manufacture  of  the  product or  the  end  use  of  the  product.

Then  finally, the  subject  of  today's  talk,

we  have  our  auxiliary  responses, which  we  label  as  little  Ys.

These  might  not  appear in  the  specification,

the  customer  might  not  even be  aware  of  these,

but  they're  measurements  we  can  take at  various  stages  of  the  process

that  might  tell  us  something about  what's  going  on.

I said on the previous slide that one of our goals

is  to  get  through this  whole  process  quickly,

and as efficiently as we possibly can.

One  question  that  raises  is, why  don't  we  just  measure  our  big  Ys?

We have the ability to carry out DOEs.

We  could  optimize  for  our   big Ys, we  could  build  predictive  models.

Isn't  that  all  we  need  to  do?

Why  should  we  spend  time?

Why  should  we  spend  money measuring  some  of  these   little Ys

when they're not the real goal or outcome?

Well,  I  hope  in  the  next  couple of  examples  that  I  can  show  you,

some  cases  where  carefully  selecting these  little  Ys

and  doing  some  good  analysis

can  be  really  critical to  the  success  of  a  project.

Our  first  example  here,

the  development  of  a  new  resin for  architectural  coatings.

The  goal  was  to  come  up with  a  single  resin  that  could  meet

all  of  the  performance  requirements

across  several  product  lines in  several  countries.

Our  starting  point  was,

we  had  no  single  resin that  could  meet  all  those  requirements.

We  were  using  different  resins in  different  products,

different  resins  in  different  countries, and  we  needed  to  come  up  with  a  solution

that  allowed  us  to  reduce the  complexity  there.

Our  early  prototype  struggled in  a  number  of  areas,

but  one  particular  area was  tint  strength.

The  way  these  white  base  paints would  be  used

is  if  I  go  into  a  store  and  request

a  paint  of  a  particular  color to  paint  the  walls  of  my  house,

the  store  will  take  that  white  base  paint and  add  specified  amounts

of  concentrated  color  toners  to  that  paint to  create  a  specific  color.

It's  really  critical  to  be  able to  hit  a  target tint  strength,

which  is  the  measurement of  how  quickly  that  color  will  change

as  we  add  a  certain  amount of  a  particular  toner.

We  need  to  be  able  to  control  that and  hit  it  reproducibly

to  achieve  the  wide  spectrum of  colors  we  need  to  achieve.

We  also  had  a  few  issues in  terms  of  poor  heat  age  stability

and  poor  resin  reproducibility.

Our  approach  was  to  carry  out some  sequential  DOE's

to  learn  how  to  control  tint  strengths and  some  of  the  other  factors.

I'm  showing  the  progress  on  this  plot  at the  bottom  left-hand  side  of  this  screen.

Before we started the DOEs, with just some exploratory experiments,

the  orange  bar  represents  the  range of  tint  strengths  we  were  able  to  achieve.

We  can  see  that  is  far  below

the  target  range  of  tint  strengths  shown by  this  green  bar  on  the  plot.

As we carried out the DOEs, we learned how to control tint strength.

We  were  able  to  increase  it until  towards  the  end  of  the  project

when  we  were  doing our  optimization  DOE's,

we  were  nicely  centered around  this  target  tint  strength.

We  were  able  to  build  predictive  models and  use  those

in  conjunction  with  predictive  models for  some  of  the  other  key  properties

to  identify  white  space  where  we  met  all of  the  target  properties  at  the  same  time.

But  rather  than  talk about  the  whole  project,

I  want  to  now  focus on  one  particular  DOE  that  we  carried  out.

The  goal  of  this  DOE  was to  confirm  and  quantify

something  we'd  observed  previously,

that  the  particle  size of  the  resin  we  were  making

was  a  big  factor in  controlling  tint  strength.

These  resins  are,  in  effect,

dispersions  of  little  particles of  resin  in  water,

and  it  was  the  size  of  those  particles that  seemed  to  be  important.

We  were  also  using

what  we  call  a  co-surfactant to  help  disperse  those  particles,

and  we  had  a  few  choices

about  where  in  the  process we  could  add  that  co-surfactant.

We  wanted  to  look at  a  couple  of  candidates

for  the  addition  point of  that  co-surfactant

to  see  if  it  affected  the  key  properties.

Then finally, up until this point, all the resins we'd made,

we'd made at the Coatings Innovation Center.

We  now  wanted  to  check,

could  we  make  these  resins  reproducibly across  three  different  locations?

The  DOE  we  carried  out  is  shown on  the  right-hand  side  here.

We  have  three  levels for  our  target  particle  size.

We  have  two  levels  for  the  addition  point of  the  co-surfactant.

That  gives  us a  full  factorial  DOE  with  six  runs.

Then  we  replicated  that  DOE across  three  different  laboratories.
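
For anyone who wants to see the shape of that design outside of JMP (which is what we actually used), a minimal sketch of the enumeration is below; the level names are placeholders for illustration only.

```python
# Illustrative only: enumerate a 3 x 2 full factorial (target particle size x
# co-surfactant addition point) and replicate it across the three reactor
# locations. The real design table was built in JMP; level names are placeholders.
from itertools import product

import pandas as pd

particle_sizes = ["low", "mid", "high"]    # three target particle-size levels (placeholder names)
addition_points = ["point 1", "point 2"]   # two co-surfactant addition points (placeholder names)
labs = ["Lab A", "Lab B", "Lab C"]         # three reactor locations

design = pd.DataFrame(
    [
        {"Reactor Location": lab,
         "Target Particle Size": size,
         "Co-surfactant Addition": point}
        for lab, size, point in product(labs, particle_sizes, addition_points)
    ]
)
print(design.shape[0])  # 6 factorial runs x 3 labs = 18 runs
```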

I'll  go  straight  into  JMP

and  I'll  show  you what  the  data  table  looks  like.

You  can  see  here  we  have the  original  data  table,  the  DOE,

but  now  we  have

a  whole  collection  of  data that  we  gathered  during  the  DOE.

The  first  thing  we'll  do  is,

we'll  look  at  what  we  learned about  tint  strength.

I've  already  built  here a  reduced  model  for  tint  strength.

If  we  have  a  look at  the  effect  summary  to  start  with,

we  can  see  that  the  location of  addition  of  the  co-surfactant

wasn't  a  factor in  determining  tint  strength.

That  dropped  out  of  the  model.

But  we  do  see that  the  target  particle  size

and  the  reactor  location  were  factors, as  well  as  the  interaction

between  target  particle  size and  reactor  location.

If  we  look  up  at our  actual  by predicted  plot,

we  can  see  it  looks like  a  pretty  nice  model.

We've  got  a  nice  R-square,

and  everything  looks  to  be in  pretty  good  shape.
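
As an aside, a least-squares analog of that reduced model could be sketched outside JMP roughly as follows; the file and column names are assumptions about an exported data table, not the actual analysis, which was done in JMP's Fit Model platform.

```python
# A minimal analog of the reduced model: tint strength as a function of target
# particle size, reactor location, and their interaction, with the co-surfactant
# addition point already dropped as non-significant. Column names are assumed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("resin_doe.csv")  # hypothetical export of the DOE data table

fit = smf.ols(
    "tint_strength ~ target_particle_size * C(reactor_location)",
    data=df,
).fit()

print(fit.rsquared)   # compare with the R-square on the actual-by-predicted plot
print(fit.summary())  # main effects plus the size-by-location interaction
```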

Then  probably  the  best  way

of  understanding  what's  happening and  what  this  model  is  telling  us

is  to  look  at  the  prediction  profiler here  at  the  bottom.

We  see  our  anticipated  effect of  target  particle  size  on  tint  strength.

As  we  increase  target  particle  size, we  get  higher  tint  strength.

Then  if  we  look  across at  reactor  location,

what  we  see  is  that   Lab A  and  Lab C are  giving  broadly  similar  results.

But  if  we  look  at   Lab B, first  of  all,

we  see  that  the  tint  strength that  we  get  from   Lab B

is  significantly  higher than  we  were  getting  from   Lab A  or   Lab C.

We  also  see that  the  dependence  on  particle  size

is  much  less  from   Lab B than  we  saw  from  the  other  two  labs.

This  was  a  problem  for  us.

Whenever  we  see  that  different  labs are  producing  different  results

with  the  same  resin and the  same  process,

it  can  be  a  really  long  task to  work  out  what's  going  on  here.

There's  so  many  potential  candidates

for  the  cause of  this  poor  reproducibility.

At  this  stage, we  were  very  concerned

that  it  was  going  to  take  us a  long  time  to  resolve  this,

that it  was  going  to  derail  the  project,

and that we were going to miss our target launch dates.

Before  we  went  into  any  specific  activity to  try  and  address  this  problem,

the  obvious  first  step  was  to  look

at  the  data  that  we'd  already  gathered in  this  data  table

and  see  if  there  were  any  clues that  could  maybe  give  us  a  hint

as  to  why   Lab B was  giving  different  properties.

Whenever  I  see a  wide  data  table  like  we've  got  here,

one  of  the  first  tools  that  I  always  go  to is  the  column  switcher.

The  way  in  this  case that  I  will  implement  this

is first to build a variability chart

that  best  shows the  problem  that  we're  having.

I've  pre-built  a  variability  chart  here

where  I've  got  target  particle  size and  reactor  location  as  my  X-axis

and  I've  got  the  initial  tint  strength as  my  Y-axis.

The  first  task  is  to  get  this

into a format that best represents the problem we're dealing with.

The  first  thing  I'll  do

is to swap over my target particle size and reactor location.

I'll also show and connect the cell means to add some lines here.

Now  I'm  pretty  happy  with  this.

I  think  this  nicely  reflects the  problem  that  we're  dealing  with.

We  can  see   Lab A  and   Lab C very  similar  results,

but   Lab B,  higher  tint  strength

and less dependence of tint strength on particle size.

Now  I  can  use  my  column  switcher,

and  what  this  will  allow  me  to  do is  keep  this  plot  in  exactly  this  format,

but  quickly  switch  out  this  Y-axis, the  initial  tint  strength,

for  any  other  variable that  I've  got  in  my  data  table.

I'll  go  into  the  redo  platform and  select  the  column  switcher.

Now  I  can  select any  of  the  other  factors  in  my  data  table.

I'm  just  going  to  select  everything that  I've  got  in  my  data  table.

Then  when  I  hit  OK,

I  now  have  this  column  switcher to  the  left  of  my  plot.

I  can  click  on  any  of  these  factors

and  it  will  change  this  axis  but  keep the  plot  in  exactly  the  same  format.
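
To mimic that workflow outside of JMP, a simple loop over the response columns does something similar; the file name and column names below are assumptions for illustration.

```python
# Rough analog of the column-switcher idea: hold the X grouping fixed and cycle
# the Y axis through every measured response, plotting cell means so that any
# lab that separates from the others stands out. File and column names are assumed.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("resin_doe.csv")
responses = ["particle_size", "conductivity", "pH_1h", "tint_strength"]  # assumed response columns

for col in responses:
    cell_means = (
        df.groupby(["target_particle_size", "reactor_location"])[col]
        .mean()
        .unstack("reactor_location")   # one column (one line) per lab
    )
    cell_means.plot(marker="o", title=col)
    plt.xlabel("Target particle size")
    plt.ylabel(col)
    plt.show()
```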

If  I  select  particle  size,

I  can  see  now  I'm  plotting my  actual  measured  particle  size

against  target  particle  size and  reactor  location,

exactly  the  same  format.

It  looks  like  in  this  case,

all  three  labs  are  giving pretty  similar  results.

I'm  not  seeing  anything  that  gives  me a  clue  as  to  what's  going  on,

but  I  can  quickly  just  scroll through  this  whole  data  set.

I'm  seeing  mostly  noise  right  now.

I'm  not  seeing  any  patterns that  seem  to  be  particularly  helpful,

but  I'll  keep  going.

When  I  get  to  this  plot  here, so  now  I'm  plotting  conductivity,

I  see  interestingly that   Lab B  is  making  resins

with  much  higher  conductivity than  Lab A  and   Lab C.

That's  one  useful  observation.

I'll  keep  going.

This  next  one,

this  is  actually  another measurement  of  conductivity

after  the  resin  has  been in  a  hot  room  for  a  week,

showing  the  same  thing,

still  confirming  that   Lab B is  giving  higher  conductivity.

I'll  keep  going.

Mostly  noise, maybe  a  little  bit  of  an  indication

that  the  molecular  weight from   Lab B  is  slightly  lower.

I'll  keep  going.

Again,  still  not  seeing  anything that  interesting,  mostly  noise.

But  then  I  get  to  this  plot  here, and  again,

now  we're  plotting  the  pH of  the  resins  one  hour  into  the  process,

so  early  into  the  process, the  acidity  or  pH  of  the  resin.

Lab B,  again, is  different  from   Lab A  and   Lab C.

It's  giving  me  much  higher  pH.

Keep  going  just  to  check if  there's  anything  else.

This  was  the  initial  plot we  started  with  of  initial  tint  strength,

and  then  the  last  one is  the  paint  viscosity,

where  everything  looks  pretty  similar.

Really  quickly  using  column  switcher,

I  found  out  that  not  only  is   Lab B  making resins  with  higher  tint  strength,

it's  making  resins with  higher  conductivity

and higher pH.

What  could  that  be  telling  us?

What  might  be  causing  higher  pH and  higher  conductivity?

Well,  these  resins,  I  said,

were  a  dispersion of  a  polymer  particle  in  water.

Anything that's changing the conductivity or the pH is going to be in the water phase.

It's  not  going  to  be  in  the  resin  phase.

What we did was precipitate out the resin by centrifuge

and just analyze the water phase.

We  carried  out  a  lot  of  analysis, but  one  of  the  things  we  worked  on,

I'm showing on this plot on the right-hand side the PPMs,

parts per million, of phosphorus and sulfur

in  that  water  phase.

If  I  look  at  the  orange  bars to  start  with,  the  sulfur,

I can see all of the resins from all three labs are very similar,

but  the  blue  bars, the  level  of  phosphorus,

Lab  B  is  making  resins with  about  four  times  as  much  phosphorus

as  they  were  making from   Lab A  and   Lab C.

When  we  looked  at  the  recipe for  making  these  resins,

there's  only  one  raw  material  that  brings  in  phosphorus.

On  a  bit  of  further  investigation,

what  we  found  out  was  the  supplier that  was  delivering  this  material  to   Lab B

was  mistakenly  supplying  something

that  was  four  times  as  concentrated as  it  should  have  been,

and  four  times  as  concentrated as  they  were  providing  to   Lab A  and   Lab C.

By looking at the auxiliary data from this DOE with the column switcher,

we  were  able  to  really  quickly  pinpoint the  cause  of  that  problem.

We  didn't  have  to  expend time  to  get  there.

The  project  stayed  on  track, and  there  was  even  a  bonus.

We  learned  that  increasing  the  level of  this  material  with  the  phosphorus

was  another  tool  we  had to  increase  the  tint  strength.

We  would  have  probably  never been  aware  of  that

if we hadn't carried out this analysis and had this happy accident.

That's  the  first  example.

We  go  on  to  the  second  example  now.

In  this  case, we're  dealing  with  a  protective  coating,

a  coating  that's  designed  to  go over  metal  substrates  like  iron  and  steel

and protect them from corrosion.

We  have  five  experimental  resins that  we  want  to  look  at,

a resin that's designed to give good corrosion performance,

and  then  four  resins

that  are  designed  to  improve the  flexibility  of  the  coating.

The  first  three  of  these  resins

are  added  in  the  first  stage of  the  coating  preparation,

and  then  the  last  two  get  added in  a  separate  later  step.

We  have  two  questions  here we're  trying  to  answer.

How  do  the  resins  affect  corrosion and  affect  flexibility,

and what  is  the  best  combination of  the  levels  of  these  resins

to  give  us  the  best  combination of  corrosion  and  flexibility?

Again,  we  use  DOE, we  were  able  to  build  predictive  models,

and  here  we  were  using the  mixture  profiler

to  identify  some  white  space that  we  can  work  in.

This  DOE  is  a  little  bit  more  complicated than  the  first  one,

so  I'm  trying  to  represent  pictorially what  we  were  dealing  with  here.

If  we  look  at  our  first  stage of  our  coating  manufacture,

in  addition  to  our  three experimental  resins,

we  have  a  main  resin  at  a  fixed  level.

In  effect,  our  three  experimental  resins are  three  mixture  variables  here

because they form the rest of this 100%.

They add up to a constant sum of 57.77%.

Three mixture variables we're dealing with there.

Then  in  stage  two,

we  can  deal  with  our  other  two  resins as  independent  variables

because  they're  not  part  of  that  mixture.

We  have  three  mixture  variables, two  independent  variables.

We  also  have  some  levels

that the formulators decided they wanted to work in

based  on  prior  experience for  all  of  these  resins.

Then  we  have  some  constraints on  the  combinations  we're  dealing  with.

For  example,  at  the  start  here,

we  want  the  sum  of  Flex2  and  Flex3 to  be  more  than  10%  but  less  than  30%.

There  are  some  other  constraints  as  well.
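
As a small sketch of that constraint logic in code: the 57.77% mixture total and the Flex2 + Flex3 bound come from the slide, while the resin names, the example levels, and which resins make up the stage-one mixture are assumptions for illustration; the real design was built with JMP's Custom Design platform.

```python
# Illustrative constraint check for a candidate formulation. The 57.77% mixture
# total and the 10% < Flex2 + Flex3 < 30% bound come from the talk; everything
# else (names, example levels, mixture membership) is assumed.
def feasible(levels, mixture_resins, mixture_total=57.77):
    """levels: dict of resin name -> weight %; mixture_resins: the three stage-one mixture resins."""
    mixture_ok = abs(sum(levels[r] for r in mixture_resins) - mixture_total) < 1e-6
    flex_ok = 10.0 < levels["Flex2"] + levels["Flex3"] < 30.0
    return mixture_ok and flex_ok

candidate = {"Corr": 35.77, "Flex1": 12.00, "Flex2": 10.00, "Flex3": 8.00, "Flex4": 5.00}
print(feasible(candidate, mixture_resins=["Corr", "Flex1", "Flex2"]))  # True for this example
```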

A  fairly  complicated  DOE, but  using  custom  design,

it's  relatively  straightforward to  build  this  DOE.

Definitely  some  tips  and  tricks in  terms  of  how  to  build  the  DOE,

what  model  to  use, and  how  to  analyze  that  data.

I  don't  have  time to  go  through  that  today,

but  I'd  be  perfectly  happy

to  talk  about  that  offline if  anybody's  interested.

But  let's  go  straight  into   JMP and  we'll  look  at  this  example.

Here  we  have  the  DOE  that  we  carried  out.

It  was  a  16-run  DOE.

If  we  go  right  across to  the  left-hand  side,

we have our three mixture variables and our two process variables.

We've  measured our  flexibility  and  corrosion

and  then  we  have  a  lot  of  other auxiliary  responses  we've  measured.

I  was  able  to  build  good  predictive  models for  flexibility  and  corrosion.

What  I'm  going  to  do  is  just  show  you those  models  in  the  profiler

just  to  help  us  understand what  we're  learning  and  what's  going  on.

I'll  add  those  two  predictive  models that  I  built  to  my  profiler,

and  then  I  get  my  profiler  here.

I  can  see,  first  of  all,

I'm  plotting  flexibility and  corrosion  here.

Lower  numbers  are  better for  both  of  these  responses.

Lower  numbers  for  flexibility,

lower  numbers  for  corrosion are  what  we're  targeting.

I  can  see  as  I  add  my  corrosion  resin, if  I  increase  the  level,

I  get  better  corrosion  performance,

but  unfortunately, I  get  worse  flexibility.

The opposite is true for most of these flexibilising resins.

As  I  add  more  of  these,

I'm  getting  better  flexibility, but  worse  corrosion.

This  is  something  that's  very  common

in  coatings  development and  lots  of  other  areas.

Seems  like  there's  always a  pair  of  properties

where  if  we  improve  one  of  them, we  always  make  the  other  one  worse.

But if I come across to my Flexibiliser 4 resin,

something  really  interesting  here,

as I add more of this resin, I get better flexibility,

but  I  don't  suffer  at  all in  terms  of  corrosion.

This  is  going  to  be  a  really  useful  tool

for  us  to  optimize  the  combination of  flexibility  and  corrosion.

But  I'd  like  to  understand  a  bit  more about  the  science  behind  this.

What's  happening?

What's  unusual  about  Flex4

that  allows  us  to  improve  our  flexibility without  degrading  corrosion?

Again,  I  want  to  use all  of  this  auxiliary  data

that  I've  gathered  in  my  data  table to  help  me  understand  that.

What I'm going to do is look through this table,

and  I'm  going  to  use a  different  tool  this  time.

I'm  going  to  use  multivariate.

If  I  select  that,

this  allows  me  to  basically  look at  the  correlation

between all the combinations of factors that are in my data table.

I'll  select  everything  that  I  measured

and  I'll  add  it  in  the  Y  columns and  just  hit  OK.

This  generates  my  multivariate.

The  first  thing  I  see  is  this  table  here where  I've  got  all  the  correlations

for all the pairs of factors that are in my table.

I  can  see  there  are some  pretty  nice  correlations  here.

I'm  seeing some  fairly  strong  correlations,

but  it's  a  little  bit  difficult to  go  through  all  this,

a  bit  overwhelming  to  go  through  all  this and  pick  out  any  interesting  patterns.

I've  also  got  my  scatter  plot  here,

and  if  I  add  a  fit  line to  these  scatter  plots,

again,  I'm  seeing some  fairly  strong  correlations,

but  still  I  think  this  is a  bit  overwhelming  to  dive  straight  into.

The  tool  that  I  like  to  use  to  start with  here  is  pairwise  correlations.

If  I  select  that, this  generates  a  new  table  where  I've  got

all  the  possible  pairs  of  variables and  it's  giving  me  the  correlation.

I  can  sort  this  table  based  on  any  column.

I'm  going  to  sort by  the  significant  probability

and I'll make it ascending because I want

my  low  significant  probabilities to  be  at  the  top  of  my  table.
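
For anyone working outside of JMP, that sorted pairwise table can be approximated with scipy; the file name and column contents below are assumptions for illustration.

```python
# Approximate analog of JMP's pairwise correlations report: Pearson correlation
# and its p-value for every pair of measured columns, sorted so the smallest
# significance probabilities come first. File and column contents are assumed.
from itertools import combinations

import pandas as pd
from scipy import stats

df = pd.read_csv("protective_coating_doe.csv")
numeric = df.select_dtypes("number")

rows = []
for a, b in combinations(numeric.columns, 2):
    pair = numeric[[a, b]].dropna()
    r, p = stats.pearsonr(pair[a], pair[b])
    rows.append({"Variable": a, "by Variable": b, "Correlation": r, "Signif Prob": p})

pairs = pd.DataFrame(rows).sort_values("Signif Prob")  # ascending, like the sorted JMP table
print(pairs.head(10))
```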

Then  if  I  hit  OK,  I  can  see  the  first and  strongest  correlation  I  get,

in  fact,  involves this  Flexibiliser  Resin  4

that  was  giving  us this  interesting  behavior.

I can see a strong correlation with the secondary TG, TG2.

This  is  a  glass  transition  temperature.

The  glass  transition  temperature  is a  temperature  at  which  a  coating  changes

from  being  a  glassy  hard  material to  a  soft  rubbery  flexible  material.

My Flex4 level is correlating here with

a  secondary  glass  transition  temperature that  I'm  measuring.

And  I  can  see  also if  I  go  a  little  bit  further  down,

my  primary  glass  transition  temperature,

the  main  glass  transition  correlates strongly  with  the  corrosion.

Scientifically, I think they're interesting observations.

What  I  did  based  on  that is  I  also  built  predictive  models

for  my  primary  TG  and  for  my  secondary  TG.

Now  I  can  look  at  my  profiler,

but  I  can  include  all of  my  four  predictive  models.

Now  I'll  include  the  two  I  did  before, flexibility  and  corrosion,

but  also  my  primary  TG  and  secondary  TG.

Now  what  I  can  see is  that  the  first  two  rows

are  exactly  what  we were  looking  at  before.

If  I  look  at  my  primary  TG,

I can see what happens whatever I do in terms of adding resin.

For  example,  if  I  add more  of  my  corrosion  resin,

I'm  increasing  my  primary  TG,

and  that's  correlating with  an  improvement  in  corrosion.

The  flexibilising  resins, if  I  add  more  of  those,

I'm  decreasing  my  primary  TG and  making  my  corrosion  worse.

That primary TG does seem, as the multivariate is showing,

to correlate very well with corrosion.

If  I  look  at  my  Flex4  resin, it  was  having  no  effect  on  corrosion

and  it's  having  no  effect on  my  primary  TG,

so it's different from my other flexibilising resins,

but  I  can  see  for  my  secondary  TG,

as  I  add  more  of  my  Flex4, it's  rapidly  decreasing  the  secondary  TG.

The  other  resins  really  don't  have much  effect  on  secondary  TG.

What  does  that  mean?

What  can  I  learn  from  that?

Well,  any  material  that  has  multiple  TGs, glass  transition  temperatures,

it's usually a sign that it's a multi-phase material.

It's  not  a  homogeneous  material.

That  was  the  case  here when  we  did  some  microscopy.

What  we  saw  was  our  coating  had

a  continuous  phase  shown by  this  gray  material  here,

but it had a secondary phase dispersed within it.

The  primary  glass  transition  temperature

was  correlating with  that  primary  continuous  phase

and  the  secondary lower  glass  transition  temperature

was  correlating  to  this  secondary  phase that  we  have  here.

We  had  a  hard  glassy  primary  phase and  then  a  soft  rubbery  secondary  phase.

Why  that's  important  is

usually  high  glass  transition  temperature does  lead  to  better  corrosion

because  it  inhibits  the  diffusion of  anything  through  this  layer

and  stops  material  getting to  the  substrate,

the  metal  substrate, and  causing  corrosion.

Usually,  if  I  want to  make  flexibility  better,

I  have  to  make this  continuous  layer  softer

and  that  degrades  corrosion.

But  with  this  type  of  morphology,

I  was  able  to  keep my  hard  continuous  phase

and  gain  flexibility  through a  separate  dispersed  rubbery  phase.

This  meant  that  anything  that  wanted to  diffuse  through  the  coating

and  cause  corrosion  was  always  having to  diffuse  through  this  high  TG  area.

It's  given  me  the  combination

of  good  corrosion and  good  flexibility  together.

The auxiliary data that I gathered, and the analysis of that data,

were really responsible for the learning

of what was going on in this system.

In  conclusion,

it's  definitely  possible to  carry  out  successful  DOEs

where  we  only  measure the  critical  responses,  the   big Ys.

But  I  hope  I've  shown  that  including carefully  selected  auxiliary  responses,

little Ys, can often be really valuable: it can bring clarity to unexpected results

and can help us to build scientific knowledge.

I  hope  I've  also  shown  that  JMP  provides some  tools  that  really  help  us  with  this.

I've  shown  a  couple,  but  there  are many  more  that  are  available.

I'd  finally  like  to  finish  off by  thanking  the  many  associates

at PPG's Coatings Innovation Center who contributed to this work.