OSIsoft PI Connector in JMP® 17: Extending Easy Buttons for Data Access (2023-EU-30MP-1246)

Hadley Myers, JMP Systems Engineer, SAS
Peter Hersh, JMP Senior Systems Engineer, JMP

 

At the 2021 JMP Discovery Summit Americas, we presented a method for creating an “easy button” for data access: combining, cleaning, filtering, visualizing, and analyzing data, and generating new data. The use of the singular “button” is not a typo. It takes only one button to perform any combination (or all) of these steps, saving time and allowing problems to be diagnosed earlier. Informed answers to the questions that lead to the best possible outcomes can be reached faster, ultimately saving costs and speeding products to market. We now use the new OSIsoft PI connector in JMP 17 to extend the sources of data that can be quickly and effortlessly imported and put through everything listed above, with just the push of a button.

 

 

Hi, thanks for finding our talk today. Hadley and I are going to be talking about making an easy button for data access. Now, this is a talk that we gave previously at an earlier Discovery Summit, and we're going to be talking about how you can extend this capability with our new OSIsoft PI connector. Hadley, take it away.

Yes, that is absolutely right. Before I move into what we're going to be showing you today and what you can use yourself, I'd like to introduce those of you who aren't familiar with the JMP Analytic Workflow. Everyone watching this talk likely already knows that JMP contains all of the analytic capabilities necessary to take any data that you have, in any raw format, and transform it into insight that can then be shared throughout an organization.

What we are going to be focusing on today is the data access and the data blending and cleanup aspects of the analytic workflow. Why are these important? Well, any problem-solving effort begins by collecting and compiling the data. One big problem is that this can often be time-consuming and tedious, especially for scientists and engineers who have a background in this stuff. What this effectively means is that it's often not done, or not done in a timely enough manner. So problems can go unnoticed and, therefore, aren't solved.

The other problem is that data can be found in many different places, and it's an effort to grab all of it, put it in the right format, and compile it together. A solution is an easy button for quick access to data wherever it is. What we have got, and what we are going to show you, is a simple interface built using Application Builder: a simplified, stripped-down option allowing people to press a button and get data from exactly where they need it, in the format that they need, to be able to solve their problems.

They can pick a data source and filter what is needed if necessary, even combining multiple sources and automating this. As Pete mentioned, we're going to be building on a tool that we had previously shown, which used SQL, web APIs, and even manual entries, as well as combining data from other sources. Where have we shown this before? We've shown this in a previous Discovery Talk, so those of you watching this can look at the past Discovery presentations and check those out if you like.

What we are going to do now is take it one step further from where we were back in 2022, and that is to extend it to data contained in OSIsoft PI servers. We're going to be making use of two features that were introduced in JMP 17: the Connect to PI Server capability as well as the OSIsoft PI Wizard. With that, I'll turn things over to Pete to demonstrate that functionality.

Thank you, Hadley. Let me share my screen here. I'm going to launch that PI importer; you'll find it in the same place you'll find all of the database connectors. Just like we would do for SQL, you'll go under File, Database, and Import from OSIsoft PI. You enter the name of your PI Server and your authentication method, and hit OK. Then it gives you this nice interface here, and you can browse to what you're interested in and pick out a couple of attributes or tags that you want.

Let's just pick one for now. Then I can select what my start time is. I'm going to go back a little bit in time and shorten this query a bit so it goes a little quicker. Once you're ready, you can hit Import. This is a big improvement over what you had to do before, which involved a fair amount of scripting. But the nice thing here is that once I've imported this, everything that I need to pull it up again is captured right here in the source script. So if I hit Edit, you can see that everything that needed to be passed into that PI data source is right here. Hadley is going to take this now and start to make our easy button. I'm going to stop sharing and pass it back to Hadley.
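That saved source script can also be retrieved programmatically, which is the hook for what Hadley does next. A minimal JSL sketch, assuming the imported table is named "IA" and that the importer stores its script as the standard "Source" table property:

```jsl
// Grab the table the OSIsoft PI importer just created
// (the table name "IA" is assumed from the demo).
dtIA = Data Table( "IA" );

// The importer saves everything needed to repeat the pull as the
// table's "Source" script; Get Property returns that stored expression.
srcExpr = dtIA << Get Property( "Source" );

// Print the captured script so it can be copied into an add-in.
Show( srcExpr );
```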

All right, thanks very much, Pete. I'm going to go ahead and share my screen once again. What we are going to do is take that script that Pete just generated using the OSIsoft PI Import Wizard in JMP 17 and turn it into a simple add-in that literally anybody could use to select whatever tags they need and then grab that data. If you know what server the data is coming from, you know what the configuration is, and you're always grabbing the same data in exactly the same way, with the only thing that might change being the tags, then this may be a stripped-down, simplified solution that anybody could use.

Of course, if you had other things that you wanted to filter on, like timelines and so on, that's easy to include as well. And if you wanted to take this a step further and combine these data sources, and maybe do some automation or automated analysis on them, that's an easy step from there, and Pete will show you how to do that a little bit later.

But what I'm going to do is take the source script that was used to generate this data, copy it, and paste it into a JMP script. Now when I run the script, it goes back and collects the data from this tag, IA, right there. It could very well be that there are multiple tags that you would like, rather than just one. Maybe you have a list of tags that you need.

What I'm going to do first is define a tag list, which may contain the tag IA as well as IB and IC, and as many more as I feel like including. This would be a good option if you were always getting the same tags every time and didn't need to select them; they're always the same.

Here they are. What I'm going to do is run this for each one of the tags in this tag list. To do that, I'm going to make use of another relatively recent addition to JMP. I think it was added in 15 or 16, I'm not quite sure, but that's the For Each function. For each tag in my tag list, run this. My tag is going to be here. Instead of running IA, we're going to just concatenate the tag into the script and then go ahead and run that.
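As a rough JSL sketch of that loop: the tag names and the importOneTag wrapper below are assumptions; the wrapper stands in for the wizard-generated source script with the hard-coded tag swapped out for the function argument.

```jsl
// List of PI tags to pull (tag names assumed from the demo).
tagList = {"IA", "IB", "IC"};

// Hypothetical wrapper around the wizard-generated source script;
// the real body is whatever the wizard wrote, with the hard-coded
// tag name replaced by the tag argument.
importOneTag = Function( {tag},
	Print( "Importing tag: " || tag );
	// ...paste the wizard-generated import script here, using tag...
);

// For Each (a relatively recent JSL addition) runs the import once per tag.
For Each( {tag}, tagList,
	importOneTag( tag )
);
```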

Excuse me. There we have it. It really is just that simple. It'll take a few seconds, but there we've got our tags. That's a good solution if you were always pulling the same tags. But if you wanted to take this functionality and extend it a bit to allow a user to select some tags using these configurations, I'm going to take this code and set it up in Application Builder.

Now, rather than hard-coding a list, I'm going to ask the user to make a selection from the list here. We'll just add a few tags to that. Let's add tags IA, IB, IC, and maybe one more, kilowatt A. There they are. Now we'll just add a button that the user can press to grab whatever they've selected in the list and then get the tags. Button 1 is a good variable name, but we need a bit of a better descriptor so the user knows what to do. There we have it. When we press this button, we are going to have it run the script that we just wrote. Of course, instead of getting our tags from this tag list, we are going to have it grab whatever the user selected from our List 1 list box.
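A minimal stand-alone JSL sketch of that interface (outside Application Builder) might look like the following; the tag names and the importOneTag wrapper from the earlier sketch are assumptions rather than the exact add-in code.

```jsl
// Simple window that mimics the add-in: pick tags, press the button.
nw = New Window( "Easy Button: PI Tags",
	Panel Box( "Select tags to import",
		V List Box(
			// Tag names assumed from the demo.
			tagListBox = List Box( {"IA", "IB", "IC", "kW A"} ),
			Button Box( "Get Tags",
				// Loop over whatever the user selected and import each tag,
				// reusing the hypothetical importOneTag wrapper from before.
				For Each( {tag}, tagListBox << Get Selected,
					importOneTag( tag )
				)
			)
		)
	)
);
```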

Can it really be that simple? Yes, it can, and yes, it is. Of course, if we wanted to extend this functionality, at this point the sky, our imagination, and our needs are the limit. At this point, I will pass things back to Pete to show you how you can go ahead and do that.

Wow, Hadley, that really does look easy. Very nicely done. Why don't I share where we went from here? Actually, let me share this screen here. All I did was take what Hadley had shown and add a few more tags. The next thought is, "Hey, that's great that the PI importer is bringing in these tables individually, but what happens if I want to bring them together?" Let me just show what this does first. I'll do that data pull. This is what Hadley showed. Then I'm going to do the next step, which is a data compile.

Now, this takes advantage of the Workflow Builder, and we'll go ahead and walk through and actually build this, since it's really easy to do. I'm going to just pick a couple of things to compile here. There you go. I'll show you how all of this was done. Basically, what JMP has done is it went through and grabbed a bunch of those data tables, concatenated them, and then split them apart, and all of these steps are here. So it concatenated those data tables, it split them apart, then it recoded those column names, and finally, it made just a simple report.

So let's walk through how we would do this inside of the Workflow Builder. I'm going to close out of these. I'll minimize this, and I'll just start with those three tables that were pulled from the data. Here I have the IA, IB, and IC metrics that I'm looking at.

To start a workflow, you'll find it under File, New, and Workflow. All this is doing is grabbing stuff out of the log when I tell it to. If I hit Record, it will capture all the steps that I do: any table manipulations, any joining or splitting of tables, any renaming or recoding of variables. All of that will be captured in here. Let's start with that.

The first thing I'm going to do is concatenate these. Under the Tables menu, Concatenate. I have A there; I want to add B and C. I'll give it a name here. We'll just call this Stacked Data and hit OK. There you can see that this was stacked, and it's captured here as well. Everything I needed to do there was captured. I want to back up here. You can see that anything done while that recording is going on is captured.

Let me back up, start over, and show this one more time. With that off, it won't record anything. With that on, it will, so here, we'll do that. Tables, Concatenate again. I forgot to click one button there. I want to add a source name, so we'll do this again. Again, call this Stacked Data and hit OK.

That was my first step. Now, the next thing I'm going to do is split this apart, because I actually don't want it stacked. I want them together in the same table, but I want to split it now. So we're going to go to Tables and Split. I want to split by that source column, which is what I was missing the first time I did it. Here we can see this. I'm splitting by the source column. This is also a new feature here: it gives us a nice preview.
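The Workflow Builder records JSL equivalents of these steps behind the scenes. A rough scripted sketch of the concatenate-then-split sequence, with table and column names assumed from the demo, might look like this:

```jsl
// Stack the three imported tables, keeping a source column so each
// row remembers which tag it came from (table names assumed).
dtStacked = Data Table( "IA" ) << Concatenate(
	Data Table( "IB" ),
	Data Table( "IC" ),
	Output Table( "Stacked Data" ),
	Create Source Column
);

// Split the stacked table back into one column per tag, grouping by
// timestamp so the rows line up (the Value and Timestamp column names
// are assumptions, as is the "Source Table" column Concatenate adds).
dtSplit = dtStacked << Split(
	Split By( :Source Table ),
	Split( :Value ),
	Group( :Timestamp ),
	Output Table( "Split Data" )
);
```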

Now, this was something that, I don't know about everyone else, but I used to struggle with. I wasn't quite sure what I was going to get, especially with things like Transpose, Split, and Join, some of the more complex table operations.

Here I have all of my 500 rows of data for each of these different metrics, but what I'm missing is a timestamp. Without having that preview before, I might have done this wrong, but now I want to group this by timestamp. All of these now have a timestamp associated with that particular metric. Now I'm going to just call this Split Data and hit OK.

There, back in my workflow, you can see I've concatenated and I've split, but now I have this big ugly column name that I don't want. There's a nice feature inside of JMP to recode these column names. If I go to Columns and Column Names, there's a Recode Column Name option, and this works just like Recode for your normal data. I'm going to do a little advanced extract segment here. I want to pull out a portion that just looks at the very end, and that looks like the right values, so I hit OK. Then I'm going to hit Recode, and there we go.
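Recoding column names is the interactive route; in a script, the same cleanup could be done by renaming the columns directly. A minimal sketch, assuming the split column names end with the tag and use "|" as the separator (both assumptions):

```jsl
// Rename each column of the split table to just its trailing segment,
// e.g. "...|IA" becomes "IA" (table name and "|" separator assumed).
dtSplit = Data Table( "Split Data" );
For Each( {colName}, dtSplit << Get Column Names( "String" ),
	parts = Words( colName, "|" );
	If( N Items( parts ) > 1,
		Column( dtSplit, colName ) << Set Name( parts[N Items( parts )] )
	);
);
```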

Then the last thing, and you'll notice, again, this is all captured. The last thing I want to do here is make that graph. I'm going to just go to Graph Builder, grab those metrics, A, B, and C, and then map them out by timestamp.
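For completeness, a scripted version of that graph might look like the sketch below; the column names are assumptions based on the demo.

```jsl
// Plot the three renamed metrics against the timestamp in Graph Builder
// (column names IA, IB, IC, and Timestamp are assumed from the demo).
Data Table( "Split Data" ) << Graph Builder(
	Variables(
		X( :Timestamp ),
		Y( :IA ),
		Y( :IB, Position( 1 ) ),
		Y( :IC, Position( 1 ) )
	),
	Elements( Line( X, Y( 1 ), Y( 2 ), Y( 3 ), Legend( 1 ) ) )
);
```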

Now, you'll notice one thing. This is not added to the workflow yet, and I can hit Done and it's still not added. The reason is that I could still be making changes to this. Maybe I don't like this name; I might want to call it Metric versus Time. Maybe I don't like the format of these timestamps, so I can change that. But I'm making all of these changes, and the workflow doesn't capture them until I close this. When I hit Close, there you go.

I'm going to stop recording now, and I will go ahead and close these two new tables that were made and show you that this works. There we go. Now you may be asking, "Well, why would I use a workflow? Why not just use a script? What's the advantage of that?" So let's close out of this and I'll show you why. I gave you an accidental preview of this earlier, but we'll show it here.

We're back to our application here. I went through and pulled these three tags, and then when I concatenated them, the workflow was looking for those three data tables with those three names. What if I pulled different tags? Let's say I just want all of this data, everything with an A at the end here, and I do a data pull on that. If I was using a script that was looking for those specific data table names, the script wouldn't work, but the workflow has this generalizability. If I look here in this Concatenate Tables step, it's looking for three tables, IA, IB, and IC, and I don't have those tables open. I have IA open, but I have two other tables open. Let's see what happens when I run this.

It prompts me. It says, "Hey, what data do you actually want to compile? You have different data sources here." I actually want to compile these three that I have open. Now it says, "Oh, wait, I couldn't find the column names." Again, when I went through and recoded columns, if I was running a script, it potentially just wouldn't work, because it didn't find that column name.

But here it says, "Hey, I can't find this column IB. Which one is that?" Let's just use a replacement column. There we go, and it worked. What this is doing is it has the ability to be generalizable with this reference. By default, a workflow has this ability to have a replacement reference, and I can manage this. Here you can see the tables, and I can prompt you to pick those tables. Then here are the columns that are referenced, so I can have substitutes there.

Unlike a script, this will prompt you if it doesn't find what it's looking for, so it's very nice in that respect. That was basically how to build a workflow and then use it to compile data and have it be generalizable. I'm going to pass it back to Hadley here for some closing thoughts.

Thanks very much. Let me just share my screen. In summary, making an easy button for data access solves some problems; it makes a lot of things easier. It addresses difficulties in accessing data, where problems persist longer than they need to because you don't have access to the data. Getting the data in the right format is really 80% of a solution. Once you've collected, formatted, compiled, and cleaned the data, doing the rest of it is really the fun and easy part.

Creating these buttons allows data to be quickly and easily imported. It's possible to add filters; that's not something we showed today, although I guess selecting from the list was one of the filter options, and of course you can always add others. You can see that in the previous presentation that we did back in 2022, as well as how to extend this to SQL, web APIs, and any other place that your data may be.

There are two add-ins on the community that we'd like to mention: the OSIsoft PI Importer as well as the PI Concatenator. You can find them if you just Google these or look on our community at community.jmp.com. Thank you very much.