Torch Deep Learning Add-In for JMP Pro

 

Purpose

 

The Torch Deep Learning Add-In for JMP® Pro provides a no-code interface to the popular Torch library for predictive modeling with deep neural networks. The add-in enables you to train and deploy predictive models that use image, text, or tabular features as inputs to predict binary, continuous, or nominal targets.

 

Key Features

 

  • Access to a wide variety of pretrained models for image and text data, which can be fine-tuned or trained from scratch
  • Image or text classification and regression
  • Customizable deep networks for tabular data, as well as options to design your own convolutional neural nets for both time series and image data
  • A large number of advanced options to facilitate model building, assessment, and comparison
  • Repeated k-fold cross validation with out-of-fold predictions, plus a separate routine to create optimized k-fold validation columns
  • Ability to fit multiple Y responses with different modeling types in one model
  • Detailed and interactive graphical and tabular outputs
  • Model comparison interface
  • Profiling of tabular neural nets, aiding interpretability
  • Fast computation on an NVIDIA GPU, if available, on Windows

 

Installation 

 

You must have an authorized version of JMP Pro 18 or later. Download TorchDeepLearning.jmpaddin from the link in this window on the upper right-hand side. Launch JMP Pro, open TorchDeepLearning.jmpaddin, and click Yes to install. The installed add-in is then located under the Add-Ins top menu (Add-Ins > Torch Deep Learning).

The add-in relies on a large collection of precompiled dynamic libraries and pretrained Torch models that are not included in the initial installation. Upon the first launch of the main add-in platform (Add-Ins > Torch Deep Learning > Torch Deep Learning), it downloads and unzips an additional series of large files (approximately 4 GB total). The downloads include CUDA-compiled libraries on Windows, and the download and extraction of these files may take several minutes or longer depending on the speed of your web connection. Please be patient and let these files download to 100% and unzip fully. All downloaded and extracted files are placed underneath the add-in's home directory (click View > Add-Ins, select TorchDeepLearning, then click the Home Folder link).

 

System Requirements

 

This add-in works with JMP® Pro version 18 only. For smaller data sets, the minimum requirements for JMP® Pro should be sufficient; however, for modeling medium- to large-size data sets, including those with a large number of images or unstructured texts, we recommend at least 32 GB of CPU RAM and a fast solid-state drive with generous storage (e.g., 1 TB) to handle the volume of data and parameters typically generated in deep neural net modeling.

If you have a Windows machine with an NVIDIA GPU, more details on configuring GPU processing are provided in the Torch Add-In documentation (Add-Ins > Torch Deep Learning > Help). On macOS, both Intel x86_64 and Apple Silicon arm64 (M1, M2, and M3) architectures are supported and should be detected automatically.
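The add-in detects and configures the compute device on its own, but if you want to independently confirm what hardware the Torch library can see on your machine, a quick check from a separate Python installation of PyTorch (not included with the add-in) might look like this:

    import torch

    # Report which accelerator backend, if any, PyTorch can use on this machine.
    if torch.cuda.is_available():
        # NVIDIA GPU with CUDA support (typically Windows or Linux)
        print("CUDA GPU detected:", torch.cuda.get_device_name(0))
    elif torch.backends.mps.is_available():
        # Apple Silicon (arm64) Metal backend on macOS
        print("Apple Silicon MPS backend detected")
    else:
        print("No GPU backend detected; Torch will fall back to the CPU")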

 

How to Use It

 

After installing the add-in in JMP Pro:

  1. Open a JMP table containing a target variable Y to predict along with image, text, or tabular features to use as inputs.   
  2. Create one or more validation columns by clicking Add-Ins > Torch Deep Learning > Make K-Fold Columns, specifying Y as the Response and Stratification variable (recommended).
  3. Click Add-Ins > Torch Deep Learning > Torch Deep Learning to launch the platform. Assign variables to roles as in the following example:


The Add-Ins > Torch Deep Learning > Example Data submenu contains several example data sets, as well as a JMP table named Torch_Storybook that describes 40 different kinds of scenarios in which the add-in can be helpful. It includes rich metadata about each example and links to download JMP data tables with embedded scripts, example output JMP journals, and references.

The Add-Ins > Torch Deep Learning > Download Additional submenu provides a way to download several more image and text models that further expand the capabilities of the add-in.

Click Help from the menu to open detailed documentation on the add-in, including step-by-step examples and guidance for most effective usage.

 

Examples

 


 

Background: As a chemist, you need to predict the composition of unknown samples by comparing them against NMR spectra from a mixture experiment in which the ratios of three components are varied.

 

Challenge: The output NMR spectra for mixture experiments are wide, with a small number of experimental runs but thousands of columns of unique features. Standard predictive modeling is not accurate and overfits, and the process is time-consuming to code.

 

Process: Convert the individual spectra into images with Graph Builder, then use them to train an image classification model (a convolutional neural net) to learn what the spectra look like at different ratios of the three components.

 

Outcome: Apply the model to unknown samples and accurately identify the composition/ratio of the three components.
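The add-in handles all of this through its point-and-click interface. For readers curious about the underlying technique, a rough PyTorch/torchvision sketch of fine-tuning a pretrained convolutional net on exported spectrum images might look like the following; the folder layout, model choice, and hyperparameters are illustrative assumptions, not the add-in's internals:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    # Hypothetical layout: spectrum plots saved as images, one subfolder per class
    # (for example, one folder per mixture-ratio group).
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    train_data = datasets.ImageFolder("spectra_images/train", transform=preprocess)
    loader = DataLoader(train_data, batch_size=16, shuffle=True)

    # Fine-tune a pretrained ResNet by replacing its final layer to match our classes.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

The same pattern extends to regression on the component ratios by swapping the final layer and loss function for continuous targets.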

 


 

 

 


 

Background: Wafer defects in semiconductor manufacturing can occur across the wafer in different areas. Pictures of the wafers or wafer maps can capture the area of the defects across different batches. Identifying the defect type(s) and the root cause is essential to suggest suitable solutions.

 

Challenge: You have >30,000 wafer map images, some with numerous defect types that all need to be accurately identified.

 

Process: Use wafer map images with known wafer defects to train deep convolutional neural nets.

 

Outcome: Build and deploy a deep learning model to accurately predict the most likely defect classes and probability of defects on new wafer images.
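Again, the add-in does this without code. As a point of reference only, a minimal PyTorch sketch of the multi-label case (a single wafer map showing more than one defect type) could look like this; the class count and data shapes are hypothetical:

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_DEFECT_TYPES = 8  # hypothetical number of defect classes

    # Pretrained backbone with the final layer resized to one output per defect type.
    model = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, NUM_DEFECT_TYPES)

    # BCEWithLogitsLoss scores each defect type as an independent yes/no prediction,
    # which is what allows multiple defects per wafer map.
    loss_fn = nn.BCEWithLogitsLoss()

    # Stand-in batch: 4 wafer map images and their multi-hot defect labels.
    wafer_batch = torch.rand(4, 3, 224, 224)
    defect_labels = torch.randint(0, 2, (4, NUM_DEFECT_TYPES)).float()

    logits = model(wafer_batch)
    loss = loss_fn(logits, defect_labels)

    # At deployment, per-defect probabilities come from a sigmoid over the logits.
    probabilities = torch.sigmoid(logits)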

 

 

 


 

Background: Online product reviews provide a large amount of text data that can be analyzed to understand customers' sentiment toward a product and make key decisions about product improvement, customer support, and marketing strategies.

 

Challenge: Text reviews are unstructured and high volume, making it difficult to determine sentiment and identify themes without sophisticated analysis.

 

Process: Train deep language neural nets on product reviews and their scores to predict buying sentiment and identify key outcomes such as likelihood to repurchase.

 

Outcome: Identify key sentiment toward the product (e.g., likelihood to buy again, product issues) as critical feedback for product decisions.
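For orientation, scoring review text with a pretrained sentiment model in plain Python looks roughly like the sketch below; the Hugging Face model named here is an assumption for illustration, not one of the text models bundled with the add-in:

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Illustrative model choice; the add-in ships its own pretrained text models.
    model_name = "distilbert-base-uncased-finetuned-sst-2-english"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)

    reviews = [
        "The product broke after two days, very disappointed.",
        "Excellent quality, I would definitely buy this again!",
    ]

    inputs = tokenizer(reviews, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits

    # Convert logits to class probabilities (negative vs. positive sentiment here).
    probs = torch.softmax(logits, dim=-1)
    for review, p in zip(reviews, probs):
        print(f"{p[1].item():.2f} positive | {review}")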

 

 

 

Video Tutorials


Comprehensive Background and Overview with Examples


Tabular Data and Causal Inference 

Image Classification 

If you have a particular aspect of the add-in or topic on which you would like to see a video tutorial, please post in the comments.

 

Feedback

 

We welcome your feedback and encourage you to post comments here on this page.   Note that while add-ins, in general, are not officially supported by our JMP Technical Support team, this add-in was written by and is being actively developed by JMP Life Sciences R&D.

You may also send feedback, problems, questions, and suggestions directly to russ.wolfinger@jmp.com. If reporting a potential bug or crash, please send as much information as possible to reproduce and diagnose it, including data and model setup, screenshots, error windows, log messages, and the operating system and version of JMP Pro you are using.

 

Acknowledgements and Copyright Notices

 

We gratefully thank the PyTorch team for developing such great capabilities with a well-designed C++ interface. See the Help document for a complete list of individuals and copyright notices, as well as these online notices from the PyTorch Foundation.

 

We also highly appreciate and thank Mark Lalonde and Francis Charette-Migneault for their excellent contributions to LibTorch:
https://medium.com/crim/contributing-to-libtorch-recent-architectures-and-vanilla-training-pipeline-...  
https://github.com/crim-ca/crim-libtorch-extensions 

Comments
Residentx

Thanks for this!  

Am I reading this right? In the documentation, for Loading models into PyTorch, it says:

In Python, you must recreate the exact same architecture components using PyTorch and then load these files in turn.

So pretty much no matter what, we still need someone who knows PyTorch and is able to translate what they did in JMP to Python. Correct? Like they need to know how to create the datasets, the neural net, and any validation that the JMP user did?
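For what it's worth, my understanding is that the pattern would look roughly like this in Python, where the architecture definition and the weights file name below are placeholders that would have to match whatever was actually trained in JMP:

    import torch
    import torch.nn as nn

    # Placeholder architecture: this must mirror the network that was trained in JMP.
    # Layer sizes and the weights file name are hypothetical.
    class MyTabularNet(nn.Module):
        def __init__(self, n_inputs, n_outputs):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_inputs, 64),
                nn.ReLU(),
                nn.Linear(64, n_outputs),
            )

        def forward(self, x):
            return self.net(x)

    model = MyTabularNet(n_inputs=10, n_outputs=1)

    # Load the exported weights and switch to inference mode before predicting.
    state = torch.load("exported_weights.pt", map_location="cpu")
    model.load_state_dict(state)
    model.eval()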