<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Converting Torch Addin files to a model in Discussions</title>
    <link>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/754581#M93675</link>
    <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/17251"&gt;@lala&lt;/a&gt;&amp;nbsp; &amp;nbsp;Well, I'm sure advanced traders have tried nearly every conceivable way of predicting future stock prices.&lt;BR /&gt;&lt;BR /&gt;Images are one way of encoding data, using pixel locations and colors.&amp;nbsp; While deep neural training on images is certainly different from training on tabular time-series features, in the end similar input data in either form will tend to produce similar results.&lt;BR /&gt;&lt;BR /&gt;We're seeing a similar phenomenon when training on spectral data using either functional or image inputs in JMP.&amp;nbsp; See the Torch Storybook for a few examples.&lt;/P&gt;</description>
    <pubDate>Wed, 15 May 2024 10:20:20 GMT</pubDate>
    <dc:creator>russ_wolfinger</dc:creator>
    <dc:date>2024-05-15T10:20:20Z</dc:date>
    <item>
      <title>Converting Torch Addin files to a model</title>
      <link>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/751107#M93246</link>
      <description>&lt;P&gt;Putting this here in case anyone understands the docs better than I do: I'm trying to load the model output from the Torch add-in into Python.&amp;nbsp; Has anyone had any luck doing this?&amp;nbsp; I was hoping I could just load the state dict of the model I selected, but that doesn't seem to work.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;# %%
from pathlib import Path

import torch
from torchvision.models import efficientnet_b2
# %%
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_path = Path(__file__).parent.parent/"models/VI Models/E353 EffNetB2 MLP DOR20"
model_paths = list(model_path.glob("*v1f1*.pt")) # just pick one model for now

# %%
model = efficientnet_b2(dropout=0.2).to(device)
# %%
model_paths[0].relative_to(model_path) # WindowsPath('tdl_jmp_v1f1_jnet.pt')
# %%
model_part = torch.jit.load(model_paths[0], map_location=device)
model_part
# %%
model.load_state_dict(model_part.state_dict())
# this fails with 
# RuntimeError: Error(s) in loading state_dict for EfficientNet:
#     Missing key(s) in state_dict: "features.0.0.weight", "features.0.1.weight", "features.0.1.bias", "features.0.1.running_mean", .... very long list
#     Unexpected key(s) in state_dict: "0.0.0.weight", "0.0.1.weight", "0.0.1.bias", "0.0.1.running_mean", .... very long list
# %%
&lt;/PRE&gt;
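For what it's worth, the Missing/Unexpected key lists in the RuntimeError above suggest a pure key-naming mismatch: the saved weights use positional prefixes like 0.0.0.weight, while torchvision's EfficientNet expects features.0.0.weight. A toy sketch of that kind of prefix remap (hypothetical layer names, not the add-in's actual layout):

```python
import torch
from torch import nn

# Toy stand-in for the mismatch: a bare nn.Sequential saves keys like
# "0.0.weight", while the same layers stored under a named attribute
# ("features") expect "features.0.0.weight".
saved = nn.Sequential(nn.Sequential(nn.Linear(4, 4)))
target = nn.Module()
target.features = nn.Sequential(nn.Sequential(nn.Linear(4, 4)))

# Remap by adding the missing prefix, then load with strict key checking.
remapped = {"features." + k: v for k, v in saved.state_dict().items()}
target.load_state_dict(remapped)
```

Whether such a remap is appropriate here depends on whether the add-in's saved module really matches efficientnet_b2 layer for layer, which the follow-up replies address.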
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 01 May 2024 18:45:35 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/751107#M93246</guid>
      <dc:creator>vince_faller</dc:creator>
      <dc:date>2024-05-01T18:45:35Z</dc:date>
    </item>
    <item>
      <title>Re: Converting Torch Addin files to a model</title>
      <link>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/751127#M93249</link>
      <description>&lt;P&gt;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/2610"&gt;@vince_faller&lt;/a&gt;&amp;nbsp; &amp;nbsp;Thanks for the post.&amp;nbsp; &amp;nbsp;After&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;PRE&gt;model_part = torch.jit.load(model_paths[0], map_location=device)&lt;/PRE&gt;
&lt;P&gt;the model is ready to go for predicting images.&amp;nbsp; There is no need to instantiate efficientnet_b2 from torchvision and load a state_dict: the TorchScript (JIT-compiled) model file contains both the model architecture and the trained weights.&lt;BR /&gt;&lt;BR /&gt;Just run model_part(image) after setting up an image tensor.&amp;nbsp; After that there are three more steps: pooling, embedder, and final linear predictor.&lt;BR /&gt;&lt;BR /&gt;I'm attaching an end-to-end example notebook and an example image, including steps that show intermediate outputs.&amp;nbsp; Please rename deploy_image_model.py to deploy_image_model.ipynb.&lt;/P&gt;</description>
      <pubDate>Wed, 01 May 2024 21:59:19 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/751127#M93249</guid>
      <dc:creator>russ_wolfinger</dc:creator>
      <dc:date>2024-05-01T21:59:19Z</dc:date>
    </item>
    <item>
      <title>Re: Converting Torch Addin files to a model</title>
      <link>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/751129#M93250</link>
      <description>&lt;P&gt;That's the same one that's in the documentation, and it doesn't really help.&amp;nbsp; How do we know from a screenshot of the add-in dialog what any of these are?&amp;nbsp; For example, how do we know the normalization method, or the ImageNet normalization values to use?&amp;nbsp; How do we know which pooler to use (isn't the architecture supposed to be in the JIT file)?&amp;nbsp; I'm also confused about why you're doing tnet and tnet1: just showing that they match?&amp;nbsp; Sometimes y'all use the 1 version of a layer and sometimes you don't.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Maybe my problem is that all I get are these files and this screenshot.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="vince_faller_0-1714603144557.png" style="width: 400px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/63865i1C6648CDD05B2954/image-size/medium?v=v2&amp;amp;px=400" role="button" title="vince_faller_0-1714603144557.png" alt="vince_faller_0-1714603144557.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Sorry for my ignorance.&amp;nbsp; And thanks for your patience.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 01 May 2024 22:44:38 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/751129#M93250</guid>
      <dc:creator>vince_faller</dc:creator>
      <dc:date>2024-05-01T22:44:38Z</dc:date>
    </item>
    <item>
      <title>Re: Converting Torch Addin files to a model</title>
      <link>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/752842#M93446</link>
      <description>&lt;P&gt;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/2610"&gt;@vince_faller&lt;/a&gt;&amp;nbsp;It is admittedly confusing, and we'll be working to streamline it.&amp;nbsp; There are basically four pieces, in the following order:&lt;BR /&gt;&lt;BR /&gt;1. Image Model: architecture and weights, contained in the *_jnet.pt model file.&lt;/P&gt;
&lt;P&gt;2. Pooling: matches the Pooling Layers specification.&lt;BR /&gt;3. Tabular Model: matches all the other parameters; its final output has dimension embed_size, the last value of Layer Sizes.&lt;BR /&gt;4. Final Linear: must be nn.Linear(embed_size, y_size), where y_size is the dimension of the Y being predicted.&lt;BR /&gt;&lt;BR /&gt;After each step you should be able to feed the output of the previous piece in as input, and so build up the model step by step.&lt;BR /&gt;&lt;BR /&gt;If something is not aligning, it can be very helpful to look at the state dicts and the dimensions of the output tensors to see where the mismatches are and adjust accordingly; that's why we put those extra printout cells in the example notebook.&lt;BR /&gt;&lt;BR /&gt;If you have a new example you would like to see worked out, please send it.&lt;/P&gt;</description>
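The four pieces described above can be sketched end to end. This is a toy reconstruction with made-up sizes: the Conv2d merely stands in for the TorchScript image model, and embed_size / y_size are placeholders for the add-in's actual Layer Sizes value and Y dimension.

```python
import torch
from torch import nn

embed_size, y_size = 16, 1  # placeholders, not the add-in's real values

image_model = nn.Conv2d(3, 1408, kernel_size=3)  # 1. stand-in for the *_jnet.pt model
pooling = nn.AdaptiveAvgPool2d(1)                # 2. matches the Pooling Layers spec
tabular = nn.Sequential(nn.Flatten(),            # 3. tabular model ending in embed_size
                        nn.Linear(1408, embed_size))
final = nn.Linear(embed_size, y_size)            # 4. nn.Linear(embed_size, y_size)

x = torch.zeros(1, 3, 16, 16)                    # dummy image batch
out = final(tabular(pooling(image_model(x))))
print(out.shape)  # torch.Size([1, 1])
```

Printing the shape after each of the four calls, as suggested above, is how to find where a real model's dimensions stop lining up.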
      <pubDate>Wed, 08 May 2024 10:53:48 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/752842#M93446</guid>
      <dc:creator>russ_wolfinger</dc:creator>
      <dc:date>2024-05-08T10:53:48Z</dc:date>
    </item>
    <item>
      <title>Re: Converting Torch Addin files to a model</title>
      <link>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/752858#M93453</link>
      <description>&lt;P class=""&gt;&lt;SPAN class=""&gt;Thanks to the efforts of the experts, deep learning is now possible in JMP.&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&amp;nbsp;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;I would also like to ask a question about JMP image recognition, although I have not yet been able to obtain JMP Pro 18:&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;Suppose the images I want to recognize are all generated from known data tables.&lt;/SPAN&gt;&lt;SPAN class=""&gt;When using the Torch add-in for recognition, can I import the original table data directly instead?&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&amp;nbsp;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;But I think it might be better to look at it as an image, you know?&lt;/SPAN&gt;&lt;SPAN class=""&gt;Does that make sense?&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&amp;nbsp;&lt;/P&gt;&lt;P class=""&gt;Thanks, experts!&lt;/P&gt;</description>
      <pubDate>Wed, 08 May 2024 11:50:29 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/752858#M93453</guid>
      <dc:creator>lala</dc:creator>
      <dc:date>2024-05-08T11:50:29Z</dc:date>
    </item>
    <item>
      <title>Re: Converting Torch Addin files to a model</title>
      <link>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/753194#M93503</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/17251"&gt;@lala&lt;/a&gt;&amp;nbsp; &amp;nbsp;Thanks for your question.&amp;nbsp; I am not quite following what you mean by "import the original table data directly".&amp;nbsp; If you mean importing images from .png or .jpg files on disk, that is fairly easy with File &amp;gt; Import Multiple Files.&amp;nbsp; If that is not what you are asking, could you please explain further or provide a small example so I can comment further?&lt;BR /&gt;&lt;BR /&gt;In the meantime, even without JMP Pro 18, it might be informative to download Torch_Storybook.jmp from&amp;nbsp;&lt;A href="https://community.jmp.com/t5/JMP-Add-Ins/Torch-Deep-Learning-Add-In-for-JMP-Pro/ta-p/733478" target="_blank" rel="noopener"&gt;https://community.jmp.com/t5/JMP-Add-Ins/Torch-Deep-Learning-Add-In-for-JMP-Pro/ta-p/733478&lt;/A&gt;&amp;nbsp;(see the link near the upper right corner), open it in an earlier version of JMP, and click on the various links of interest to see how image data is handled in the examples.&amp;nbsp; There are also links on the preceding page to a few video tutorials, including one on image classification.&amp;nbsp; These may help answer your question.&lt;/P&gt;</description>
      <pubDate>Thu, 09 May 2024 16:23:20 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/753194#M93503</guid>
      <dc:creator>russ_wolfinger</dc:creator>
      <dc:date>2024-05-09T16:23:20Z</dc:date>
    </item>
    <item>
      <title>Re: Converting Torch Addin files to a model</title>
      <link>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/753629#M93546</link>
      <description>&lt;P class=""&gt;&lt;SPAN class=""&gt;Thanks for the expert's reply.&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&amp;nbsp;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;I came up with a more intuitive example:&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;Daily stock price charts are generated from the day-to-day changes in "Open", "High", "Low", "Close", and "Volume".&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;It is possible to predict future price changes directly from the patterns in the raw data, and of course it is also possible to predict from the various shapes of the price chart through image recognition.&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&amp;nbsp;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;In the same way, JMP 18's Torch plug-in can directly import the raw stock price data for training, omitting the image-recognition step.&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&amp;nbsp;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;Is there a difference between the two types of training?&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&amp;nbsp;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;Thank you very much for the expert's help.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Sun, 12 May 2024 05:40:44 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/753629#M93546</guid>
      <dc:creator>lala</dc:creator>
      <dc:date>2024-05-12T05:40:44Z</dc:date>
    </item>
    <item>
      <title>Re: Converting Torch Addin files to a model</title>
      <link>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/754581#M93675</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/17251"&gt;@lala&lt;/a&gt;&amp;nbsp; &amp;nbsp;Well, I'm sure advanced traders have tried nearly every conceivable way of predicting future stock prices.&lt;BR /&gt;&lt;BR /&gt;Images are one way of encoding data, using pixel locations and colors.&amp;nbsp; While deep neural training on images is certainly different from training on tabular time-series features, in the end similar input data in either form will tend to produce similar results.&lt;BR /&gt;&lt;BR /&gt;We're seeing a similar phenomenon when training on spectral data using either functional or image inputs in JMP.&amp;nbsp; See the Torch Storybook for a few examples.&lt;/P&gt;</description>
      <pubDate>Wed, 15 May 2024 10:20:20 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/754581#M93675</guid>
      <dc:creator>russ_wolfinger</dc:creator>
      <dc:date>2024-05-15T10:20:20Z</dc:date>
    </item>
    <item>
      <title>Re: Converting Torch Addin files to a model</title>
      <link>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/755860#M93796</link>
      <description>&lt;P&gt;&lt;SPAN class=""&gt;Thanks to the experts for the perfect answers.&lt;/SPAN&gt;&lt;SPAN class=""&gt;I will first study the plug-in's example file code&lt;/SPAN&gt;&lt;SPAN class=""&gt; while I wait to get JMP Pro 18.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks, experts!&lt;/P&gt;</description>
      <pubDate>Sun, 19 May 2024 00:37:08 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Converting-Torch-Addin-files-to-a-model/m-p/755860#M93796</guid>
      <dc:creator>lala</dc:creator>
      <dc:date>2024-05-19T00:37:08Z</dc:date>
    </item>
  </channel>
</rss>

