<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: JMP Pro &gt; Dimension Reduction &gt; PCA &gt; Retrieve Most Meaningful Components? in Discussions</title>
    <link>https://community.jmp.com/t5/Discussions/JMP-Pro-gt-Dimension-Reduction-gt-PCA-gt-Retrieve-Most/m-p/719459#M90201</link>
    <description>&lt;P&gt;Your query is quite typical for practitioners. Pardon my oversimplification.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The interpretation of eigenvectors is not easy for the practitioner (by which I mean a scientist or engineer). The eigenvectors are &lt;EM&gt;likely&lt;/EM&gt; some combination (and subset) of the biomarkers in your study. Since eigenvectors look at the data through a different "dimension", that dimension may be nonsensical or have no intrinsic meaning from a practical standpoint. Eigenvectors don't have a familiar "name". Hopefully, what PCA will do is identify what warrants further investigation and provide the motivation to pursue it.&lt;/P&gt;
&lt;P&gt;How did you get your data? Are the variations in any of the biomarkers biased (e.g., do some vary more than others)? If you already know some of the biomarkers are collinear (or correlated, or redundant), can you use this knowledge to reduce the number of biomarkers before doing PCA?&lt;/P&gt;
&lt;P&gt;Now, some folks don't really care to understand (what are these eigenvectors, and how do they relate to the variables in my raw data?) and just want a model that "works" (&lt;EM&gt;perhaps&lt;/EM&gt; like neural networks).&lt;/P&gt;</description>
    <pubDate>Mon, 29 Jan 2024 18:00:45 GMT</pubDate>
    <dc:creator>statman</dc:creator>
    <dc:date>2024-01-29T18:00:45Z</dc:date>
    <item>
      <title>JMP Pro &gt; Dimension Reduction &gt; PCA &gt; Retrieve Most Meaningful Components?</title>
      <link>https://community.jmp.com/t5/Discussions/JMP-Pro-gt-Dimension-Reduction-gt-PCA-gt-Retrieve-Most/m-p/718581#M90173</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;I am working on a relatively large dataset that includes ~1,500 variables (i.e., biomarkers). Since subsets of these variables tend to correlate with each other, I wish to reduce the dimensionality of the dataset, and I am currently exploring PCA to achieve this goal. However, I am unsure how to link specific principal components back to the original data (i.e., which biomarkers/variables contribute the "most" to a given principal component). I know how to retrieve the Formatted Loading Matrix, but how can I set a meaningful cutoff for selecting the variables that contribute most to a given principal component?&lt;/P&gt;
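&lt;P&gt;[Editor's note: as a point of comparison outside JMP, the "loading cutoff" question can be sketched in Python with scikit-learn standing in for the PCA platform. The data, and the 0.4 cutoff, are assumptions for illustration only; loadings here are eigenvector entries scaled by the square roots of the eigenvalues, so for standardized data they behave like variable-component correlations.]&lt;/P&gt;

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in data: 200 samples x 50 standardized "biomarkers"
rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.normal(size=(200, 50)))

pca = PCA().fit(X)
# Loadings: eigenvectors scaled by sqrt(eigenvalues); for standardized
# data these are correlations between variables and components
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

# Rank variables by absolute loading on the first component and keep
# those above an arbitrary (assumed) cutoff of 0.4
pc1 = np.abs(loadings[:, 0])
ranked = np.argsort(pc1)[::-1]
selected = [i for i in ranked if pc1[i] > 0.4]
```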
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Of note, the specific principal components of interest are derived from a simple linear model testing the association of each PC with a clinical variable.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Windows Pro&lt;/P&gt;
&lt;P&gt;JMP Pro 16.1&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thank you for your help.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Best,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;TS&lt;/P&gt;
</description>
      <pubDate>Sun, 28 Jan 2024 21:19:33 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/JMP-Pro-gt-Dimension-Reduction-gt-PCA-gt-Retrieve-Most/m-p/718581#M90173</guid>
      <dc:creator>Thierry_S</dc:creator>
      <dc:date>2024-01-28T21:19:33Z</dc:date>
    </item>
    <item>
      <title>Re: JMP Pro &gt; Dimension Reduction &gt; PCA &gt; Retrieve Most Meaningful Components?</title>
      <link>https://community.jmp.com/t5/Discussions/JMP-Pro-gt-Dimension-Reduction-gt-PCA-gt-Retrieve-Most/m-p/718602#M90187</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/11634"&gt;@Thierry_S&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There are perhaps some interesting options to consider for your use case:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://www.jmp.com/support/help/en/17.2/index.shtml#page/jmp/principal-components-report-options.shtml" target="_self"&gt;Partial contribution of variables&lt;/A&gt;: In the red triangle menu of the PCA platform, you can select "Partial Contribution of Variables", which offers an interesting view of which variables are the most influential in which principal components; the table provided before the plot is helpful for estimating these relative influences:&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Victor_G_0-1706519272724.png" style="width: 400px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/60661iCE2B8B2F76948239/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Victor_G_0-1706519272724.png" alt="Victor_G_0-1706519272724.png" /&gt;&lt;/span&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.jmp.com/support/help/en/17.2/index.shtml#page/jmp/principal-components-report-options.shtml" target="_self"&gt;Cluster Variables&lt;/A&gt;: Helps define non-overlapping clusters of variables, and may help you select the most important contributing variables within each cluster.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.jmp.com/support/help/en/17.2/index.shtml#page/jmp/principal-components-report-options.shtml" target="_self"&gt;Scree plot&lt;/A&gt;: Finally, if you want to estimate an appropriate number of principal components to use (a good compromise between explained variability and complexity/number of PCs), the scree plot is a very useful option, showing the eigenvalues in order of the principal components:&amp;nbsp;&lt;BR /&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Victor_G_1-1706520197527.png" style="width: 400px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/60662i3ED9BCD8CF654BEA/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Victor_G_1-1706520197527.png" alt="Victor_G_1-1706520197527.png" /&gt;&lt;/span&gt;
&lt;P&gt;A common approach in the literature recommends selecting only the PCs with an eigenvalue &amp;gt;1, and not going beyond the "elbow" region (past the elbow, the eigenvalue changes little with each additional component, meaning each additional PC explains little extra variance).&lt;/P&gt;
"Those with eigenvalues less than 1.0 are not considered to be stable. They account for less variability than does a single variable and are not retained in the analysis. In this sense, you end up with fewer factors than original number of variables. (Girden, 2001)" :&amp;nbsp;&lt;A href="https://stats.stackexchange.com/questions/72439/why-eigenvalues-are-greater-than-1-in-factor-analysis" target="_blank"&gt;https://stats.stackexchange.com/questions/72439/why-eigenvalues-are-greater-than-1-in-factor-analysis&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
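&lt;P&gt;[Editor's note: outside JMP, the eigenvalue-greater-than-one (Kaiser) rule behind the scree plot advice above can be sketched in Python, with scikit-learn standing in for the PCA platform and synthetic standardized data as an assumption. On standardized data, each variable contributes unit variance, so a component with eigenvalue below 1 explains less than any single variable.]&lt;/P&gt;

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical standardized data: 100 samples x 10 variables
rng = np.random.default_rng(1)
X = StandardScaler().fit_transform(rng.normal(size=(100, 10)))

pca = PCA().fit(X)
# Eigenvalues = variance explained per component, sorted descending;
# this is the quantity plotted on a scree plot
eigenvalues = pca.explained_variance_

# Kaiser criterion: retain components whose eigenvalue exceeds 1,
# i.e., components explaining more than one standardized variable's worth
n_retained = int(np.sum(eigenvalues > 1.0))
```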
&lt;P&gt;I hope these first options will help you,&lt;/P&gt;
&lt;P&gt;Best,&lt;/P&gt;
</description>
      <pubDate>Mon, 29 Jan 2024 09:36:00 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/JMP-Pro-gt-Dimension-Reduction-gt-PCA-gt-Retrieve-Most/m-p/718602#M90187</guid>
      <dc:creator>Victor_G</dc:creator>
      <dc:date>2024-01-29T09:36:00Z</dc:date>
    </item>
    <item>
      <title>Re: JMP Pro &gt; Dimension Reduction &gt; PCA &gt; Retrieve Most Meaningful Components?</title>
      <link>https://community.jmp.com/t5/Discussions/JMP-Pro-gt-Dimension-Reduction-gt-PCA-gt-Retrieve-Most/m-p/719459#M90201</link>
      <description>&lt;P&gt;Your query is quite typical for practitioners. Pardon my oversimplification.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The interpretation of eigenvectors is not easy for the practitioner (by which I mean a scientist or engineer). The eigenvectors are &lt;EM&gt;likely&lt;/EM&gt; some combination (and subset) of the biomarkers in your study. Since eigenvectors look at the data through a different "dimension", that dimension may be nonsensical or have no intrinsic meaning from a practical standpoint. Eigenvectors don't have a familiar "name". Hopefully, what PCA will do is identify what warrants further investigation and provide the motivation to pursue it.&lt;/P&gt;
&lt;P&gt;How did you get your data? Are the variations in any of the biomarkers biased (e.g., do some vary more than others)? If you already know some of the biomarkers are collinear (or correlated, or redundant), can you use this knowledge to reduce the number of biomarkers before doing PCA?&lt;/P&gt;
&lt;P&gt;Now, some folks don't really care to understand (what are these eigenvectors, and how do they relate to the variables in my raw data?) and just want a model that "works" (&lt;EM&gt;perhaps&lt;/EM&gt; like neural networks).&lt;/P&gt;</description>
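&lt;P&gt;[Editor's note: the pre-PCA reduction suggested above can be sketched as a simple correlation filter in Python. This greedy approach, and the 0.9 cutoff, are assumptions for illustration, not the only way to prune redundant variables.]&lt;/P&gt;

```python
import numpy as np

def drop_correlated(X, threshold=0.9):
    """Greedily keep columns, skipping any too correlated with a kept one."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(corr.shape[0]):
        # Keep column j only if every already-kept column is below the cutoff
        if all(threshold >= corr[j, k] for k in keep):
            keep.append(j)
    return keep

# Hypothetical demo: column 1 duplicates column 0, so it gets dropped
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
X[:, 1] = X[:, 0]
kept = drop_correlated(X, threshold=0.9)
```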
      <pubDate>Mon, 29 Jan 2024 18:00:45 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/JMP-Pro-gt-Dimension-Reduction-gt-PCA-gt-Retrieve-Most/m-p/719459#M90201</guid>
      <dc:creator>statman</dc:creator>
      <dc:date>2024-01-29T18:00:45Z</dc:date>
    </item>
  </channel>
</rss>

