<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Curve fitting and normalization in Discussions</title>
    <link>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/30796#M19538</link>
    <description>&lt;P&gt;You need to read Dr. Donald Wheeler's paper:&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Transforming the Data Can Be Fatal to Your Analysis&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The misguided love affair with the normal distribution must come to an end!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Mon, 28 Nov 2016 17:44:58 GMT</pubDate>
    <dc:creator>Steven_Moore</dc:creator>
    <dc:date>2016-11-28T17:44:58Z</dc:date>
    <item>
      <title>Curve fitting and normalization</title>
      <link>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/29393#M19407</link>
      <description>&lt;P&gt;I have a data set that is positively skewed and therefore need to normalize it. I have a wave-form line relating age to challenging behaviours present in individuals with intellectual disabilities. I am totally new to JMP and advanced statistical procedures, and this is a big step required for my PhD research. Any step-by-step guidelines or videos on scale development, curve fitting, or normalizing data would be greatly appreciated.&lt;/P&gt;</description>
      <pubDate>Sat, 19 Nov 2016 14:07:34 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/29393#M19407</guid>
      <dc:creator>bj</dc:creator>
      <dc:date>2016-11-19T14:07:34Z</dc:date>
    </item>
    <item>
      <title>Re: Curve fitting and normalization</title>
      <link>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/29427#M19424</link>
      <description>&lt;P&gt;I guess the first question I have is: why do you feel you must 'normalize' the data just because the distribution of a&amp;nbsp;specific variable is skewed? Skewness by itself is not necessarily&amp;nbsp;a reason for transforming the data. Usually the need for&amp;nbsp;transformation is driven more by the data analysis methods under consideration or some such external criterion. Many analytical methods are either robust to distributional assumptions or don't depend on them at all. Perhaps you can share a bit more about the practical goals of your study and the methods by which you are considering analyzing the data, and some of us may be able to offer more guidance?&lt;/P&gt;</description>
      <pubDate>Mon, 21 Nov 2016 16:51:56 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/29427#M19424</guid>
      <dc:creator>Peter_Bartell</dc:creator>
      <dc:date>2016-11-21T16:51:56Z</dc:date>
    </item>
    <item>
      <title>Re: Curve fitting and normalization</title>
      <link>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/29436#M19430</link>
      <description>&lt;P&gt;Thank you for the response. The reason I want to normalize the data is that I want to be able to generate norms&amp;nbsp;for the scale that I have developed and to establish its psychometric properties. I have been reading this article, &lt;A title="An Analytical Approach to Generating Norms for Skewed Normative Distributions" href="https://ia801404.us.archive.org/27/items/ERIC_ED353271/ERIC_ED353271.pdf" target="_self"&gt;https://ia801404.us.archive.org/27/items/ERIC_ED353271/ERIC_ED353271.pdf&lt;/A&gt;, which is similar to what I want to do.&lt;/P&gt;&lt;P&gt;I need help to execute this curve fitting in JMP. Thanks again.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 21 Nov 2016 18:22:32 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/29436#M19430</guid>
      <dc:creator>bj</dc:creator>
      <dc:date>2016-11-21T18:22:32Z</dc:date>
    </item>
    <item>
      <title>Re: Curve fitting and normalization</title>
      <link>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/29443#M19434</link>
      <description>&lt;P&gt;It appears from the article that the author applied one of the family of Johnson transformations to the variables of interest. Within JMP there are a couple of workflows to do something similar with Johnson transformations.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Workflow 1: Create a standard Distribution platform report for the untransformed, original-units&amp;nbsp;variable. In the report, from the JMP hot spot above the frequency distribution graphic, choose Continuous Fit, then the desired Johnson transformation. You can save the transformed variable to the data table from the Johnson Transform graphic's&amp;nbsp;hot spot by selecting Save Transformed.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Workflow 2 (which I believe requires JMP 12 or later): in the JMP data table, select the column you want to transform, right-click the column header, then choose New Formula Column -&amp;gt; Distributional -&amp;gt; Johnson Normalizing. This creates a new column in the data table containing the Johnson Sb transform.&lt;/P&gt;</description>
      <pubDate>Mon, 21 Nov 2016 19:05:28 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/29443#M19434</guid>
      <dc:creator>Peter_Bartell</dc:creator>
      <dc:date>2016-11-21T19:05:28Z</dc:date>
    </item>
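The Johnson Sb "normalizing" idea described in the workflows above can be sketched outside JMP. This is a rough illustration using SciPy rather than JMP itself; the data and parameter values are invented for the example, and SciPy's shape parameters (a, b) play the role of the Johnson γ and δ.

```python
# Rough sketch of a Johnson Sb normalizing transform using SciPy
# (all data and parameter values here are invented for illustration).
import numpy as np
from scipy import stats

# Simulate a positively skewed, bounded score distribution on (0, 27).
rng = np.random.default_rng(0)
raw = stats.johnsonsb.rvs(2.0, 1.5, loc=0, scale=27, size=207, random_state=rng)

# Fit a Johnson Sb distribution to the raw scores by maximum likelihood.
a, b, loc, scale = stats.johnsonsb.fit(raw)

# The normalizing transform pushes each raw score through the fitted CDF
# and then through the standard normal quantile function, giving scores
# that are approximately N(0, 1).
u = stats.johnsonsb.cdf(raw, a, b, loc=loc, scale=scale)
z = stats.norm.ppf(np.clip(u, 1e-6, 1 - 1e-6))
print(z.mean(), z.std())  # close to 0 and 1
```

The same CDF-then-quantile composition is what turns an arbitrary fitted Johnson curve into approximately normal z-scores, which can then be rescaled to any target mean and SD.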
    <item>
      <title>Re: Curve fitting and normalization</title>
      <link>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/29462#M19444</link>
      <description>&lt;P&gt;Thank you again for the guidance. The real problem is this: the subscale has 9 items on a four-point rating (0-3), so the&amp;nbsp;maximum total score is 27. In my frequency distribution for the sample of 207 (range 0-27), some raw score values, namely 15, 18, 19, 20, etc., are not present. When I run the Johnson Sb fit, what will the corresponding z values be for the raw scores not present in my data set, and how will I be able to generate norms in that case? I also noticed that the scaled scores were widely spread: a raw score of 0 had a scaled score of 10 (z converted into a scaled score with mean 15 and SD 3, which is a reference&amp;nbsp;index for challenging behaviours and intellectual disabilities), while the remaining scores (1-27, excluding those not present) fell into a scaled score of 17. I tried T scores as well, with the same result. The authors of similar work talk about adjusting means and SDs, smoothing curves across age groups, generating moments, inputting them into the Johnson Sb, and producing a conversion table, but I do not see a detailed procedural explanation of that.&amp;nbsp;I do not understand which frequency distribution goes into the Johnson Sb with given moments. The Johnson Sb allows user-defined parameters: where do these parameters come from, and what are they applied to? I need to understand how to apply the Johnson Sb. I would appreciate any help. Thanks.&lt;/P&gt;</description>
      <pubDate>Tue, 22 Nov 2016 06:45:49 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/29462#M19444</guid>
      <dc:creator>bj</dc:creator>
      <dc:date>2016-11-22T06:45:49Z</dc:date>
    </item>
    <item>
      <title>Re: Curve fitting and normalization</title>
      <link>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/30787#M19533</link>
      <description>&lt;P&gt;Following up on my previous post: after a couple of weeks of mind-boggling and extensive research, I have come up with these steps, including a section of my chapter writing. Can somebody please peer review and validate whether this is okay? Thanks.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Norms Development&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Development of the subscale and Challenging Behavior Composite (CBC) norms was done in several stages. The procedures were adapted from the works of Jiing-Jen Wang (1992)&lt;A href="#_ftn1" target="_blank"&gt;[1]&lt;/A&gt; and Sparrow, Cicchetti &amp;amp; Balla (2005)&lt;A href="#_ftn2" target="_blank"&gt;[2]&lt;/A&gt;. The ages (4-58) were divided into 12 age groups, and the mean, standard deviation, skewness and kurtosis of the raw score distribution were computed for the five subscales in each age group. Line graphs of the 12 means against age were drawn separately for each subscale, and a smooth line was traced through the means. In the same manner, the standard deviations across the 12 age groups were plotted and smoothed.&lt;/P&gt;&lt;P&gt;The smoothed means and standard deviations, together with the unsmoothed skewness and kurtosis values&amp;nbsp;of each age group&amp;nbsp;for each subscale, were then input into several stages of standardization. The raw scores were transformed into standard scores based on the smoothed mean and standard deviation. This distribution was used to fit a Johnson bounded distribution&lt;A href="#_ftn3" target="_blank"&gt;[3]&lt;/A&gt;, whose values were in turn converted into Challenging Behaviour Rating Scale (CBRS) scores&lt;A href="#_ftn4" target="_blank"&gt;[4]&lt;/A&gt; (mean=15, SD=3). To obtain norms beyond the observed range, a linear regression of CBRS scale scores on raw scores was performed, and the predicted values were used as CBRS scaled scores.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In the next step, the conversion for the 12 age groups was expanded to cover 55 age groups (ages 4-58), using linear interpolation to fill the gaps between adjacent age groups. (I tried extrapolation for values beyond the observed age groups, but the results were not meaningful, so I didn't do that.)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="#_ftnref1" target="_blank"&gt;[1]&lt;/A&gt; Jiing-Jen Wang. (1992). An Analytical Approach to Generating Norms for Skewed Normative Distributions. Paper presented at the Annual Meeting of the National Council on Measurement in Education, San Francisco, California.&lt;/P&gt;&lt;P&gt;&lt;A href="#_ftnref2" target="_blank"&gt;[2]&lt;/A&gt; Sparrow, S.S., Cicchetti, D.V., &amp;amp; Balla, D.A. (2005). Vineland-II Survey Forms Manual. NCS Pearson Inc., United States of America.&lt;/P&gt;&lt;P&gt;&lt;A href="#_ftnref3" target="_blank"&gt;[3]&lt;/A&gt; Johnson curves are fitted using parameters (γ, δ, θ, σ) derived from the mean, standard deviation, skewness and kurtosis of the distribution.&lt;/P&gt;&lt;P&gt;&lt;A href="#_ftnref4" target="_blank"&gt;[4]&lt;/A&gt; Vineland-II uses a scale score with mean 15 and standard deviation 3 for its maladaptive behavior scale. It is considered a measure relative to the scaled scores of many other tests.&lt;/P&gt;</description>
      <pubDate>Mon, 28 Nov 2016 15:20:45 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/30787#M19533</guid>
      <dc:creator>bj</dc:creator>
      <dc:date>2016-11-28T15:20:45Z</dc:date>
    </item>
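The interpolation step in the norms-development post above (expanding 12 age-group values to every age from 4 to 58, without extrapolating past the observed groups) can be sketched with NumPy. The age midpoints and smoothed means below are invented for illustration:

```python
# Linear interpolation of smoothed age-group means onto every age 4-58.
# The 12 age-group midpoints and their smoothed means are made up here.
import numpy as np

group_ages = np.array([5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 54, 58])
smoothed_mean = np.array([4.1, 5.0, 5.8, 6.2, 6.0, 5.5,
                          5.1, 4.8, 4.6, 4.5, 4.4, 4.4])

all_ages = np.arange(4, 59)  # every integer age from 4 to 58 (55 values)

# np.interp connects known points with straight lines and holds the end
# values constant outside the known range, so nothing is extrapolated
# beyond the first and last age groups.
mean_by_age = np.interp(all_ages, group_ages, smoothed_mean)
print(mean_by_age[:5])
```

The same call, repeated for the smoothed standard deviations, fills every age-specific norm table entry between the 12 group anchors.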
    <item>
      <title>Re: Curve fitting and normalization</title>
      <link>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/30796#M19538</link>
      <description>&lt;P&gt;You need to read Dr. Donald Wheeler's paper:&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Transforming the Data Can Be Fatal to Your Analysis&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The misguided love affair with the normal distribution must come to an end!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 28 Nov 2016 17:44:58 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/30796#M19538</guid>
      <dc:creator>Steven_Moore</dc:creator>
      <dc:date>2016-11-28T17:44:58Z</dc:date>
    </item>
    <item>
      <title>Re: Curve fitting and normalization</title>
      <link>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/30815#M19550</link>
      <description>&lt;P&gt;Yes, thank you, I surely will. But I also think I do not have enough knowledge to contest this viewpoint, and I accept that it can be a debate like holism vs. reductionism, quantitative vs. qualitative, or Type I error vs. Type II error.&amp;nbsp;The only defense I can make for my work is that the scale will be most useful at the system level: to classify those who need intensive therapy, those whose parents or teachers can be educated on behaviour modification and thereby avert potential problems, and, not least, those who need positive behavioural support. It is more of a&amp;nbsp;system-based provision (response-to-intervention model)&amp;nbsp;than an individual approach. A child walking into a therapy clinic will still be assessed in a criterion-referenced way.&lt;/P&gt;&lt;P&gt;Thank you for the suggestion.&lt;/P&gt;</description>
      <pubDate>Tue, 29 Nov 2016 08:02:24 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/30815#M19550</guid>
      <dc:creator>bj</dc:creator>
      <dc:date>2016-11-29T08:02:24Z</dc:date>
    </item>
    <item>
      <title>Re: Curve fitting and normalization</title>
      <link>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/30851#M19573</link>
      <description>&lt;P&gt;bj,&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Based on your latest input, perhaps you should analyse your data with a predictive modelling technique such as neural networks or partitioning.&lt;/P&gt;</description>
      <pubDate>Tue, 29 Nov 2016 20:19:51 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/30851#M19573</guid>
      <dc:creator>Steven_Moore</dc:creator>
      <dc:date>2016-11-29T20:19:51Z</dc:date>
    </item>
    <item>
      <title>Re: Curve fitting and normalization</title>
      <link>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/30871#M19580</link>
      <description>Thank you. Will explore that.&lt;BR /&gt;</description>
      <pubDate>Wed, 30 Nov 2016 02:21:49 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/30871#M19580</guid>
      <dc:creator>bj</dc:creator>
      <dc:date>2016-11-30T02:21:49Z</dc:date>
    </item>
    <item>
      <title>Kernel Smoother</title>
      <link>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/31147#M19716</link>
      <description>&lt;DIV&gt;Hello again,&lt;/DIV&gt;&lt;DIV&gt;After the Johnson curve fitting, I transformed the scores to a scale&amp;nbsp;(m=15, sd=3), which is a benchmark for these kinds of tests. I have sent a section of my data here. The first column is the raw score (0-27); the second is the scale score&amp;nbsp;(1-24); and for the third, I fitted the second to a kernel density estimate in Excel, giving the adjusted scaled score. The problem is that this adjusted scaled score should have mean 15 and SD 3.&amp;nbsp;I do not know how to use the kernel smoother in JMP.&amp;nbsp;I have five subscales with three age-group norms each, and I am progressing about halfway through. So far none of them has come out at mean 15, SD 3: the mean ranges from 15 to 19, and the SD ranges between 3 and 5. I am pressed for time and have to figure out a way. I do not know if I am right, but I tried feeding the third column into random number generation in Excel with a maximum value of 24 and calculated the mean and SD of the results to spot the right one, but doing that is exhausting. I am not confident exploring non-parametric statistics here, but there is evidence for using a kernel smoother for post-norms smoothing. I would really appreciate any advice. Thanks.&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;</description>
      <pubDate>Sun, 04 Dec 2016 07:20:01 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/31147#M19716</guid>
      <dc:creator>bj</dc:creator>
      <dc:date>2016-12-04T07:20:01Z</dc:date>
    </item>
    <item>
      <title>Re: Kernel Smoother</title>
      <link>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/31150#M19719</link>
      <description>&lt;BLOCKQUOTE&gt;&lt;HR /&gt;
&lt;DIV&gt;&amp;nbsp;After Johnson curve fitting, I transformed the scores to a scale&amp;nbsp;(m=15, sd=3), which is a bench mark for these kinds of tests. I have sent a section of my data here. The first col -raw score (0-27), the second col-scale score&amp;nbsp;(1-24) and the third is, I fitted the second to kernel density estimation in excel, and got the third col, which is adjusted scaled score. The problem is, this adjusted scaled score should have mean 15, SD 3.&lt;/DIV&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;You transformed the scale to {m=15, sd=3}. JMP would try to center (m=0) and scale (sd=1) because that is the usual practice; for example, predictors centered and scaled this way meet regression assumptions better. It is not designed for an arbitrary transformation. You can always multiply by a constant to achieve the SD that you want, and you can&amp;nbsp;always add a constant&amp;nbsp;to achieve the mean that you want. Scale before you translate.&lt;/P&gt;</description>
      <pubDate>Sun, 04 Dec 2016 21:00:15 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/31150#M19719</guid>
      <dc:creator>Mark_Bailey</dc:creator>
      <dc:date>2016-12-04T21:00:15Z</dc:date>
    </item>
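The "scale before you translate" advice above can be shown numerically: standardize to z-scores first, then multiply by the target SD and add the target mean. A minimal sketch with made-up scores:

```python
# Center and scale to z-scores, then rescale to a target mean and SD.
# The scores below are invented for illustration.
import numpy as np

scores = np.array([3.0, 7.0, 8.0, 12.0, 15.0, 21.0])

z = (scores - scores.mean()) / scores.std(ddof=0)  # mean 0, sd 1
cbrs = z * 3 + 15                                  # target mean 15, sd 3
print(cbrs.mean(), cbrs.std(ddof=0))               # roughly 15.0 and 3.0
```

Doing the addition before the multiplication would scale the shifted mean as well, which is exactly the mismatch described in the post above.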
    <item>
      <title>Re: Kernel Smoother</title>
      <link>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/31151#M19720</link>
      <description>&lt;BLOCKQUOTE&gt;&lt;HR /&gt;
&lt;DIV&gt;I do not know how to use the kernel smoother in JMP.&lt;/DIV&gt;
&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;
&lt;P&gt;Which part of the JMP Help or guides (&lt;STRONG&gt;Help&lt;/STRONG&gt; &amp;gt; &lt;STRONG&gt;Books&lt;/STRONG&gt; &amp;gt; &lt;STRONG&gt;Basic Analysis&lt;/STRONG&gt;: &lt;STRONG&gt;Distribution&lt;/STRONG&gt;) is unclear to you? Glad to help here.&lt;/P&gt;</description>
      <pubDate>Sun, 04 Dec 2016 21:02:40 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/31151#M19720</guid>
      <dc:creator>Mark_Bailey</dc:creator>
      <dc:date>2016-12-04T21:02:40Z</dc:date>
    </item>
    <item>
      <title>Re: Kernel Smoother</title>
      <link>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/33152#M19771</link>
      <description>&lt;P&gt;Thank you for the reply. Let me read the Help &amp;gt; Books guides and explore the data with the kernel smoother. I lack an understanding of the bandwidth and of what it does to the distribution. Thanks much.&lt;/P&gt;</description>
      <pubDate>Wed, 07 Dec 2016 04:28:09 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Curve-fitting-and-normalization/m-p/33152#M19771</guid>
      <dc:creator>bj</dc:creator>
      <dc:date>2016-12-07T04:28:09Z</dc:date>
    </item>
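The bandwidth question raised above can be illustrated outside JMP. This is a minimal sketch using SciPy's kernel density estimator rather than JMP's kernel smoother; the scores are made up, and `bw_method` is SciPy's bandwidth-scaling knob:

```python
# Sketch of kernel density estimation and the effect of bandwidth,
# using SciPy (the score data below are invented for illustration).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
scores = rng.integers(0, 28, size=207).astype(float)  # pretend raw scores 0-27

grid = np.linspace(0, 27, 100)
# bw_method scales the bandwidth: smaller values follow the data more
# closely (bumpier curve), larger values smooth more aggressively.
narrow = gaussian_kde(scores, bw_method=0.1)(grid)
wide = gaussian_kde(scores, bw_method=0.4)(grid)

# Both are density estimates; the narrow one integrates to roughly 1
# over the score range, while the wider bandwidth leaks some mass
# past the ends of the range.
print(np.trapz(narrow, grid), np.trapz(wide, grid))
```

In JMP, the equivalent control is the bandwidth slider on the kernel smoother: the same trade-off between tracking the observed frequencies and smoothing over the gaps applies.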
  </channel>
</rss>

