
script "Save Coefficients"?

Sorry for the newbie question: I have a heavy programming background, but I'm new to JMP/JSL and am trying to solve a problem for the JMP users in my group.

They created a bivariate spline curve fit that works fine:

Bivariate(
	Y( :K_102_2040537_1 ),
	X( :Week ),
	Fit Spline( 0.6, {Line Color( "Red" ), Line Style( Smooth )} )
);

The challenge is that they need to run this for thousands of items (the Y columns) and save the spline coefficients. I quickly found in the documentation how to iterate over all the items and generate the reports, but the only way I can find to get the coefficients is to manually open the red-triangle menu on each report and select "Save Coefficients," which opens another data table.

How can I get that data table of coefficients by a script without having to manually select it from the report?
Thanks!
7 REPLIES
mpb — Super User (joined Jun 23, 2011)

Here are a couple of ways:

First Way
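One minimal sketch of this in JSL, assuming the fitted spline answers the Curve[1] message and that Save Coefficients opens the same coefficients table the menu item does:

// Launch the platform and keep a reference to the platform object.
dt = Current Data Table();
biv = dt << Bivariate(
	Y( :K_102_2040537_1 ),
	X( :Week ),
	Fit Spline( 0.6, {Line Color( "Red" ), Line Style( Smooth )} )
);

// Send the red-triangle option to the fitted curve instead of clicking it.
biv << (Curve[1] << Save Coefficients);

// The new coefficients table is now the current data table, so grab a handle.
coefDt = Current Data Table();

The captured coefDt handle can then be renamed, saved, or closed like any other data table.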
Brilliant! Thanks.

I'll post my final script when completed for edification.
Here's what I came up with. It could probably be improved, but I needed to get it done quickly and learned just enough JSL to do it. The production version analyzes thousands of columns rather than the 2 sample columns in this test version, and it saves the results of each iteration to a database table. Thanks for the help, and constructive criticism is welcome.
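In rough outline, a loop of this shape does what the paragraph describes. The column layout (column 1 is :Week, every other column is an item), the Curve[1] message, and the table names are my assumptions, and the database write is left as a comment:

dt = Current Data Table();
lambda = 100; // placeholder value, as in the test script

For( i = 2, i <= N Cols( dt ), i++,
	itemName = Column( dt, i ) << Get Name;
	biv = dt << Bivariate(
		Y( Column( dt, i ) ),
		X( :Week ),
		Fit Spline( lambda )
	);
	biv << (Curve[1] << Save Coefficients);
	coefDt = Current Data Table();

	// Tag the rows so each coefficient can be traced back to its item.
	coefDt << New Column( "Item", Character, Set Each Value( itemName ) );

	// Accumulate everything into one table.
	If( i == 2,
		allCoefs = coefDt;
		allCoefs << Set Name( "All Coefficients" );
	,
		allCoefs << Concatenate( coefDt, Append to first table );
		Close( coefDt, No Save );
	);

	biv << Close Window;
);
// allCoefs now holds every item's coefficients; the production script
// would write this table to the database from here.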

mpb — Super User

Very nice. I assume you and your colleague are comfortable using the same smoothing parameter value (lambda=100) for thousands of bivariate spline fits?

The lambda=100 is a placeholder in the test script; we are using a different value for the production version.
mpb — Super User

The same value of lambda for all of the thousands of columns? Just curious.

Oh, I didn't understand your question before. Thanks for clarifying, because I wouldn't have thought to ask that. I will check with the analyst who owns this project to see how she wants to handle it, but so far they have been using the same lambda for all 22,000+ items. It feels like a Catch-22: using the same lambda for every item is not ideal, but it's not practical to tune lambda for 22,000 items. The plan is to cluster the items on the coefficients after running this bivariate analysis, which is why I'm dumping the coefficients to the database.
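If per-item smoothing ever becomes worth exploring, one cheap compromise is to overlay several candidate lambdas on a few representative items and judge the fits visually before committing to one value. A sketch, with arbitrary candidate values and the column name from the original post:

// Overlay several candidate lambdas on one representative item.
biv = Current Data Table() << Bivariate( Y( :K_102_2040537_1 ), X( :Week ) );
lambdas = {0.1, 1, 10, 100};
For( k = 1, k <= N Items( lambdas ), k++,
	biv << Fit Spline( lambdas[k] )
);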