binoyjosep
Level I

Non-linear regression with parameters expanded by categories

Hi

    I have a data set that I need to fit with a non-linear model having four fit parameters: A, B, C, and D. The response (the Y column) needs to be split by 'Categories', another column, so that Y for each unique 'Category' level is a separate set of data.

I need to fit all the data and find the parameters A, B, C, and D, with the condition that C and D are each a single value shared across all 'Categories', while A and B may vary by 'Category'.
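
To write that requirement out (the actual functional form of my model does not matter here, so f below is just a placeholder):

    y_{gi} = f(x_{gi}; A_g, B_g, C, D) + \varepsilon_{gi}

where g indexes the levels of 'Categories', i indexes the rows within level g, A_g and B_g are separate parameters for each level, and C and D are single values common to every level.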

   For this, I create a formula column for the model and create parameters C and D. For A and B, I create parameters with 'Expand into categories, selecting column' checked and select 'Categories' as the column, then run a non-linear regression fit (a sketch of such a formula is below). I get an optimum C and D across all Categories and different A and B values for each Category.
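
To make that concrete, here is a sketch of what such a model column formula can look like. The model Y = A*Exp(-B*X) + C*X + D, the column names, and the level names "G1" and "G2" are all made up for illustration; my real model is different.

    // A and B are expanded into one parameter per 'Categories' level,
    // while C and D are single parameters shared by every level.
    Parameter(
        {A_G1 = 1, A_G2 = 1, B_G1 = 0.1, B_G2 = 0.1, C = 0, D = 0},
        Match( :Categories,
            "G1", A_G1 * Exp( -B_G1 * :X ) + C * :X + D,
            "G2", A_G2 * Exp( -B_G2 * :X ) + C * :X + D
        )
    )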

My question is this: how does it find a SINGLE C and D across all Categories? Does it fit the data for each Category separately, find C1, D1, C2, D2, etc. for each set, and then take the median/mean or something similar?

Or does it scan a range of C and D values, evaluate a grid of points within that range, refit all the data, and keep refining the grid until the error is within limits?

I need to know the method, as I want to be sure that I am doing what I need to do.

Thanks

Binoy

10 REPLIES

Re: Non-linear regression with parameters expanded by categories

Sorry for the delay. I thought that we were finished!

My previous post shows the formula with a common B and beta and conditional parameters for the Category levels. What else do you need? The script in my first reply automates building such a formula. I can't tell what else you might require from a modeling standpoint.

Are you asking how the Nonlinear platform solves such a problem (estimates the parameters)? It uses a numerical optimization procedure that minimizes a loss function. Starting from the given initial values, it varies all of the parameters to improve (minimize) the loss. The default loss function is the sum of squared errors, that is, least squares. It monitors several measures of change to determine when convergence is achieved, and then it stops.
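
In other words, with one A and B per Category level g and a single C and D (writing your model generically as f, since your actual formula is not shown here), the platform minimizes one pooled objective over every row of the table at once:

    \mathrm{SSE}(A_1,\dots,A_k, B_1,\dots,B_k, C, D) = \sum_{g=1}^{k} \sum_{i \in g} \bigl( y_{gi} - f(x_{gi}; A_g, B_g, C, D) \bigr)^2

So C and D are estimated directly from all of the data in a single simultaneous fit. It does not fit each Category separately and then average per-Category C and D values, and it does not do a grid search over C and D.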