
## Is it possible to do parameter regularization in the Nonlinear platform?

I have a problem (see https://public.jmp.com/packages/Problematic-Nonlinear-Fit/js-p/kz0BFg2_XHtZjz6kNKdXg) where I am fitting a non-linear model, and because of the data there is some instability in the model fit.  One of the parameters in the non-linear model tends to "slide to infinity" when there are small errors in the data compared to the true theoretical model.  What I would like to do is fit the non-linear model but put a penalty on that parameter (e.g., an L2 norm) that prevents it from blowing up.

What I mean by this:

Suppose my non-linear model is

f(t) = A * (1 - exp(-k*t)) + B * (1 - exp(-0.0193 * t))

normally, the loss function that is minimized to find the parameter estimates is the squared error loss

find (A, k, B) so that sum{ (y - f(t))^2 } is minimized

what I want to do is something more like

find (A, k, B) so that sum{ (y - f(t))^2 } + lambda*k^2 is minimized

where lambda is a regularization parameter that I would pick on my own based on a simple grid search.
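To make that concrete, here is a rough, untested JSL sketch of the fit I'm after for a single value of lambda, using the example data and assuming JSL's built-in Minimize() accepts an objective written out like this (the starting values are arbitrary guesses):

```jsl
// Penalized least squares: SSE + lambda*k^2, minimized directly in JSL,
// bypassing the Nonlinear platform. y and t come from the example data.
tvec = [3, 6, 12, 24, 48, 120, 240];
yvec = [0.061033, 0.0706, 0.0923, 0.104767, 0.122933, 0.1657, 0.2201];
lambda = 0.01;               // regularization weight, to be tuned by grid search
A = 0.1; k = 0.1; B = 0.1;   // starting values (guesses)
Minimize(
	Sum( (yvec - (A * (1 - Exp( -k * tvec )) + B * (1 - Exp( -0.0193 * tvec )))) ^ 2 )
		+ lambda * k ^ 2,
	{A, k, B}
);
Show( A, k, B );
```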

I looked into using a custom loss function column, but I don't know whether I can reference a model parameter from another column inside it.

Any ideas on how to keep the rate ("k") from blowing up?

2 Replies

## Re: Is it possible to do parameter regularization in the Nonlinear platform?

I thought about this problem for a long time, but I cannot see how to surface the current parameter values so that they can be used in a regularized loss function. It might be possible to skip the prediction formula and express the entire model in the loss function. Custom loss functions are a weak spot in my knowledge.

There is a built-in feature that might help in lieu of regularization. See Help > JMP Documentation Library > Predictive and Specialized Modeling > Nonlinear Regression > Additional Examples > Example of Setting Parameter Limits. The problem with this solution is that your parameter would probably just end up at the upper limit instead of infinity.

Have you tried some of the non-default settings? I wonder if Numeric Derivatives Only would help, or the Second Derivative option?

Learn it once, use it forever!

## Re: Is it possible to do parameter regularization in the Nonlinear platform?

I tried using numerical derivatives; that didn't change the results.  Parameter bounds do what you thought they would: the estimate just ends up at the specified bound.

I uploaded the example data in this post, also.

I did think of a way to do this, using the prediction profiler.  Here is my approach:

I created a table (see attached) with columns k, A, B, and lambda, plus a column for each "loss" function I want to minimize.  The attached table has two loss functions (QuadLoss and L2RegLoss), and each of those columns has a more complex script as its formula.  I store the profile data as a matrix, extract the y and t values as vectors, and then calculate the difference between y and the model (writing the model out completely as a function of t, k, A, and B).  Then I calculate the sum of squared errors, plus any added regularization penalty, as the last result.  The value of lambda scales the penalty.  I "vectorized" the function just to make it a little easier to write.  Here is the formula for QuadLoss:

```
// Profile data stored as a matrix: column 1 = y, column 2 = t
dmat = [0.061033 3, 0.0706 6, 0.0923 12, 0.104767 24, 0.122933 48, 0.1657 120, 0.2201 240];
yvec = dmat[0, 1];   // y values (0 = all rows)
tvec = dmat[0, 2];   // t values
// residuals: data minus the model, written out in terms of k, A, B
rvec = yvec - (:A * (1 - Exp( -:k * tvec )) + :B * (1 - Exp( -0.0193 * tvec )));
(rvec` * rvec);      // sum of squared errors
```

Here is the formula for L2RegLoss:

```
// Same setup as QuadLoss, but with an L2 penalty on k added at the end
dmat = [0.061033 3, 0.0706 6, 0.0923 12, 0.104767 24, 0.122933 48, 0.1657 120, 0.2201 240];
yvec = dmat[0, 1];
tvec = dmat[0, 2];
rvec = yvec - (:A * (1 - Exp( -:k * tvec )) + :B * (1 - Exp( -0.0193 * tvec )));
(rvec` * rvec) + :lambda * :k ^ 2;   // SSE + lambda * k^2
```

Then, using the Prediction Profiler with a desirability function set to minimize the loss, I looked at the k value that different settings of lambda (the regularization tuning parameter) produced.  In this case I'm only regularizing k, not A and B.  The "NoReg" saved setting just uses the quadratic loss (sum of squared errors), which should be similar to what the Nonlinear platform does (but not exactly, since here I'm using the profiler's optimization routine, which I'm pretty sure is different from what the Nonlinear platform uses).  Then I tried values of 0.01, 0.001, and 0.1 for lambda, and the estimate of k is pretty strongly impacted by the value of the tuning parameter.  So I'm not entirely satisfied with this approach: the choice of tuning parameter is challenging, and I'm not sure this is a reasonable way to pick it.
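For the grid search itself, an alternative to saved profiler settings would be to script the sweep over lambda.  Here is a rough, untested sketch along the same lines, assuming JSL's built-in Minimize() handles this objective (starting values are guesses):

```jsl
// Sweep lambda over a grid and watch how the estimate of k responds.
dmat = [0.061033 3, 0.0706 6, 0.0923 12, 0.104767 24, 0.122933 48, 0.1657 120, 0.2201 240];
yvec = dmat[0, 1];
tvec = dmat[0, 2];
grid = {0, 0.001, 0.01, 0.1};   // candidate lambda values
For( i = 1, i <= N Items( grid ), i++,
	lambda = grid[i];
	A = 0.1; k = 0.1; B = 0.1;   // reset starting values for each fit
	Minimize(
		Sum( (yvec - (A * (1 - Exp( -k * tvec )) + B * (1 - Exp( -0.0193 * tvec )))) ^ 2 )
			+ lambda * k ^ 2,
		{A, k, B}
	);
	Show( lambda, k );
);
```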
