MLE of Laplacian (JMP Neural Network Robust option)
Nov 26, 2018 6:28 PM (1380 views)
There is a document describing JMP's implementation of the robust option for the neural network platform (JMP ANN) that notes it maximizes the likelihood of a Laplacian distribution, which is of course equivalent to minimizing the sum of absolute deviations. However, there is no analytic gradient in this case (the absolute value function is not differentiable at zero), so how does JMP actually run the optimization? Does it use a smooth approximation to the absolute value function and then apply a quasi-Newton method? Or does it compute a numeric gradient and still apply a quasi-Newton method, being especially careful around zero?
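To illustrate the first alternative I mention above, here is a minimal sketch of the smooth-approximation approach (this is my own illustration, not JMP's documented implementation). It replaces |r| with the differentiable surrogate sqrt(r² + ε), whose gradient exists everywhere, and hands the analytic gradient to a quasi-Newton optimizer (BFGS via `scipy.optimize.minimize`). The linear-regression setup and the value ε = 1e-6 are arbitrary choices for the demonstration:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic linear model with Laplace-distributed noise, so the
# least-absolute-deviations (Laplace MLE) estimator is appropriate.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.laplace(scale=0.3, size=200)

eps = 1e-6  # smoothing parameter; smaller = closer to true |r|


def smooth_l1_loss(beta):
    """Smoothed sum of absolute residuals: sum sqrt(r^2 + eps)."""
    r = y - X @ beta
    return np.sum(np.sqrt(r**2 + eps))


def smooth_l1_grad(beta):
    """Analytic gradient of the smoothed loss (exists even at r = 0)."""
    r = y - X @ beta
    return -X.T @ (r / np.sqrt(r**2 + eps))


# Quasi-Newton (BFGS) with the analytic gradient of the surrogate.
res = minimize(smooth_l1_loss, np.zeros(3), jac=smooth_l1_grad, method="BFGS")
print(res.x)  # should be close to beta_true
```

The same idea appears elsewhere under names like the pseudo-Huber loss; whether JMP does this internally, or instead uses a numeric gradient with safeguards near zero, is exactly what I am asking.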