I'm using the Neural platform to fit a model that makes predictions from some observed data, and I want to compare that model's performance to other models (not from JMP). To do this, I want to compare the observed data with the predictions from each model.
The R2 is not a good measure because it detects any linear relationship y = a*x + b between the observed values (y) and the predictions (x), whereas I am only interested in the case where a = 1 and b = 0.
(Question: Does it matter if I look at "observed vs. predicted" or "predicted vs. observed"?)
So, I'm looking at RMSE and MAD rather than R2. (Question: Is there another measure that could be used?)
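To make this concrete, here is a small Python/numpy sketch I put together (all numbers are made-up illustration data, not JMP output): a set of predictions that is perfectly linear in the observations but with a = 2 and b = 3 still gets R2 = 1, while RMSE and MAE show the deviation from the identity line.

```python
import numpy as np

# Made-up observed values and two hypothetical sets of predictions (not JMP output)
obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
pred_good = np.array([1.1, 1.9, 3.2, 3.8, 5.1])  # scatters around the identity line
pred_biased = 2.0 * obs + 3.0                    # perfectly linear, but a = 2, b = 3

def r2(y, x):
    """Squared Pearson correlation: high for ANY linear relation y = a*x + b."""
    return np.corrcoef(y, x)[0, 1] ** 2

def rmse(y, x):
    """Root mean squared error: deviation from the identity line."""
    return np.sqrt(np.mean((y - x) ** 2))

def mae(y, x):
    """Mean absolute error: average |y_i - x_i| (my reading of JMP's Mean Abs Dev)."""
    return np.mean(np.abs(y - x))

for name, pred in [("good", pred_good), ("biased", pred_biased)]:
    print(f"{name:7s} R2={r2(obs, pred):.3f}  RMSE={rmse(obs, pred):.3f}  MAE={mae(obs, pred):.3f}")
```

With these numbers the biased predictions still get R2 = 1.0, while RMSE ≈ 6.2 and MAE = 6.0, which is exactly the behaviour that makes me distrust R2 for this comparison.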
Now, looking at the JMP manual, I find the following definition:
Mean Abs Dev: The average of the absolute values of the differences between the response and the predicted response. (All factors are continuous).
When I interpret "response" as the observed value (y) and "predicted response" as the model prediction (x), this would translate into the equation sum(|x_i - y_i|)/n, where n is the number of observations.
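As a quick sanity check of that reading (again with made-up numbers, and using scikit-learn rather than JMP), the hand-computed sum(|x_i - y_i|)/n is exactly the quantity scikit-learn calls mean_absolute_error:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

# Made-up predictions (x) and observations (y), just to check the formula
x = np.array([2.5, 0.0, 2.1, 7.8])   # model predictions
y = np.array([3.0, -0.5, 2.0, 8.0])  # observed responses

by_hand = np.sum(np.abs(x - y)) / len(y)  # sum(|x_i - y_i|)/n
by_library = mean_absolute_error(y, x)    # what scikit-learn calls MAE

print(by_hand, by_library)  # 0.325 0.325
```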
However, looking at sources on the internet, this quantity is called the mean absolute error (MAE):
In statistics, mean absolute error (MAE) is a measure of errors between paired observations expressing the same phenomenon. Examples of Y versus X include comparisons of predicted versus observed, ...
(Wikipedia)
The same here:
Mean Absolute Error (MAE): MAE measures the average magnitude of the errors in a set of predictions, without considering their direction. It’s the average over the test sample of the absolute differences between prediction and actual observation where all individual differences have equal weight.
According to Wikipedia, the MAD (around a central point) is something slightly different:
MAD = sum(|x_i - m(X)|)/n, where m(X) is a "central tendency" (e.g. the mean or the median).
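A small numerical sketch (Python/numpy, made-up obs and pred arrays) shows that these are indeed two different quantities: the MAE of the predictions versus the observations does not coincide with the MAD of the observations around their median.

```python
import numpy as np

obs = np.array([1.0, 2.0, 3.0, 4.0, 10.0])   # made-up observations
pred = np.array([1.2, 1.8, 3.5, 4.1, 9.0])   # made-up model predictions

# MAE: average absolute difference between observation and prediction
mae = np.mean(np.abs(obs - pred))                   # 0.400

# Wikipedia's MAD around a central point m(X), here the median of the observations
mad_median = np.mean(np.abs(obs - np.median(obs)))  # 2.200

print(f"MAE = {mae:.3f}, MAD around the median = {mad_median:.3f}")
```

So the MAD around a central point describes the spread of the data itself, while the quantity I computed against the predictions is what the internet sources call MAE.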
Question: Is there no generally accepted definition of MAD and MAE, or am I misunderstanding something?