I assume that for a multiple linear regression model in JMP, the Lack of Fit test's p-value (Prob > F) signifies whether the whole model is significant or not, right? Is it the case that if the value is less than 0.05, we consider the model statistically significant? What is the use of the Analysis of Variance then, or is there anything helpful in its (Prob > F) value for the result findings? For a model, if the value is not statistically significant (greater than 0.05), should we assume the model cannot explain the relationship between the dependent and independent variables at all? Thanks in advance.
Below I'll address each of your questions in turn, but I think you might get even more value from an overview of multiple regression in general. There is a module on exactly that in the Statistical Thinking for Industrial Problem Solving online course -- completely free and online. You can sign up and get instant access at: https://www.jmp.com/statisticalthinking.
1. Lack of Fit Test: The Lack of Fit test in JMP is, in general terms, a test of whether your model is underspecified -- that is, whether it is missing important terms or predictors. Rejecting the null hypothesis is evidence that you need to fit a more complex model to adequately account for the data. (The Lack of Fit test does this in an ingenious way, but that's beyond the scope of this post; you can read more about it in the JMP documentation.) This test is not the typical test of statistical significance for the model: it is not a test of whether you have evidence that at least one coefficient is non-zero.
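To make the distinction concrete, here is a rough sketch of the classical lack-of-fit F-test computed by hand in Python on made-up data (JMP does this for you whenever replicate X values are present; the data and variable names here are purely illustrative). The key idea is that replicated observations let you split the error sum of squares into "pure error" (scatter among replicates) and "lack of fit" (systematic departure of group means from the fitted line):

```python
import numpy as np
from scipy import stats

# Hypothetical data with replicated x values (replicates are required
# to estimate pure error).
x = np.array([1, 1, 2, 2, 3, 3, 4, 4], dtype=float)
y = np.array([1.1, 0.9, 2.3, 2.1, 2.8, 3.2, 3.3, 3.5])

# Fit the straight-line model y = b0 + b1*x by least squares.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sse = resid @ resid                      # total error sum of squares

# Pure-error SS: variability of y within each group of identical x.
ss_pe = sum(((y[x == v] - y[x == v].mean()) ** 2).sum()
            for v in np.unique(x))

# Lack-of-fit SS is whatever error remains beyond pure error.
ss_lof = sse - ss_pe
c = len(np.unique(x))                    # number of distinct x values
n, p = len(y), X.shape[1]                # observations, parameters

F = (ss_lof / (c - p)) / (ss_pe / (n - c))
p_value = stats.f.sf(F, c - p, n - c)    # small p => evidence of lack of fit
```

Note that a *small* p-value here is bad news for the model (it suggests missing terms, e.g. curvature), which is the opposite reading from the usual significance test -- exactly why the two shouldn't be confused.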
2. Analysis of Variance Table: The ANOVA table displays overall model statistics, including the sums of squares and mean squares for both the error and the model overall. The p-value in this table is what you would call an omnibus test: a test of whether the model, as a whole, explains more variation in Y than would be expected by chance for a model of this complexity (i.e. with as many degrees of freedom used for prediction). Rejecting the null here (a p-value less than your alpha, for instance 0.05) is evidence that our model does explain more variation in Y than we would expect by chance -- or in simpler terms, it provides statistical evidence that one or more of our predictors has a true relationship with Y in the population. I find the Parameter Estimates table more informative, as it displays the coefficient for each predictor and its associated p-value, testing whether there is evidence that the predictor has a non-zero parameter in the population.
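If it helps to see the mechanics outside of JMP, here is a minimal sketch of the omnibus F-test computed by hand on simulated data (numpy/scipy, not JMP output; the seed and coefficients are invented for illustration). Only the first predictor truly relates to Y, yet the ANOVA-style F-test judges the model as a whole:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n),
                     rng.normal(size=n),     # predictor 1: truly related to y
                     rng.normal(size=n)])    # predictor 2: pure noise
y = 1.0 + 0.5 * X[:, 1] + rng.normal(size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
ss_model = ((fitted - y.mean()) ** 2).sum()  # model sum of squares
ss_error = ((y - fitted) ** 2).sum()         # error sum of squares
df_model, df_error = X.shape[1] - 1, n - X.shape[1]

# Omnibus test: does the model explain more variation than chance would?
F = (ss_model / df_model) / (ss_error / df_error)
prob_f = stats.f.sf(F, df_model, df_error)   # the "Prob > F" in the ANOVA table
```

Because the omnibus test pools all predictors together, a significant Prob > F tells you *something* in the model matters but not *what* -- which is why the per-coefficient tests in the Parameter Estimates table are usually more actionable.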
3. Failing to reject a null: A p-value greater than our stated alpha (e.g. p > 0.05) leads us not to reject the null hypothesis -- we cannot state that the result is statistically significant. This does not mean the model explains no variation in Y; it simply means that the amount of variation the model explains could reasonably be attributed to chance -- that we do not yet have enough evidence to credibly rule out chance as an explanation for the relationships we observe between Y and the predictors. One more thing is worth noting here, and you'll have to forgive me for getting on my soapbox about this one: although we can regard rejecting the null hypothesis (e.g. p < 0.05) as evidence toward an effect, we cannot regard failing to reject the null hypothesis as evidence that there is no effect. There are both statistical and epistemic issues at play, all beyond the scope of this post, but it's worth remembering that we never accept the null hypothesis in a frequentist hypothesis-testing framework.
I hope this clarifies a few things, and I encourage you to look into the Statistical Thinking for Industrial Problem Solving online course.