I was hoping that we might have had some other contributions to this discussion, but I guess the forum is quiet due to July 4.
The general implication of (positive) autocorrelation is that standard errors tend to be underestimated, which has consequences for statistical inference: confidence intervals that are too narrow and p-values that are too small.
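To make that concrete, here is a small Monte Carlo sketch of my own (not from the linked article, just plain numpy): with AR(1) errors, the spread of the slope estimates across simulations is noticeably larger than the standard error that ordinary least squares reports.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, rho = 50, 2000, 0.8
x = np.arange(n, dtype=float)
sxx = np.sum((x - x.mean()) ** 2)

slopes, reported_se = [], []
for _ in range(reps):
    # AR(1) errors: e_t = rho * e_{t-1} + u_t
    u = rng.normal(size=n)
    e = np.empty(n)
    e[0] = u[0]
    for t in range(1, n):
        e[t] = rho * e[t - 1] + u[t]
    y = 1.0 + 0.5 * x + e

    # OLS slope and its usual (iid-assuming) standard error
    b = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    resid = y - (y.mean() + b * (x - x.mean()))
    s2 = np.sum(resid ** 2) / (n - 2)
    slopes.append(b)
    reported_se.append(np.sqrt(s2 / sxx))

print("true SD of slope estimates:", np.std(slopes))
print("average OLS-reported SE:  ", np.mean(reported_se))
```

With rho = 0.8 the reported SE comes out well below the actual sampling variability, which is exactly why naive inference is too optimistic here.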
One approach is to fit a regression model with an autoregressive representation of the errors (i.e. a time-series model for the errors that accounts for the autocorrelation).
Here is a useful discussion:
https://online.stat.psu.edu/stat462/node/189/
The good news is that at the bottom of the article they provide an actual worked example, including the data, the calculations, and the results. The example is based on the Cochrane-Orcutt procedure.
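For anyone who wants to see the logic outside of JMP, here is a minimal sketch of the Cochrane-Orcutt iteration in plain Python/numpy (my own illustration on simulated data, not the article's dataset): fit OLS, estimate rho from lagged residuals, quasi-difference the variables, and refit.

```python
import numpy as np

def ols(x, y):
    # OLS of y on [1, x]; returns (intercept, slope)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def cochrane_orcutt(x, y, iters=10):
    b0, b1 = ols(x, y)
    rho = 0.0
    for _ in range(iters):
        e = y - (b0 + b1 * x)
        # estimate rho by regressing e_t on e_{t-1} (no intercept)
        rho = np.sum(e[1:] * e[:-1]) / np.sum(e[:-1] ** 2)
        # quasi-difference: y*_t = y_t - rho*y_{t-1}, x*_t = x_t - rho*x_{t-1}
        ys = y[1:] - rho * y[:-1]
        xs = x[1:] - rho * x[:-1]
        a, b1 = ols(xs, ys)
        b0 = a / (1.0 - rho)  # recover the original-scale intercept
    return b0, b1, rho

# demo on simulated data: true intercept 2.0, slope 0.5, AR(1) errors with rho = 0.7
rng = np.random.default_rng(1)
n = 200
x = np.arange(n, dtype=float)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.7 * e[t - 1] + rng.normal()
y = 2.0 + 0.5 * x + e
b0, b1, rho_hat = cochrane_orcutt(x, y)
print(b0, b1, rho_hat)
```

The quasi-differenced regression has (approximately) uncorrelated errors, so its standard errors are trustworthy in a way the original OLS ones are not.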
I was able to work through the procedure in JMP, and the results agreed at each step.
You might want to try it for yourself with their example. One thing you will need to do is construct a column formula for a simple autoregressive model of the residuals - something you can do if you are familiar with the Lag function in JMP.
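The lagged-residual idea itself is simple; here is the same step sketched in Python with pandas (the `shift` method plays the role of JMP's Lag function; the residual values are made up for illustration). Regressing the residuals on their lag, without an intercept, gives the rho estimate the procedure needs.

```python
import pandas as pd

# toy residuals purely for illustration
df = pd.DataFrame({"resid": [0.5, 0.8, 0.3, -0.2, 0.1]})

# analogue of JMP's Lag(resid, 1): the previous row's residual
df["lag_resid"] = df["resid"].shift(1)

# no-intercept regression of resid on lag_resid -> AR(1) coefficient rho
valid = df.dropna()
rho = (valid["resid"] * valid["lag_resid"]).sum() / (valid["lag_resid"] ** 2).sum()
print(rho)
```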
If you think this procedure might be beneficial, but get stuck with it, let me know and I can make a video recording of the sequence.
-Dave