I checked with a DOE expert whose response is below. My interpretation: the bottom line is that the numbers in the correlation matrix are not quantitatively interpretable. They only suggest where you are likely to find issues in the "real" measures (the alias matrix and the variance inflation factors). That is because the correlation matrix is derived from transpose(X) * X, where X is the design (model) matrix, while estimation is based on the inverse of that matrix. Only when the correlation matrix is very sparse do its numbers correspond directly to anything in the estimation, because the inverse is simple in that case.
The Color Map on Correlations shows the correlations arising from the design matrix (available as a table property if you select Save X Matrix before making the data table) and is useful for checking how non-orthogonal a design may be. How that correlation actually manifests itself depends on how the effects are treated, and we can take a look in the Design Evaluation.
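To make that concrete, here is a rough numpy sketch (outside JMP, purely illustrative) of the column correlations that the color map displays, using a made-up 6-run, two-factor design with an interaction term:

```python
import numpy as np

# Hypothetical design: two factors coded -1/+1, 6 runs, deliberately non-orthogonal.
A = np.array([-1, -1,  1,  1, -1,  1])
B = np.array([-1,  1, -1,  1,  1,  1])

# Model-matrix columns for the main effects and the A*B interaction.
X = np.column_stack([A, B, A * B])

# Pairwise correlations between model-matrix columns; an off-diagonal
# entry of 0 means those two effects are orthogonal in this design.
corr = np.corrcoef(X, rowvar=False)
print(np.round(corr, 3))
```

In this example the only nonzero off-diagonal entry is between A and A*B, which is exactly the kind of cell the color map would highlight.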
If the correlated terms are both in the model, the variance of those effect estimates and the prediction variance will be increased. To see how the effect estimates compare to those from a perfectly orthogonal design (which may not exist), look under Design Evaluation and examine the Variance Inflation Factors (JMP 10) or Estimation Efficiencies (JMP 11); a sketch of the VIF calculation follows below.
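As a sketch of that calculation (using the standard textbook formula, not necessarily the exact computation JMP performs), the VIFs can be read off the diagonal of the inverse of the predictor correlation matrix for the same hypothetical design:

```python
import numpy as np

A = np.array([-1, -1,  1,  1, -1,  1])
B = np.array([-1,  1, -1,  1,  1,  1])
X = np.column_stack([A, B, A * B])   # main effects + interaction

R = np.corrcoef(X, rowvar=False)     # predictor correlation matrix
vif = np.diag(np.linalg.inv(R))      # VIF_j = [R^-1]_jj = 1 / (1 - R_j^2)
print(np.round(vif, 3))              # all 1.0 only for an orthogonal design
```

The point the expert makes shows up here: the VIFs come from the inverse of the correlation matrix, so a correlation of 1/3 in the color map does not translate into any particular VIF on its own.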
If an active effect is left out of the model rather than estimated, it biases the estimates of the effects that are in the model. How the effects are biased can be seen through the Alias Matrix under Design Evaluation, sketched below.
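As a rough sketch (again outside JMP, with a hypothetical choice of fitted and omitted terms), the alias matrix is (X1'X1)^-1 X1'X2, where X1 holds the fitted model terms and X2 the omitted terms:

```python
import numpy as np

A = np.array([-1, -1,  1,  1, -1,  1])
B = np.array([-1,  1, -1,  1,  1,  1])

ones = np.ones(6)
X1 = np.column_stack([ones, A, B])   # fitted model: intercept + main effects
X2 = np.column_stack([A * B])        # omitted term: the A*B interaction

# Each row shows how the omitted term biases one fitted-model estimate:
# E[b1] = beta1 + alias @ beta2.
alias = np.linalg.solve(X1.T @ X1, X1.T @ X2)
print(np.round(alias, 3))
```

Here the single nonzero entry (1/3, in the row for A) says the estimate of the A main effect picks up one third of the omitted A*B effect.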
There is a very nice discussion of this in Chapter 2 of Goos and Jones (2011).
The Color Map on Correlations is great for a visual check of a design's orthogonality (one could even use the correlations as a metric), but both aliasing and estimation efficiency involve the inverse of (transpose(X) * X), which is why one should look to those other diagnostics when trying to quantify the impact.