This is an excellent question! A cosine similarity coefficient will be identical to a correlation coefficient when the vectors considered are centered (i.e., have a mean of zero). Traditionally, in the information retrieval field, a document-term matrix (DTM) is created and a singular value decomposition (SVD) is performed directly on it, without centering or standardizing (i.e., centering and scaling) the vectors. Analogously, cosine similarity is a measure of association computed on non-centered vectors.
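If you'd like to check that equivalence outside of JMP, here is a minimal NumPy sketch (not JSL; the vectors are just random numbers used for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(20)
y = rng.random(20)

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of the norms
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Cosine similarity of the raw (non-centered) vectors
print(cosine(x, y))

# After centering, the cosine similarity equals the Pearson correlation
xc, yc = x - x.mean(), y - y.mean()
print(cosine(xc, yc))
print(np.corrcoef(x, y)[0, 1])  # same value as the centered cosine
```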
However, there are important advantages to centering the DTM. In particular, if centering hasn't been done, the first singular vector from the SVD (your first dimension or topic) largely reflects the overall mean of the data rather than the most meaningful direction of variation in the multidimensional space. This is one reason why JMP allows users to center and/or standardize the DTM for use in Latent Semantic Analysis or Topic Analysis. Similarly, correlation coefficients are obtained from the centered and scaled (i.e., standardized) DTM.
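As a toy illustration of the effect of centering before the SVD (again NumPy rather than JSL, with a small made-up matrix of counts), compare the first right-singular vector of a raw versus a centered DTM:

```python
import numpy as np

rng = np.random.default_rng(1)
# A toy document-term matrix of counts (5 documents x 4 terms)
dtm = rng.poisson(lam=3, size=(5, 4)).astype(float)

# SVD on the raw DTM: the first right-singular vector has all entries of the
# same sign and mostly tracks the overall level of the counts
_, s_raw, vt_raw = np.linalg.svd(dtm, full_matrices=False)

# SVD on the centered DTM (subtract each term's column mean first):
# the first singular vector now reflects variation among documents
centered = dtm - dtm.mean(axis=0)
_, s_cen, vt_cen = np.linalg.svd(centered, full_matrices=False)

print("first right-singular vector, raw:     ", np.round(vt_raw[0], 3))
print("first right-singular vector, centered:", np.round(vt_cen[0], 3))
```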
In sum, you can easily obtain a measure of vector similarity by saving the DTM to your data table, going to Analyze &gt; Multivariate Methods &gt; Multivariate, adding all of your DTM columns to the Y role, and clicking OK. The resulting correlation matrix indicates the degree of similarity between the vectors: values of 1 indicate two vectors pointing in exactly the same direction, values of -1 indicate two vectors pointing in exactly opposite directions, and so on. Finally, note that these steps give you the similarity between terms; if you want the similarity between documents instead, transpose your data before following the steps above.
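For reference, here is the same term-versus-document distinction in a quick NumPy sketch (the small DTM below is made up purely for illustration; in JMP you would transpose the data table instead):

```python
import numpy as np

# A small document-term matrix: rows are documents, columns are terms
dtm = np.array([
    [2., 0., 1., 3.],
    [1., 1., 0., 2.],
    [0., 3., 2., 0.],
    [4., 0., 1., 5.],
])

# np.corrcoef treats each ROW as a variable, so transpose to correlate terms
term_corr = np.corrcoef(dtm.T)   # term-by-term similarity (columns of the DTM)
doc_corr = np.corrcoef(dtm)      # document-by-document similarity (rows of the DTM)

print(np.round(term_corr, 2))
print(np.round(doc_corr, 2))
```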
HTH,
~Laura
P.S. If you haven't upgraded to JMP 13.2, I strongly suggest you do so. Please take a look at my post here so you can learn about the improvements to Text Explorer in 13.2.
Laura C-S