Multiple comparisons is a term I am familiar with, though I never do such adjustments, and I'm not aware of a built-in capability for them in JMP (I could certainly be wrong about that, and someone will correct me if I am). The most familiar correction is the Bonferroni correction, which I'm not aware of in JMP, but it is easy enough to do by hand. However, I don't think lowering the significance threshold to account for multiple comparisons is a good idea - it does nothing to correct for the dichotomous thinking that p values invite. How large is the effect size, what is the confidence interval, how many model assumptions have been made - these are all critical for interpreting the results of a study. When you simply lower the threshold, you continue to treat the result of a study as whether or not an effect is real, rather than asking about the potential sizes of effects and the uncertainty around them.
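For what it's worth, the "by hand" Bonferroni calculation is just dividing your significance threshold by the number of tests. A minimal sketch, using made-up p values (not from any actual JMP output):

```python
# Bonferroni correction "by hand": compare each raw p value against
# alpha divided by the number of comparisons.
alpha = 0.05
p_values = [0.003, 0.020, 0.040, 0.250]  # hypothetical raw p values

m = len(p_values)
threshold = alpha / m  # 0.05 / 4 = 0.0125

for p in p_values:
    # Equivalently, compare min(p * m, 1.0) against the original alpha.
    verdict = "significant" if p < threshold else "not significant"
    print(f"p = {p}: {verdict} at Bonferroni-adjusted threshold {threshold}")
```

Note that only the p = 0.003 result survives the adjusted threshold here, even though three of the four raw p values are below .05 - which illustrates both how the correction works and why it feeds the same all-or-nothing thinking.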
To paraphrase Tufte's comment on pie charts (the only thing worse than one pie chart is more than one pie chart), the only thing worse than one p value is more than one p value.
Perhaps I've overstated things. But the real issue you are referring to (I believe) is subgroup analysis. In RCTs, subgroup analysis is usually frowned upon unless it is part of the pre-registered study - in which case the sampling issues have been thought through from the start. I think a less mechanistic approach than Bonferroni is best: if you are treating p = .05 as marking an effect strong enough to "matter", then across 20 subgroup analyses you should expect, on average, one to mislead you. In reality, you can probably expect more than one in 20 to mislead you.
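The one-in-20 intuition is easy to check with back-of-the-envelope arithmetic, assuming (optimistically) 20 independent tests each with a 5% false-positive rate under the null:

```python
# Expected number of false positives, and the chance of at least one,
# across k independent tests at significance level alpha.
alpha, k = 0.05, 20

expected_false_positives = alpha * k      # 0.05 * 20 = 1.0 on average
p_at_least_one = 1 - (1 - alpha) ** k     # 1 - 0.95**20, roughly 0.64

print(expected_false_positives)
print(round(p_at_least_one, 2))
```

So even under idealized independence you expect one misleading "significant" subgroup on average, and better than a 3-in-5 chance of at least one; real subgroup analyses tend to be worse.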