There are several well-known methods to test for equal variances: Bartlett's, Levene's, Cochran's, and a classic, Hartley's maximum F test.
To control alpha, Hartley's maximum F test performs an F-test comparing the largest variance of the k groups (here, k = 3) to the smallest variance of the groups. If this F-test is not significant, the groups are considered to have equivalent variance (std dev).
Note that equal sample sizes are recommended to make these tests more robust.
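As a minimal Python sketch (the function name is mine, not from any particular package), the Fmax statistic itself is just the ratio of the largest to the smallest sample variance; the critical value still has to come from an Fmax table for k groups and df = n - 1:

```python
import numpy as np

def hartley_fmax(*groups):
    """Hartley's maximum-F statistic: largest sample variance / smallest.

    Assumes roughly normal data and (ideally) equal group sizes.
    The result is compared against a tabled Fmax critical value;
    if it is below the critical value, the variances are considered
    equivalent.
    """
    variances = [np.var(g, ddof=1) for g in groups]
    return max(variances) / min(variances)

# Illustration with three made-up groups of equal size:
rng = np.random.default_rng(1)
a = rng.normal(0, 1.0, 30)
b = rng.normal(0, 1.0, 30)
c = rng.normal(0, 2.0, 30)  # std dev twice the others
fmax_stat = hartley_fmax(a, b, c)
```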
It is much easier to compute the power and sample size for Hartley's maximum F. Using a hard-copy table from an old statistics course, for alpha = beta = 0.05: to detect a 50% increase in the std dev, lambda = sigma(max) / sigma(min) = 1.5, each group should have 81 samples. To detect lambda = 2.0 (one group's std dev is twice that of another group), each group should have n = 30.
I used to have a JSL script to compute this for Hartley's maximum F; I'll have to check my archives.
Maybe one of the blog contributors has a link they can share. JMP Oneway reports the Bartlett, Levene, O'Brien and Brown-Forsythe tests for equal variance; however, the DOE > Sample Size and Power calculator does not include a two-sample, two-sided test for equal variance.
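Outside of JMP, most of the same tests (O'Brien excepted) are available in SciPy, which can help for quick checks; Brown-Forsythe is just Levene's test centered at the median. A small sketch with made-up data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
g1 = rng.normal(0, 1.0, 30)
g2 = rng.normal(0, 1.0, 30)
g3 = rng.normal(0, 1.5, 30)

# Bartlett's test (powerful under normality, sensitive to non-normality)
stat_b, p_b = stats.bartlett(g1, g2, g3)

# Levene's test, centered at the mean (the classic form)
stat_l, p_l = stats.levene(g1, g2, g3, center='mean')

# Brown-Forsythe: Levene centered at the median (more robust)
stat_bf, p_bf = stats.levene(g1, g2, g3, center='median')
```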
Another approach is to use repeated simulation of your design: 3 groups, n = <your choice>, with one group's std dev lambda times another's. For each simulation, record the results of the variance tests; then count the number of times each test flags unequal variances. Power = fraction of flags, and beta = 1 - fraction of flags.
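The simulation approach above can be sketched in a few lines of Python for Hartley's Fmax (function names are mine; the critical value is itself estimated by simulating the null, so no Fmax table is needed):

```python
import numpy as np

def fmax_stats(rng, sds, n, reps):
    """Simulate `reps` experiments of len(sds) normal groups of size n,
    with the given per-group std devs, and return the Fmax statistic
    (max sample variance / min sample variance) for each experiment."""
    data = rng.standard_normal((reps, len(sds), n)) * np.asarray(sds)[:, None]
    v = data.var(axis=2, ddof=1)
    return v.max(axis=1) / v.min(axis=1)

def simulated_power(n=30, lam=2.0, k=3, alpha=0.05, reps=2000, seed=0):
    """Monte Carlo power of Hartley's Fmax: k groups of size n, one
    group's std dev lam times the others'."""
    rng = np.random.default_rng(seed)
    # Null: all std devs equal -> estimate the alpha-level critical value
    null = fmax_stats(rng, [1.0] * k, n, reps)
    crit = np.quantile(null, 1.0 - alpha)
    # Alternative: one group's std dev inflated by lam
    alt = fmax_stats(rng, [lam] + [1.0] * (k - 1), n, reps)
    return float(np.mean(alt > crit))  # power = fraction of flags

power = simulated_power()
```

With n = 30 and lambda = 2 this should land in the same neighborhood as the table value quoted above (power around 0.95), subject to Monte Carlo noise.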
Of course, just as important as n is how the data are collected, so that the experiment results represent the true variability (sigma) for each group.