Hi @KIrishGH,
Welcome to the Community!
If I understand the question correctly, you are interested in knowing whether there are statistically significant differences between Treatments. You can do this testing easily with the "Fit Y by X" platform, specifying Treatment as your X and RFV as your Y variable.
To know which statistical test to use, there are assumptions that need to be verified (a small sketch of these checks outside JMP follows the list):
- Normality of the response variable (for example, using the Distribution platform and visualizing the Normal Quantile Plot),
- Independence of observations/results (this can be assessed with domain expertise, and a control chart on the data may reveal trends/shifts if the data are not independent),
- Equal variance between groups (this can be checked with the "Unequal Variances" option in the red triangle menu of the Fit Y by X platform).
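In case it helps, here is a minimal Python sketch of the same assumption checks outside JMP, assuming a pandas DataFrame named df with columns "Treatment" and "RFV" (the file name and column names are just placeholders, adapt them to your own table):

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("rfv_data.csv")  # hypothetical file name

# 1) Normality of the response (Shapiro-Wilk test; also look at a normal
#    quantile plot, e.g. scipy.stats.probplot, like JMP's Normal Quantile Plot)
w_stat, p_norm = stats.shapiro(df["RFV"])
print(f"Shapiro-Wilk p-value: {p_norm:.3f}")

# 2) Independence is mainly a design/domain question; plotting RFV in run
#    order (a simple control-chart-like view) can reveal trends or shifts.

# 3) Equal variance between treatment groups (Levene's test, analogous to
#    checking JMP's "Unequal Variances" report)
groups = [g["RFV"].values for _, g in df.groupby("Treatment")]
lev_stat, p_var = stats.levene(*groups)
print(f"Levene p-value: {p_var:.3f}")
```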
In your case, you have a slight deviation from normality and the other assumptions seem to be met, so you can use a parametric test assuming equal variances, properly adjusted for multiple comparisons (to control the Type I error rate), such as the Tukey-Kramer test:
Indeed, no treatment appears to be statistically significantly different from the others.
You will reach the same conclusion with a non-parametric test adjusted for multiple comparisons, such as the All Pairs, Steel-Dwass test.
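If you want to reproduce the parametric comparison outside JMP, here is a minimal sketch using statsmodels' pairwise_tukeyhsd (which applies the Tukey-Kramer adjustment), continuing with the same hypothetical df as above; I'm not aware of a Steel-Dwass implementation in statsmodels, so this sketch only covers the parametric side:

```python
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Tukey-Kramer adjusted pairwise comparisons of RFV between Treatments
tukey = pairwise_tukeyhsd(endog=df["RFV"], groups=df["Treatment"], alpha=0.05)
print(tukey.summary())  # "reject = False" on every pair means no significant difference
```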
To group some Treatments together, you can use the "Recode" option: select the Treatment column, right-click and choose "Recode". You'll then be able to regroup some treatments and perform the statistical testing on the new groups.
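Outside JMP, the same regrouping can be sketched in pandas with a simple mapping (the treatment values and group labels below are just an illustrative, hypothetical recoding; unmapped treatments keep their original label):

```python
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical mapping mirroring JMP's "Recode": assumes Treatment is coded 1..6
group_map = {"1": "Group 2", "3": "Group 2", "5": "Group 2", "6": "Group 3"}
df["TreatmentGroup"] = df["Treatment"].astype(str).replace(group_map)

# Re-run the same adjusted comparison on the new groups
tukey_grouped = pairwise_tukeyhsd(df["RFV"], df["TreatmentGroup"], alpha=0.05)
print(tukey_grouped.summary())
```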
Just for information, I did it with the information you provided (hiding and excluding cuts 4 and 5, grouping Treatments 1, 3 and 5 into Group 2, putting Treatment 6 in Group 3, and leaving the rest unchanged), and I can't see any difference in the outcomes of the tests:
The data table is attached, with the scripts used.
On the theoretical side, unless you have a strong domain-expertise justification to exclude some cuts (problems in the method, abnormal results, ...) and to group some Treatments together (same chemical base for some Treatments, or same way of application, ...), I wouldn't keep tweaking the analysis until something statistically significant shows up.
"If you torture the data long enough, it will confess to anything" (Ronald H. Coase)
The situation you present does look a lot like p-hacking: tweaking and readjusting the factors just for the sake of a significant p-value. This is not sound statistical practice, and it misses the point: statistical significance may be important, but effect size should also be considered to understand the practical significance of the findings.
Hope this answer helps you,
Victor GUILLER
L'Oréal Data & Analytics
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)