What you recently provided is a different analysis from the one in your original post, and the new one is more complex. Also, you still have not found the "Parameter Estimates" report, so I will change my strategy and explain with an example.
Run the following code to create example data and a Least Squares analysis.
dt = New Table( "sample",
	Add Rows( 24 ),
	New Column( "X1", Character, Nominal,
		Set Values( {"A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B", "B", "B", "B", "B", "B", "B"} )
	),
	New Column( "X2", Character, Nominal,
		Set Values( {"x", "x", "x", "x", "y", "y", "y", "y", "z", "z", "z", "z", "x", "x", "x", "x", "y", "y", "y", "y", "z", "z", "z", "z"} )
	),
	New Column( "Y", Numeric, Continuous,
		Set Values( [18.33, 10.21, -2, -14.02, -21.37, -32.28, -36.99, -46.6, -56.95, -66.74, -75.84, -83, -74.18, -64.9, -55.96, -47.03, -36.72, -26.27, -18.1, -11.28, -2.18, 6.84, 16.51, 24.81] )
	)
);
dt << Fit Model(
	Y( :Y ),
	Effects( :X1, :X2 ),
	Personality( "Standard Least Squares" ),
	Emphasis( "Effect Screening" ),
	Run
);
Here are the "Parameter Estimates" and "Correlation of Estimates" reports, the prediction expression, and the LS Means of X1:
The prediction expression gives some idea of the mathematical form of the model, but it is not quite exact. The model looks like this:

Y = b0 + b1*X1 + b2*X2 + b3*X3

in which b0 = -28.98792, b1 = -4.949583, b2 = 0.2941667, b3 = 0.286667, from the Parameter Estimates report. Here X1, X2, and X3 are the effect-coded indicators JMP uses for nominal effects: X1 = 1 for level A and -1 for level B, and X2 and X3 code levels x and y of the second factor.
The LS Mean of A is calculated by setting X1 = 1, X2 = 0, and X3 = 0 in the model. These three values, preceded by a 1 for the intercept, form the x-vector in the code below. Each element is the partial derivative of the prediction formula (a function of the estimates) with respect to the corresponding parameter.
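As a quick sanity check, the LS Mean of A obtained from the estimates should agree with the plain average of the A rows, because the design is balanced. A small Python sketch, using the coefficients and data from the example above:

```python
# Coefficients from the Parameter Estimates report of the example above
b0, b1 = -28.98792, -4.949583

# LS Mean of A: plug X1 = 1, X2 = 0, X3 = 0 into the model
ls_mean_A = b0 + b1 * 1          # -33.937503

# The plain average of the twelve A rows of Y
y_A = [18.33, 10.21, -2, -14.02, -21.37, -32.28,
       -36.99, -46.6, -56.95, -66.74, -75.84, -83]
raw_mean_A = sum(y_A) / len(y_A)  # -33.9375

print(ls_mean_A, raw_mean_A)
```

The two values match to within rounding of the reported coefficients, which is exactly what the balanced design guarantees.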
The following code calculates Std Error for LS Mean of A.
// x-vector for LS Mean of A: intercept, X1 = 1, X2 = 0, X3 = 0
x = [1 1 0 0];
// Std Errors from the Parameter Estimates report, on the diagonal
s = Diag( [7.034044 7.034044 9.94764 9.94764] );
// Correlation of Estimates report
R = [1 0 0 0,
	0 1 0 0,
	0 0 1 -0.5,
	0 0 -0.5 1];
// std error = sqrt of the variance of the linear combination x * beta
Sqrt( x * s * R * s * Transpose( x ) );
The values in "s" are from the Std Error column of the Parameter Estimates report, and the matrix "R" is from the Correlation of Estimates report. The product s*R*s gives the covariance matrix of the estimates, and x*s*R*s*transpose(x) gives the variance estimate of the prediction. The square root of that variance is the std error.
All other std errors are calculated the same way, using different x-vectors. E.g., for the LS Mean of B, x = [1 -1 0 0].
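For readers who want to verify the arithmetic outside JMP, here is the same computation in plain Python (a sketch; the numbers are the Std Errors and correlations from the reports above):

```python
import math

# Std Errors from the Parameter Estimates report (the diagonal of s)
se = [7.034044, 7.034044, 9.94764, 9.94764]

# Correlation of Estimates report (R)
R = [[1, 0,  0,    0],
     [0, 1,  0,    0],
     [0, 0,  1, -0.5],
     [0, 0, -0.5,  1]]

def lsmean_stderr(x):
    """sqrt(x * s * R * s * x'), the std error of the
    linear combination x of the parameter estimates."""
    var = sum(x[i] * se[i] * R[i][j] * se[j] * x[j]
              for i in range(4) for j in range(4))
    return math.sqrt(var)

print(lsmean_stderr([1, 1, 0, 0]))   # LS Mean of A
print(lsmean_stderr([1, -1, 0, 0]))  # LS Mean of B
```

Both calls print the same value, about 9.9476, because only the first two (equal) Std Errors enter the sum and their correlation is zero.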
Now I explain the plot. It depicts the estimated LS Means and their 95% confidence intervals. In theory, under the model assumptions, (estimate - true LS Mean) / Std Error follows a t-distribution whose degrees of freedom equal the error degrees of freedom, i.e., the total number of observations minus the number of estimated parameters (here 24 - 4 = 20). Therefore the confidence interval for an LS Mean is estimate +/- t(0.975, 20) * Std Error, which is the formula I mentioned.
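Putting the pieces together for level A, a short Python sketch of the interval (2.086 is the standard table value for the 0.975 quantile of a t-distribution with 20 degrees of freedom):

```python
# LS Mean of A and its Std Error, from the calculations above
ls_mean_A = -28.98792 + (-4.949583)   # b0 + b1 = -33.937503
std_err_A = 9.94764

# 0.975 quantile of t with 20 df (24 observations - 4 parameters)
t_crit = 2.086

lower = ls_mean_A - t_crit * std_err_A
upper = ls_mean_A + t_crit * std_err_A
print(lower, upper)   # roughly (-54.69, -13.19)
```

These endpoints should agree, up to rounding, with the interval drawn for A in the LS Means plot.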
Now I explain why the Std Errors are the same. This is due to a characteristic of the data: the data I am giving here come from a balanced design, i.e., an equal number of observations at each level. Try deleting a couple of rows from one level, and you should see unequal std errors.
Your latest analysis is more complex and involves random effects, but the principle of the explanation still applies.
------------
The post is updated with a correction to x-vector for LS Mean of B.