Hello @altug_bayram,
1- Since each individual tree in a boosted tree model is built on the residuals (errors) of the previous tree, I would say an estimate is a number to add to the mean of your response (what you try to model/predict), depending on which category/split you fall into. Tree-based methods give "step profilers" (see screenshot), where you can clearly see how changing the value of one factor may suddenly shift the response, depending on where the split is.
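To illustrate that step behavior, here is a minimal Python sketch (my own example with synthetic data and scikit-learn, not JMP output): a regression tree predicts a constant within each split region, so the profile of the prediction against one factor is a step function.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Synthetic data: a smooth response plus noise
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 10, size=(200, 1)), axis=0)
y = np.sin(X.ravel()) + rng.normal(0, 0.2, size=200)

# A shallow tree: its prediction is piecewise constant in X
tree = DecisionTreeRegressor(max_depth=2).fit(X, y)

# Evaluating on a grid shows the "steps": the prediction only changes
# when X crosses a split point
grid = np.linspace(0, 10, 11).reshape(-1, 1)
print(tree.predict(grid))
```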
2- In a bagging tree-based method (like Random Forest), the final result is the mean/average of the individual trees' predictions. In a boosting tree algorithm, you have to sum up all the "estimates" of the different trees (and add the mean result to them) to obtain the predicted result, depending on the "path" of splits taken: "The tree is fit based on the residuals of the previous layers, which allows each layer to correct the fit for bad fitting data from the previous layers. The final prediction for an observation is the sum of the predicted residuals for that observation over all the layers" (source: Boosted Tree - JMP 13 Predictive and Specialized Modeling [Book] (oreilly.com)).
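A quick sketch of the two aggregation rules side by side (again my own assumed example with scikit-learn, not JMP's implementation): bagging averages full predictions from trees fit on bootstrap samples, while boosting starts from the mean and sums successive corrections fit to the residuals.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X.ravel()) + rng.normal(0, 0.2, size=200)

# Bagging (e.g. Random Forest): each tree fits the response itself on a
# bootstrap sample; the final prediction is the AVERAGE over the trees.
bag_preds = []
for _ in range(25):
    idx = rng.integers(0, len(y), size=len(y))
    tree = DecisionTreeRegressor(max_depth=3).fit(X[idx], y[idx])
    bag_preds.append(tree.predict(X))
bagging_prediction = np.mean(bag_preds, axis=0)

# Boosting: start from the mean of the response, fit each tree to the
# residuals of the previous layers, and SUM the (shrunken) corrections.
boosting_prediction = np.full_like(y, y.mean())
for _ in range(25):
    residuals = y - boosting_prediction
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    boosting_prediction += 0.1 * tree.predict(X)
```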
3- Not sure about this one. I would either use the residuals of the model, or try bootstrapping the RASE of the model in order to build 95% bootstrap intervals and get a better assessment of the model's (Root Average Squared) prediction error, or use the K-fold cross-validation option of the "Model Screening" platform to get a better estimate of the model's RASE.
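For the bootstrap idea, a rough sketch of what I have in mind (placeholder data; `y_actual` and `y_pred` stand in for your validation actuals and predictions): resample (actual, predicted) pairs with replacement, recompute RASE each time, and take the percentiles of the bootstrap distribution.

```python
import numpy as np

def rase(actual, predicted):
    """Root Average Squared Error, i.e. the square root of the MSE."""
    return np.sqrt(np.mean((actual - predicted) ** 2))

rng = np.random.default_rng(2)
y_actual = rng.normal(size=100)                   # placeholder validation data
y_pred = y_actual + rng.normal(0, 0.3, size=100)  # placeholder predictions

# Resample the (actual, predicted) pairs and recompute RASE each time
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_actual), size=len(y_actual))
    boot.append(rase(y_actual[idx], y_pred[idx]))

# 95% percentile bootstrap interval for the prediction error
low, high = np.percentile(boot, [2.5, 97.5])
print(f"Bootstrap 95% interval for RASE: [{low:.3f}, {high:.3f}]")
```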
Hope this first answer helps,
Victor GUILLER
L'Oréal Data & Analytics
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)