The Predictor Screening platform is the Bootstrap Forest platform with a minimal number of tuning parameters to specify (just one at present: Number Trees), and it only provides the Column Contributions report. The magic is that the Predictor Screening platform has fixed default values for the remaining tuning parameters, and those defaults serve well for finding useful predictors.
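For readers outside JMP, here is a minimal sketch of the same idea in Python with scikit-learn (my analogy, not JMP's implementation): fit a random forest with library defaults, exposing only the number of trees, and rank predictors by importance, which stands in for the Column Contributions report.

```python
# Sketch: screening predictors with a default random forest.
# Only one tuning parameter (number of trees) is exposed; everything
# else stays at the library's defaults.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import pandas as pd

X, y = make_classification(n_samples=500, n_features=20, n_informative=4,
                           random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y)

# Feature importances play the role of the Column Contributions report.
contributions = pd.Series(forest.feature_importances_,
                          index=[f"X{i}" for i in range(X.shape[1])])
print(contributions.sort_values(ascending=False).head(10))
```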
If one wants to apply a boosted tree for the same purpose with a minimal number of tuning parameters, say a single parameter, that is a very difficult task. For example, a boosted tree relies much more heavily than a random forest on the choice and use of a validation method, and choosing a validation method is itself an art.
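To illustrate the validation dependence, here is a hedged scikit-learn sketch (again an analogy, not JMP's Boosted Tree): the number of boosting iterations is typically chosen by monitoring a held-out set via early stopping, so the validation choice directly shapes the fitted model.

```python
# Sketch: gradient boosting with early stopping on a validation split.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# validation_fraction + n_iter_no_change enable early stopping: boosting
# halts when the held-out score stops improving, so how you validate
# determines how many trees you end up with.
gbm = GradientBoostingClassifier(n_estimators=500, learning_rate=0.1,
                                 validation_fraction=0.2,
                                 n_iter_no_change=10, random_state=0)
gbm.fit(X, y)
print("boosting iterations actually used:", gbm.n_estimators_)
```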
The Predictor Screening platform serves the purpose of finding useful predictors. It is not a replacement for Bootstrap Forest. Your final model won't be a random forest if you use only Predictor Screening; you will probably build other models using the predictors that Predictor Screening finds. A benefit of using Predictor Screening to find useful predictors, rather than relying on the built-in variable selection methods of your final model, is that the screening does not depend on any parametric assumptions.
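As a rough sketch of that workflow in scikit-learn terms (the choice of five predictors and of logistic regression as the final model is purely for illustration): screen with a random forest, which makes no parametric assumptions, keep the top predictors, then fit whatever final model you prefer on just those columns.

```python
# Sketch: screen first, then fit a different final model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=30, n_informative=5,
                           random_state=0)

# Screening step: no distributional assumptions involved.
screen = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top = np.argsort(screen.feature_importances_)[::-1][:5]  # keep the best 5

# Final model: need not be a forest; here, a logistic regression fit
# only on the screened predictors.
final_model = LogisticRegression(max_iter=1000).fit(X[:, top], y)
print("screened columns:", top)
print("training accuracy:", final_model.score(X[:, top], y))
```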
Between random forest and boosted tree, I have not come across any studies that draw conclusions about their relative performance based on the type of the response variable.
Boosted Tree is an implementation of the gradient boosting method. So is XGBoost, and so are other implementations, e.g. LightGBM. Comparisons among them can be rather complicated, and probably subjective as well. But it is always exciting to play with different tools to understand their pros and cons.
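If you want to play along in Python, a rough sketch follows, assuming the xgboost and lightgbm packages are installed; all three implementations below expose a scikit-learn-style fit/predict interface, which makes side-by-side experiments straightforward.

```python
# Sketch: cross-validating three gradient boosting implementations
# on the same synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingClassifier
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    "sklearn GBM": GradientBoostingClassifier(random_state=0),
    "XGBoost": XGBClassifier(random_state=0),
    "LightGBM": LGBMClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```

Defaults differ substantially across the three packages, so a comparison like this says as much about the default settings as about the algorithms themselves; that is part of why such comparisons end up subjective.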