Hi @lala,
What is the question exactly? Both algorithms are tree-based models, but Random Forest (also called Bootstrap Forest in JMP Pro) is a "bagging" (bootstrap aggregating) model type, while LightGBM is a boosting model type.
That means Random Forest consists of several trees trained in parallel on different datasets. These datasets are created with the bootstrap method (sampling the original dataset with replacement), so they are similar but not identical, which helps each tree learn slightly different patterns. Once the trees are trained, the final prediction is the average of the individual tree predictions (for regression) or the most frequent class across the trees (for classification).

Image from https://www.geeksforgeeks.org/machine-learning/bagging-vs-boosting-in-machine-learning/
Random Forest also uses random feature selection at each split, which makes it well suited to handling multicollinearity and allows the creation of more diverse trees (while reducing the risk of overfitting).
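
If it helps to see the bagging idea in code, here is a minimal Python sketch (the synthetic data, the number of trees, and the use of scikit-learn's DecisionTreeRegressor are illustrative assumptions, not anything specific to JMP):

```python
# Minimal sketch of the bagging idea behind Random Forest.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.2, size=200)

n_trees = 50
trees = []
for _ in range(n_trees):
    # Bootstrap sample: draw rows with replacement from the original dataset
    idx = rng.integers(0, len(X), size=len(X))
    # max_features="sqrt" mimics the random feature selection at each split
    tree = DecisionTreeRegressor(max_features="sqrt")
    tree.fit(X[idx], y[idx])
    trees.append(tree)

# Final prediction = average of the individual tree predictions (regression)
X_new = np.array([[0.5, -1.0]])
prediction = np.mean([t.predict(X_new) for t in trees], axis=0)
print(prediction)
```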
In contrast, boosted trees are trained sequentially: you start with a simple tree, evaluate the residuals from this model, and then train the second tree on those residuals (focusing on where they are largest) to improve the prediction performance of the ensemble. So each tree tries to "fix" the prediction errors of the previous one. At the end of the tree "chain", you obtain your prediction result (class or value, depending on the task):

Image from https://www.geeksforgeeks.org/machine-learning/bagging-vs-boosting-in-machine-learning/
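
A minimal sketch of that residual-fitting loop (same illustrative synthetic data and hyperparameters as above, not a full gradient-boosting implementation) could look like this:

```python
# Minimal sketch of the boosting idea: each new tree is fit on the residuals
# of the current ensemble.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.2, size=200)

learning_rate = 0.1
prediction = np.full(len(y), y.mean())    # start from a simple baseline
trees = []
for _ in range(100):
    residuals = y - prediction            # what the ensemble still gets wrong
    tree = DecisionTreeRegressor(max_depth=3)
    tree.fit(X, residuals)                # next tree focuses on those errors
    prediction += learning_rate * tree.predict(X)
    trees.append(tree)

# Prediction for a new point = baseline + sum of all the trees' corrections
x_new = np.array([[0.5, -1.0]])
y_new = y.mean() + learning_rate * sum(t.predict(x_new)[0] for t in trees)
print(y_new)
```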
So the training mode (parallel vs. sequential) is the biggest difference between the two algorithms you mention:

The particularity of LightGBM compared to other boosting tree-based models is the way it grows its trees: it uses leaf-wise growth, whereas XGBoost and most other boosting models grow their trees level-wise, which means a node can't be developed further until the current level is complete (the tree stays "balanced"):

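To make that difference concrete, here is how it typically shows up in the Python APIs of the two libraries (a sketch assuming the lightgbm and xgboost packages are installed; the parameter values are only illustrative):

```python
import lightgbm as lgb
import xgboost as xgb

# LightGBM grows trees leaf-wise: it keeps splitting the leaf with the largest
# loss reduction, so complexity is mainly capped by num_leaves, and depth is
# unconstrained by default (max_depth=-1).
lgbm_model = lgb.LGBMRegressor(num_leaves=31, max_depth=-1, n_estimators=200)

# XGBoost grows trees level-wise (depth-wise) by default: every node on a level
# is split before going deeper, so complexity is mainly capped by max_depth.
xgb_model = xgb.XGBRegressor(tree_method="hist", grow_policy="depthwise",
                             max_depth=6, n_estimators=200)
```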
The choice between these models depends on the task and on the bias/variance tradeoff: bagging algorithms like Random Forest mainly reduce variance (overfitting), while boosting models like Boosted Trees mainly reduce bias.
Hope this answer helps you,
Victor GUILLER
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)