Hi @MetaLizard62080,
Studentized residuals may be a good way to identify outliers based on an assumed model. See more information about how studentized residuals are calculated here: Row Diagnostics (jmp.com)
"Points that fall outside the red limits should be treated as probable outliers. Points that fall outside the green limits but within the red limits should be treated as possible outliers, but with less certainty." As the definition makes clear, there is no definitive certainty about which points are outliers, since the diagnostic depends on the model you are fitting.
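To make the mechanics concrete, here is a minimal numpy-only sketch of externally studentized residuals (this is not JMP's code; the function name is illustrative, and JMP's green/red limits are computed from the fitted model rather than fixed cut-offs):

```python
import numpy as np

def studentized_residuals(X, y):
    """Externally studentized residuals for an OLS fit.

    X: (n, p) design matrix including the intercept column; y: (n,) response.
    Each residual is scaled by an error estimate computed *without* that row,
    so a single extreme point cannot mask itself.
    """
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)    # leverages (diag of hat matrix)
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta                               # ordinary residuals
    sse = e @ e
    s2_loo = (sse - e**2 / (1 - h)) / (n - p - 1)  # leave-one-out error variance
    return e / np.sqrt(s2_loo * (1 - h))
```

Points with |t| above roughly 3 would play the role of "probable" outliers, and 2 to 3 "possible" ones, in the same spirit as the red and green limits above (the exact limits in JMP depend on the model and sample size).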
Maybe the model you're fitting is too simple or not relevant for all the points and measurements collected (it seems these quadratic effects may be relevant)?
What you could do is analyze your dataset with a model-agnostic outlier detection method, to check whether these two points really look "strange" or whether it is only a problem of inadequate model fitting. Model-agnostic outlier detection methods, like Mahalanobis or jackknife distances, don't rely on a specified model; they simply compare distances between points based on the variables/factors/features. So an outlier flagged by this type of method indicates that the point looks "strange" and doesn't seem to belong to the factor distributions of the other points.
Outliers Episode 3: Detecting outliers using the Mahalanobis distance (and T2)
Outliers Episode 4: Detecting outliers using jackknife distance
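As an illustration of the idea behind these two episodes (a numpy sketch on the factor columns only, not the JMP implementation), the Mahalanobis distance measures how far each point sits from the multivariate centre of the data:

```python
import numpy as np

def mahalanobis_distances(X):
    """Distance of each row of X from the column means, scaled by the
    covariance matrix so that correlated/stretched directions are
    accounted for (unlike a plain Euclidean distance)."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))
```

The jackknife variant recomputes the mean and covariance leaving each point out in turn, so an extreme point cannot inflate the covariance and hide itself; the episodes linked above show both directly in JMP.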
From the limited info provided, and since the diagnostic was only done with respect to a specific model fit, you shouldn't remove any points unless you have justified that you "can" do it, based both on statistical properties (outliers, ...) AND on domain expertise (erroneous values, a typo in the measurement recording, a bug/problem in the measurement system, ...). In your example, you can still compare the outcomes of two models: one fitted with all points, and one with the two "strange" points "hidden and excluded" (or simply add a column that weights these two outlier points less than the others), and see whether the outcomes differ much. But before that, you may want to check some other options and metrics, like PRESS RMSE/R², Cook's distances, and the multiple linear regression assumptions, to verify your model is acceptable. Maybe your data could also benefit from a Box-Cox Y transformation?
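For the PRESS and Cook's distance checks, here is a minimal numpy sketch of what these metrics compute (helper names are illustrative; in JMP they are available directly from the Fit Model report):

```python
import numpy as np

def press_rmse(X, y):
    """Leave-one-out prediction RMSE (PRESS), computed from the hat matrix
    without actually refitting the model n times."""
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    e = y - H @ y                  # ordinary residuals
    loo = e / (1.0 - np.diag(H))   # leave-one-out (PRESS) residuals
    return np.sqrt(np.mean(loo**2))

def cooks_distances(X, y):
    """Cook's distance: how much each point shifts the fitted coefficients."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)
    e = y - H @ y
    s2 = (e @ e) / (n - p)                      # residual variance estimate
    return (e**2 / (p * s2)) * h / (1.0 - h)**2
```

If the PRESS RMSE drops a lot when the two suspect rows are excluded (e.g. via `np.delete` on X and y), that is a hint these points dominate the fit; it is evidence for a closer look, not by itself a licence to delete them.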
Hope this answer helps you,
Victor GUILLER
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)