Hi @Dani: You raise some good issues. I'll try to respond in very general terms, so there may be cases where what I write below doesn't hold. That said, here are my thoughts, in general and in no particular order.
(1). If the value of the intercept (Y[0]) is important to you, then yes, for a quick and easy way to see it, avoid the centered polynomial (as you say). However, be very careful here. For example, say the range of X for your data is -40 to -10. Then predicting Y[0] is extrapolating (predicting outside the range of data that the model was built on), which can be very problematic for polynomials. So only do this if you have some scientific understanding that the model holds beyond your data. Now, if your data run from -49 to -1, then Y[0] is still an extrapolation, but a smaller one, so it may not be as problematic.
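To see why this matters, here is a small sketch (made-up data, not yours): suppose the true relationship is actually exponential, but we fit a quadratic to data observed only on [-40, -10]. The fit is fine in-range, yet the extrapolated "intercept" at X = 0 is badly wrong.

```python
import numpy as np

# Hypothetical illustration: assumed true curve is exp(x/10), observed only
# on [-40, -10]; we fit a quadratic, then ask for Y[0] by extrapolation.
x = np.linspace(-40, -10, 31)
y = np.exp(x / 10)  # no noise, to isolate the extrapolation problem

coefs = np.polyfit(x, y, deg=2)  # quadratic least-squares fit
in_range_err = np.max(np.abs(np.polyval(coefs, x) - y))
extrap_err = abs(np.polyval(coefs, 0.0) - np.exp(0.0))  # true Y[0] = 1

print(f"max in-range error: {in_range_err:.4f}")
print(f"error at X = 0:     {extrap_err:.4f}")
```

The in-range errors are tiny, while the error at X = 0 is an order of magnitude or two larger, even with noise-free data.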
(2). In polynomial regression, the only p-value that matters is the one associated with the highest-order term; the others are just along for the ride. If the highest-order term (the squared term in your example) is "significant", you are done. If not, remove that term and refit the model.
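As a sketch of that procedure (made-up data and true coefficients, not your model), here is the t-test for the highest-order coefficient computed by hand from the model matrix:

```python
import numpy as np
from scipy import stats

# Simulated example: assumed true model y = 2 - 3x + 0.5x^2 + noise.
rng = np.random.default_rng(0)
x = np.linspace(-40, -10, 30)
y = 2.0 - 3.0 * x + 0.5 * x**2 + rng.normal(0, 1, x.size)

X = np.column_stack([np.ones_like(x), x, x**2])  # columns: 1, X, X^2
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
df = x.size - X.shape[1]
sigma2 = resid @ resid / df                       # residual variance
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_sq = beta[2] / se[2]                            # t-stat for the X^2 term
p_sq = 2 * stats.t.sf(abs(t_sq), df)              # two-sided p-value

print(f"p-value for X^2 term: {p_sq:.3g}")
# If this were not significant, drop the X^2 column and refit with just 1, X.
```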
(3). That said, all the p-values are testing whether or not that parameter equals zero. For the centered model, the "intercept" and X parameters don't have an interpretation that is easily intuited. One thing you will notice, though, is that when X is at its mean (-25), the mean of Y is -125875 - 10,153*[-25] = 127950. And notice that the p-values for the squared term are the same for both models, because they are testing the same thing (X^2 coefficient = 0, or not).
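You can verify both facts numerically. A sketch with simulated data (any quadratic data will do): the raw and the mean-centered fits give the same X^2 coefficient and identical fitted values, and the centered "intercept" is the fitted value of Y at X = mean(X).

```python
import numpy as np

# Simulated data; the point is the algebra, not these particular numbers.
rng = np.random.default_rng(1)
x = np.linspace(-40, -10, 30)
y = 2.0 - 3.0 * x + 0.5 * x**2 + rng.normal(0, 1, x.size)

xc = x - x.mean()  # centered predictor
X_raw = np.column_stack([np.ones_like(x), x, x**2])
X_cen = np.column_stack([np.ones_like(x), xc, xc**2])
b_raw, *_ = np.linalg.lstsq(X_raw, y, rcond=None)
b_cen, *_ = np.linalg.lstsq(X_cen, y, rcond=None)

same_sq = np.allclose(b_raw[2], b_cen[2])            # same X^2 coefficient
same_fit = np.allclose(X_raw @ b_raw, X_cen @ b_cen)  # identical fitted values
# Centered "intercept" = fitted Y at X = mean(X):
int_is_mean_fit = np.allclose(b_cen[0], np.polyval(b_raw[::-1], x.mean()))
print(same_sq, same_fit, int_is_mean_fit)
```

Because the two model matrices span the same column space, the fitted curves are identical; centering only re-labels the coefficients.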
(4). For both models, and from a model-matrix point of view, the intercepts are there to ensure a least-squares solution. And, as I said in (1), the intercept is easily interpreted for the uncentered model. In the centered model, however, it doesn't have an easy interpretation; but the "intercept" is necessary for a least-squares solution, so I'd just smile and let it go along for the ride.
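One way to see the intercept's role (again a sketch with simulated data): drop the column of ones from the model matrix and the least-squares fit cannot improve; because the no-intercept model is nested inside the full one, its residual sum of squares can never be lower than the full model's.

```python
import numpy as np

# Simulated data with a nonzero true intercept (assumed values, for illustration).
rng = np.random.default_rng(2)
x = np.linspace(-40, -10, 30)
y = 2.0 - 3.0 * x + 0.5 * x**2 + rng.normal(0, 1, x.size)

X_full = np.column_stack([np.ones_like(x), x, x**2])  # with intercept column
X_noint = np.column_stack([x, x**2])                  # intercept dropped

def rss(X, y):
    """Residual sum of squares of the least-squares fit of y on columns of X."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ b) ** 2))

rss_full, rss_noint = rss(X_full, y), rss(X_noint, y)
print(rss_full <= rss_noint)  # the intercept can only help the fit
```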