I have strong opinions about this issue. I want your experiment to succeed.
The continuous factor range should always be set as wide as possible. Such a design provides the best estimates of the factor effects and the highest power for hypothesis tests about those effects for a given number of runs. Limiting the factor range to 10-100, or including these levels within a 0-500 range, will compromise both the estimation and the power. You can see this loss for yourself in the Design Evaluation outline after you make the design. You could also use the Compare Designs platform to see the difference between the two design options.
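To make the estimation and power argument concrete, here is a minimal Python sketch (not JMP; the noise level and run count are arbitrary assumptions, and 10-100 versus 10-500 stand in for the narrow and wide options, with the wide range avoiding a zero lower bound for the reason given below). It compares the standard error of a fitted slope for the two ranges:

```python
import numpy as np

# Eight-run, two-level design: four runs at each end of the factor range.
# The noise standard deviation is an arbitrary value chosen for illustration.
sigma = 10.0
n_per_level = 4

def slope_se(low, high):
    """Standard error of the slope for a two-level design.

    For simple linear regression, Var(b1) = sigma^2 / sum((x - xbar)^2),
    so pushing the two levels farther apart shrinks the standard error.
    """
    x = np.array([low] * n_per_level + [high] * n_per_level, dtype=float)
    sxx = np.sum((x - x.mean()) ** 2)
    return sigma / np.sqrt(sxx)

print("SE of slope, narrow range 10-100:", slope_se(10, 100))
print("SE of slope, wide range   10-500:", slope_se(10, 500))
```

For the same eight runs, the wider range gives a standard error roughly five times smaller, which translates directly into narrower confidence intervals and higher power for the test of the slope. The Design Evaluation outline and Compare Designs show the same gain in JMP's own terms.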
(By the way, I do not recommend using a factor range with a lower bound of zero. This practice turns the continuous factor into essentially a categorical factor: absent or present. It is better to use a non-zero value for the lower bound so that the factor is always present, but to varying degrees. That consideration is science and engineering, not statistics.)
If your colleagues insist on including these additional levels, then another option, in addition to Ian's answer, is to define this factor with the Discrete Numeric choice. This approach uses a principled design algorithm instead of an ad hoc process of manually adding runs, so it maximizes the information available from your design.
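To illustrate what a principled design algorithm does with a Discrete Numeric factor, here is a generic D-optimality sketch in Python (not JMP's actual algorithm; the five levels are hypothetical stand-ins for whatever levels your colleagues want to include). It enumerates every 8-run design built from the allowed levels and keeps the one that maximizes det(X'X) for a quadratic model:

```python
import itertools
import numpy as np

# Hypothetical allowed settings for the Discrete Numeric factor.
levels = [10, 50, 100, 250, 500]
n_runs = 8

def model_matrix(x):
    """Columns for an intercept + linear + quadratic model."""
    x = np.asarray(x, dtype=float)
    return np.column_stack([np.ones_like(x), x, x ** 2])

best_design, best_det = None, -np.inf
# Run order does not affect det(X'X), so enumerating multisets of levels
# covers every distinct 8-run design; small enough to search exhaustively.
for design in itertools.combinations_with_replacement(levels, n_runs):
    X = model_matrix(design)
    d = np.linalg.det(X.T @ X)
    if d > best_det:
        best_design, best_det = design, d

print("Best 8-run design by D-optimality:", best_design)
```

The point is that the algorithm places the runs to support the assumed model, rather than sprinkling in extra levels by hand and hoping the information works out.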
You are designing an experiment, not a test. Use the model, not the design, to find the "most interesting" features or levels. The purpose of an experiment is to provide the best data to fit a given model. The purpose of a test is to observe the response at a particular factor level. They are not the same thing.