Hi all,
I have discovered a couple more interesting things, so I wanted to take a moment to update this thread in case it helps anyone in the future:
1) It turns out that when I right-click to create the table and then select Design Evaluation from the DOE menu, the higher-order model terms are somehow dropped. That is why the average prediction variance (APV) falls so sharply, to ~0.2; the figure is simply not correct. Having to add all the terms back by hand is inefficient, so this is probably not the best approach.
2) If I create the custom design and save it, then round all the values and save that as a second table, and compare the two with the "Compare Designs" feature under DOE > Design Diagnostics, the two designs are virtually identical in every respect, including APV, which in this case is ~0.5. They are not exactly the same, of course, but the APV of the rounded design is certainly _not_ 7 times larger. So this appears to be the only reasonable way of comparing the APV of two designs.
Ultimately, both of you are correct that sticking with "continuous" for the factors and rounding doesn't make a huge difference, but that isn't necessarily obvious given the APV weirdness unless you use the Compare Designs tool.
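For anyone curious why dropping terms shrinks APV so much while rounding barely moves it, here is a rough numpy sketch. This is not what JMP actually runs under the hood, just the textbook relative prediction variance x'(X'X)⁻¹x averaged over the factor region by Monte Carlo; the two-factor design, jitter amount, and rounding precision are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def model_matrix(pts, quadratic=True):
    """Intercept, main effects, interaction, and (optionally) quadratics for 2 factors."""
    a, b = pts[:, 0], pts[:, 1]
    cols = [np.ones_like(a), a, b, a * b]
    if quadratic:
        cols += [a**2, b**2]
    return np.column_stack(cols)

def apv(design, quadratic=True, n_mc=20000):
    """Monte Carlo average of the relative prediction variance x'(X'X)^-1 x over [-1,1]^2."""
    X = model_matrix(design, quadratic)
    XtX_inv = np.linalg.inv(X.T @ X)
    F = model_matrix(rng.uniform(-1, 1, size=(n_mc, 2)), quadratic)
    # per-row quadratic form F[i] @ XtX_inv @ F[i], averaged over the region
    return float(np.mean(np.einsum('ij,jk,ik->i', F, XtX_inv, F)))

# Hypothetical stand-in for an optimizer's output: a face-centered design,
# jittered slightly so the factor settings aren't round numbers.
base = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
                 [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], float)
exact = base + rng.uniform(-0.03, 0.03, size=base.shape)
rounded = np.round(exact, 1)  # rounding recovers the face-centered settings here

print("full model, exact settings:  ", round(apv(exact), 3))
print("full model, rounded settings:", round(apv(rounded), 3))
print("quadratics dropped:          ", round(apv(exact, quadratic=False), 3))
```

The first two numbers come out essentially equal (rounding nudges the design points only slightly), while silently dropping the quadratic terms produces a clearly smaller APV, which is the same pattern as the ~0.5 vs ~0.2 figures above.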
Anyway, thank you for your help. I doubt that I would have figured this out without all of the feedback.