First, welcome to the community.
I'll share my thoughts from a perhaps different perspective:
1. How data should be analyzed, and which tools you use for the analysis, is a function of how the data was acquired. (In fact, this extends further: the questions you can answer, the conclusions you can draw, and your confidence in extrapolating the results are all a function of how the data was acquired.)
2. While the appropriate "designation" of blocks can be debated, if you can specifically identify which factors are confounded with the blocks and, as Mark suggests, those "levels" can be reproduced, then those blocks can be treated as fixed effects. I would debate whether selecting them randomly (or treating them as random effects) increases the inference space versus specifically manipulating them at bold levels. I personally think you have a greater likelihood of obtaining a broader inference space by specifically manipulating the blocks.
3. Treating the blocks as a fixed effect has the huge advantage of letting you quantify block-by-factor interactions (essentially noise-by-factor interactions), which is perhaps the best way to quantify the robustness of your design factors (see the sketch after this list). Are the effects of your design factors consistent over changing noise?
4. Interestingly, blocking has the additional benefit of simultaneously increasing the inference space and increasing the precision of the design (as defined by Box). This is because the within-block comparisons isolate the factor effects (the noise is held constant within each block), and the noise is then changed between blocks so its effect can be assigned.
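To make point 3 concrete, here is a minimal sketch in Python (pandas/statsmodels), assuming a simple two-factor design run in two blocks; the column names (A, B, block, y) and the data values are purely hypothetical, not from the original discussion:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: a two-factor design (A, B) replicated in two blocks.
df = pd.DataFrame({
    "A":     [-1, 1, -1, 1, -1, 1, -1, 1],
    "B":     [-1, -1, 1, 1, -1, -1, 1, 1],
    "block": ["b1"] * 4 + ["b2"] * 4,
    "y":     [10.1, 14.8, 11.2, 16.0, 12.3, 17.5, 12.9, 18.4],
})

# Block entered as a fixed effect, crossed with the design factors so the
# block-by-factor (noise-by-factor) interactions can be estimated directly.
model = smf.ols("y ~ C(block) * (A + B)", data=df).fit()
print(model.summary())

# Small C(block):A and C(block):B terms suggest the factor effects are
# consistent across the changing (blocked) noise, i.e. the design is robust.
```

If you instead treated the blocks as random effects (the alternative debated in point 2), the analogous analysis would be a mixed model rather than this OLS fit, and you would lose the explicit block-by-factor interaction terms shown above.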
"Block what you can, randomize what you cannot" G.E.P. Box
That is: block on the noise you can identify and manage over the execution of the experiment; randomize for the noise that has not been identified or cannot be managed over the duration of the experiment.
"All models are wrong, some are useful" G.E.P. Box