There are a number of papers and tutorials that discuss DSDs, for example:
https://www.jmp.com/en_us/whitepapers/jmp/definitive-screening-designs-two-level-categorical.html
https://community.jmp.com/t5/Mastering-JMP/Using-Definitive-Screening-Designs-to-Get-More-Informatio...
Brad Jones used to have a blog on the topic.
The classical designs are orthogonal/balanced and can be very effective when used in a sequential approach. The idea is to start with a bold design space (many factors at two bold levels) using fractionated designs, then iterate. Each iteration leads the experimenter to a smaller set of factors at more ideal levels and, hopefully, toward an optimum design space and a predictive model. In many cases these designs are additive and easily de-aliased (e.g., by folding over). They can also be used when you are low on the knowledge continuum, and they are, perhaps, easier to teach to non-statisticians. The "drawback" is that they assume linear relationships, at least in the initial designs. This is not a bad assumption, since we typically build models based on hierarchy, but if the experimenter has a reasonable model to start with and suspects non-linear effects, the linear assumption may not be desirable. Sequentially, you can augment the classical designs with center points to test for non-linear effects, but that test only signals curvature somewhere in the system; it does not identify which factor is responsible.
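To make the center-point idea concrete, here is a minimal numpy sketch (a generic illustration, not tied to any particular software or data set): a 2^(3-1) fractional factorial written down directly, then augmented with center runs. Comparing the average response at the center to the average at the factorial corners flags curvature, but it cannot say which factor is curved.

```python
import numpy as np
import itertools

# 2^(3-1) fractional factorial: factors A, B at +/-1, with generator C = AB.
base = np.array(list(itertools.product([-1, 1], repeat=2)))   # full 2^2 in A, B
design = np.column_stack([base, base[:, 0] * base[:, 1]])     # C aliased with AB

# Augment with center points (all factors at 0) for a curvature check:
# if the mean response at the center differs from the mean of the factorial
# corners, at least one quadratic effect is present, but the check does not
# say which factor is curved.
n_center = 4
design_aug = np.vstack([design, np.zeros((n_center, 3))])
print(design_aug)
```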
DSDs offer an opportunity to include 3-level factors in the design and therefore estimate quadratic effects. Of course, this means your factors must be quantitative, but there are options for including categorical factors in a DSD (see the referenced paper). They are very efficient at assessing relatively large numbers of factors (they are meant to be screening designs, after all) with relatively good resolution: main effects are not aliased with two-factor interactions or quadratic effects. They can also detect departures from the linear assumption with some degree of precision.
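For intuition about where a DSD gets its quadratic information, here is a rough numpy sketch of the structure. The published designs (Jones & Nachtsheim, and those generated by JMP) may differ in the particular +/-1 pattern and run order; this uses a standard 6x6 conference matrix only to illustrate the fold-over-plus-center-run construction.

```python
import numpy as np

# 6x6 conference matrix (Paley construction, q = 5): zero diagonal, +/-1
# off-diagonal, columns mutually orthogonal.
C = np.array([
    [ 0,  1,  1,  1,  1,  1],
    [ 1,  0,  1, -1, -1,  1],
    [ 1,  1,  0,  1, -1, -1],
    [ 1, -1,  1,  0,  1, -1],
    [ 1, -1, -1,  1,  0,  1],
    [ 1,  1, -1, -1,  1,  0],
])

# DSD structure for 6 factors: the conference matrix, its fold-over (mirror
# image), and one overall center run -> 2*6 + 1 = 13 runs, each factor at
# three levels (-1, 0, +1).
dsd = np.vstack([C, -C, np.zeros((1, 6))])

# Main-effect columns are mutually orthogonal and orthogonal to every
# quadratic column.
main = dsd
quad = dsd ** 2
print(main.T @ main)            # diagonal matrix: orthogonal main effects
print(np.round(main.T @ quad))  # all zeros: main effects clear of quadratics
```

The 13 runs handle 6 factors at 3 levels each, and the printout confirms that main-effect estimates are not contaminated by quadratic terms, which is why a DSD can point toward the specific factor that is curved rather than only signalling that curvature exists.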
Selection of the appropriate design is always situation dependent. There are a number of criteria that affect the decision. And, BTW, it is impossible to know what the "right" design is until after you understand the causal structure. My advice is always to design multiple candidate experiments with the factors you have chosen to experiment on. Predict what knowledge could be gained with each option (e.g., what order of model, the precision of the design, the inference space, level-setting, etc.) AND what you will do with that knowledge. Weigh the potential knowledge gained against the resources required. Then choose a design and be prepared to iterate.
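One way to put the "predict what knowledge could be gained" step into practice, before spending any runs, is to build the model matrix each candidate design would support and look at the correlations among its columns. The helper below is a generic sketch (the function name and the tiny example design are mine, purely for illustration); values near 1 mean the corresponding effects would be confounded.

```python
import numpy as np
from itertools import combinations

def worst_correlation(X):
    """For a coded design X (rows = runs, columns = factors at -1/0/+1),
    build main-effect, two-factor-interaction, and quadratic columns and
    return the largest |correlation| between any two distinct columns."""
    k = X.shape[1]
    cols = [X[:, j] for j in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    cols += [X[:, j] ** 2 for j in range(k)]
    M = np.column_stack(cols)
    M = M[:, M.std(axis=0) > 0]   # drop constant columns (e.g., x^2 at two levels)
    R = np.corrcoef(M, rowvar=False)
    return np.max(np.abs(R - np.eye(R.shape[0])))

# Example: a 2^(3-1) fractional factorial (generator C = AB) is fully aliased,
# so the worst correlation is 1.0; a larger or higher-resolution candidate
# would score lower.
ff = np.array([[-1, -1,  1],
               [ 1, -1, -1],
               [-1,  1, -1],
               [ 1,  1,  1]])
print(worst_correlation(ff))   # -> 1.0
```

Running the same check on a DSD candidate shows no pair of effects completely confounded, while the fractional factorial buys its small run count with heavy aliasing; either trade-off can be the right one depending on the resources available and the knowledge you actually need.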
"All models are wrong, some are useful" G.E.P. Box