In many cases where an algorithm solves a provably NP-hard problem, its intractability is a worst-case bound that applies only to pathological inputs. In these cases, by exploiting knowledge of the specific structure of “real-world” inputs, the algorithm can be shown to be much more efficient in the “normal” case. However, when studying a new problem, this can be hard to show if it is not obvious which structural constraints exist, and which of them lead to gains in efficiency. In this paper, we show how one can describe the underlying problem declaratively as a CSP process and use the FDR refinement checker to explore the complexity space of the problem. By identifying which optimizations allow FDR to find solutions more efficiently, we can determine the conditions under which the worst-case-intractable algorithm executes efficiently, and incorporate analogous optimizations into the algorithm itself to exploit those conditions.
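As a minimal illustration (not drawn from the paper itself), a search problem can be phrased as a machine-readable CSP (CSP_M) script for FDR: candidate choices are modelled as events, the goal is signalled by a distinguished event, and an assertion that the goal event never occurs makes FDR's counterexample a witness solution. The channel names pick and done and the toy "two cells must differ" constraint below are illustrative assumptions.

-- Toy search problem: assign a value to each of two cells so that they differ.
-- (Illustrative sketch only; names and constraint are assumptions, not the paper's model.)
Value = {0..2}

channel pick : {1..2}.Value   -- pick.i.v records assigning value v to cell i
channel done                  -- signalled once a valid assignment has been found

-- Choose a value for each cell, then signal 'done' if the constraint holds.
Search = pick.1?v1 -> pick.2?v2 ->
           (if v1 != v2 then done -> STOP else STOP)

-- A specification that allows only 'pick' events; asserting that Search
-- trace-refines it asks FDR whether 'done' can ever occur.  A counterexample
-- trace to this assertion exhibits a concrete solution to the problem.
NoDone = pick?i?v -> NoDone
assert NoDone [T= Search

In this style, whatever lets FDR dispose of such an assertion quickly points to structure in the input that the concrete algorithm could exploit in an analogous way.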