Just the other day, I was working on some performance optimizations for a bit of code and remarked to a former colleague: “I’m worried if we do X, it will be too slow.” That colleague, none other than Rich Hickey, quipped back: “Why worry when you can measure?” This was a potent bit of advice. And he was right: the two alternatives I was considering were small enough that it was reasonable to implement and measure both.
After measuring the two alternatives more deeply, I was left with something rather profound: a reduction of risk. Recall that risk is the product of a negative outcome’s likelihood and its impact. By measuring and spending time with each solution, I came away with a broader understanding of their performance characteristics and implementations, reducing the likelihood that I would fail to mitigate the risk.
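That definition of risk is easy to make concrete. Here is a toy illustration of the arithmetic; the scenario and numbers are entirely hypothetical, chosen only to show the shape of the calculation:

```python
# Risk = likelihood of a negative outcome x its impact.
# The numbers below are hypothetical, for illustration only.
likelihood = 0.10        # 10% chance the slower alternative misses our budget
impact_dollars = 50_000  # estimated cost if that happens

risk = likelihood * impact_dollars
print(risk)  # 5000.0
```

Measuring each alternative shrinks the likelihood term, and with it the overall risk, even when the impact stays fixed.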
You should seek to emulate this careful approach in your own work. Rather than thrash on a problem or reject a solution outright, put your feelings on an objective footing with a measure of each alternative.
By varying what it means to measure, you can consider alternatives of all shapes and sizes. In my example, measuring meant timing the average running time of two alternative implementations. This is probably the most concrete of measures, and it works for many comparisons, but your own measures need not be so concrete. You could measure the quality of expressing code one way against another, or compare alternative technologies by building a matrix of their attributes.
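The concrete case, timing two implementations, can be sketched in a few lines with Python’s standard `timeit` module. The two functions below are hypothetical stand-ins (not the code from my story); the point is the shape of the comparison, not the specific task:

```python
import timeit

# Two hypothetical implementations of the same task:
# building a string from many small pieces.
def concat_loop(n):
    s = ""
    for i in range(n):
        s += str(i)
    return s

def join_pieces(n):
    return "".join(str(i) for i in range(n))

# Measure the average running time of each alternative.
N_ITEMS = 1_000
REPEATS = 200

for fn in (concat_loop, join_pieces):
    avg = timeit.timeit(lambda: fn(N_ITEMS), number=REPEATS) / REPEATS
    print(f"{fn.__name__}: {avg * 1e6:.1f} µs per call")
```

Before trusting the numbers, it is worth checking that both alternatives actually produce the same result; a fast wrong answer measures nothing.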
Whatever measure you choose, remember that it does not need to be exhaustive or 100% accurate; it just needs to be enough for the task at hand.