The Wrong Tool

Have you ever bought something, only to find out that it just wasn't quite right for you? I don't mean the usual buyer's remorse over a large purchase, like a new car. I mean you bought a sports car, and somehow missed the fact that you like to haul your motorcycle to the desert on weekends. Oops!

Not surprisingly, people do this frequently with small purchases too, for example apps for your phone. You're hoping for a specific utility: you read a description, it sounds right, so you buy it. It might even seem to work OK in simple tests. This happened to me recently with a small external microphone I bought for my smartphone to do audio recording. It worked for a couple of minutes, but when I tried to actually use it, the audio was garbled or non-existent for much of the recording. Argh!

Frequently, this is exactly what happens when people buy development tools. They take advice from someone who has used the tool individually, or in a limited environment. When they test the tool, perhaps in a pilot program, everything appears fine. Then, when deployment begins, so do the problems. False positives, configuration problems, poor workflow… the list is seemingly endless and sadly too familiar.

What happens is that the selection process for the tool is inadequate. Most POCs (proofs of concept) that I see are really simple bake-offs. Someone has an idea of what they think a tool should do, and they create the good old checklist of features. Sometimes this is done with the help of a single vendor, which is a recipe for disaster. Other products are then scored against the checklist rather than looked at holistically to see what else they have to offer.

In addition, this methodology fails to take into account the biggest costs and the most likely hurdles to success. To select the right tool, you have to consider how it will work in your world.

If, for example, your developers spend their days in Eclipse and you select a tool that doesn't run in Eclipse, then you force them to spend time opening a second tool, possibly dealing with extraneous configuration. Not to mention that when they get the results, those results aren't in the place they're needed: the code editor.

Such issues compound across time and teams, carrying a tremendous burden with them. For example, about 10 years ago people became enamored with the idea of running things like static analysis in batch, and then emailing the results back to developers. While this may be the simplest way to set up static analysis, it's nearly the worst way to deal with the results. You don't need error messages in your email client, you need them in your editor. (See my earlier post on What Went Wrong with Static Analysis?)
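To make that concrete, here's a minimal sketch of the editor-friendly alternative. It assumes a git repository and uses pyflakes purely as a stand-in analyzer (not any particular commercial tool): it checks only the files a developer has just changed and prints findings in the file:line: message format that editors and IDE consoles already know how to parse and jump to, instead of batching them up for an email.

```python
#!/usr/bin/env python3
"""Run a static analyzer on locally changed files and print findings
in the file:line: message format most editors can jump to directly,
rather than batching results into an email."""

import subprocess
import sys

def changed_files():
    # Ask git for files modified relative to HEAD (assumes a git repo).
    out = subprocess.run(
        ["git", "diff", "--name-only", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def main():
    files = changed_files()
    if not files:
        print("No changed Python files to analyze.")
        return 0
    # pyflakes is just a stand-in; any analyzer that reports
    # file:line: message works the same way here.
    result = subprocess.run(["pyflakes", *files])
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```

Wired into a save action or a pre-commit hook, a script like this puts findings in front of developers while the code is still open in their editor, which is exactly where the batch-and-email workflow falls down.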

These are just a couple of the ways you can run into trouble. I'm doing a webinar at Parasoft about this on September 30th; registration is free. Stop by and check it out if you get a chance.
