Tag Archives: process

SDLC Acceleration Summit

As I’m sure you’ve heard by now, several software companies, including Parasoft, are sponsoring an SDLC Acceleration Summit in San Francisco on May 13th, 2014. It’s a great place to go if you’d like to figure out how to produce better software more quickly. Sessions range from tools and processes to infrastructure and security, not to mention a great selection of top-notch speakers.

And if you’re a Code Curmudgeon fan, you can get a 50% discount. All you have to do is go to the registration page and enter CodeCurmudgeonVIP as the promotion code.

For more about it, watch the video below. Hope to see you there.

Development Testing – Is It Worth It?

I delivered a webinar yesterday as part of my day job at Parasoft. The topic was “Development Testing – Is It Worth It?”.

I talked about why Development Testing is useful, how it relates to process and policy, and how you can move from a reactive process of finding bugs to a proactive process of writing code that is resistant to bugs. As the old saying goes, an ounce of prevention is worth a pound of cure – or in this case, a week of debugging.
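To make “proactive” concrete, here’s a minimal sketch in Java – my own illustrative example with a hypothetical OrderProcessor class, not something from the webinar. Instead of waiting for a bad value to blow up somewhere deep in the system and debugging backwards, the code rejects bad input at the boundary, where the cause is obvious:

```java
import java.util.Objects;

// Reactive: accept anything, then spend a debugging session tracing the
// eventual NullPointerException or bad calculation back to its source.
// Proactive: validate inputs at the boundary so bad data fails fast,
// loudly, and close to where it was introduced.
public class OrderProcessor {

    public double discountedTotal(Double total, Double discountRate) {
        // Fail fast on inputs that would otherwise surface later as a
        // confusing defect far from the real cause.
        Objects.requireNonNull(total, "total must not be null");
        Objects.requireNonNull(discountRate, "discountRate must not be null");
        if (total < 0) {
            throw new IllegalArgumentException("total must be non-negative: " + total);
        }
        if (discountRate < 0 || discountRate > 1) {
            throw new IllegalArgumentException("discountRate must be in [0,1]: " + discountRate);
        }
        return total * (1 - discountRate);
    }
}
```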

The presentation is general – it isn’t designed to push any specific tools – so you should find it helpful whatever you use. I’ve made both the PowerPoint slides and the audio available below. You can usually access most of the past webinars and other video content on the Parasoft site.

Take a look for yourself. If you have any comments or suggestions, feel free to leave them in the comments, email me, or reach me on Twitter.

(PowerPoint) (MP3 audio)

Download (PPTX, 2.52MB)

The webinar invitation read:

Development Testing: Is it Worth It?

Development Testing is a lot like exercising and eating well: pretty much everyone agrees that it’s beneficial and should be done, but few actually achieve it in practice.

A rising number of organizations are flirting with Development Testing by giving developers a static analysis tool and hoping that they use it to prevent defects. This is not unlike packing some raw broccoli and spinach in your son’s lunch box and expecting his health to improve as a result. This approach to Development Testing inevitably fails to deliver the results that organizations have been able to achieve with a comprehensive, policy-driven Development Testing process: reduced risks while cutting development time and costs.

If you can’t bear the business risk associated with defects surfacing in your organization’s software, join our webinar—Development Testing: Is it Worth It?—to learn how to get the maximum risk reduction from your investment in Development Testing. After exploring the dangers of relying on static analysis alone and the top barriers to comprehensive Development Testing, you’ll learn how Parasoft’s comprehensive Development Testing platform can help you:

  • Consistently apply a broad set of complementary Development Testing practices—static analysis, unit testing, peer code review, coverage analysis, runtime error detection, etc. (a minimal unit-testing sketch follows this list)
  • Accurately and objectively measure productivity and application quality
  • Drive the development process in the context of business expectations—for what needs to be developed as well as how it should be developed
  • Gain real-time visibility into how the software is being developed and whether it is satisfying expectations
  • Reduce costs and risks across the entire SDLC
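Those bullets are abstract, so here’s a minimal sketch of the unit-testing practice from the first one, assuming JUnit 4 and reusing the hypothetical OrderProcessor from the earlier sketch. Run as part of every build and measured with coverage analysis, tests like these catch regressions the moment they’re introduced:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// A minimal unit test for the hypothetical OrderProcessor above.
// Running it in every build surfaces regressions immediately;
// coverage analysis then shows which paths remain untested.
public class OrderProcessorTest {

    @Test
    public void appliesDiscountToTotal() {
        OrderProcessor processor = new OrderProcessor();
        assertEquals(90.0, processor.discountedTotal(100.0, 0.10), 0.0001);
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsDiscountRateAboveOne() {
        new OrderProcessor().discountedTotal(100.0, 1.5);
    }
}
```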

The Wrong Tool

Did you ever buy something, only to find out that it just wasn’t quite right for you? I don’t mean the usual buyer’s remorse over a large purchase, like a new car. I mean you bought a sports car, and somehow missed the fact that you like to haul your motorcycle to the desert on weekends. Oops!

Not surprisingly, you’ll find people do this frequently with small purchases – apps for your phone, for example. You’re hoping for a specific utility; you read a description, it sounds right, so you buy it. It might even seem to work OK in simple tests. This happened to me recently with a small external microphone I bought for my smartphone to do audio recording. It worked for a couple of minutes, but when I tried to actually use it, the audio was garbled or non-existent for much of the recording. Argh!

Frequently, this is exactly what happens when people decide to buy development tools. They take advice from someone who has used the tool individually, or in a limited environment. When they try to test the tool, perhaps in a pilot program, everything appears fine. Then when deployment begins, so do the problems. False positives, configuration problems, poor workflow… the list is seemingly endless and sadly too familiar.

What happens is that the selection process for the tool is inadequate. Most POCs (proofs of concept) that I see are really simple bake-offs. Someone has an idea in mind of what they think a tool should do, and they create the good old checklist of features. Sometimes this is done with the help of a single vendor – a recipe for disaster. Other products are categorized against the checklist rather than looked at holistically to see what else they have to offer.

In addition, this methodology fails to take into account the biggest costs and the most likely hurdles to success. To select the right tool, you have to take into account how it will work in your world.

If, for example, your developers spend their days in Eclipse and you select a tool that doesn’t run in Eclipse, you force them to spend time opening a second tool and possibly dealing with extraneous configuration. Not to mention that when they get the results, the results aren’t in the place they’re needed – the code editor.

Such issues compound across time and people, carrying a tremendous burden with them. For example, about 10 years ago people became enamored with the idea of doing batch testing for things like static analysis and then emailing the results back to developers. While this may be the simplest way to set up static analysis, it’s nearly the worst way to deal with the results. You don’t need error messages in your email client; you need them in your editor. (See my earlier post on What Went Wrong with Static Analysis?)
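To make the point concrete, here’s the kind of finding I mean – my own illustrative example in Java, not tied to any particular tool. Most Java static analyzers will flag the resource leak in the first method; whether it actually gets fixed depends largely on whether the warning shows up in your editor or in a stale email digest:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ConfigLoader {

    // Typical static analysis finding: if readLine() throws, the reader
    // is never closed. In a batch-and-email workflow this report arrives
    // days later; in the editor it's flagged while the code is still
    // in front of you.
    public String firstLineLeaky(String path) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        String line = reader.readLine(); // leak if this throws
        reader.close();
        return line;
    }

    // The fix is trivial when the finding is right next to the code:
    // try-with-resources guarantees the reader is closed.
    public String firstLine(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }
}
```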

These are just a couple of the ways you can run into trouble. I’m doing a webinar at Parasoft about this on September 30th; registration is free. Stop by and check it out if you get a chance.

Remembering a friend and luminary

Adam Kolawa (1957 - 2011)

Earlier this year my longtime friend/boss/partner/hunting buddy Adam Kolawa died. We had worked together since 1992, before the internet went commercial. Over those nearly twenty years I learned a lot from Adam about software and testing, as well as other things.

Adam had a strong vision about what could be done with software. He was a very logical, technical person who believed that the way software is created could be improved greatly. I remember learning this early on. I started at Parasoft doing database work and tech support. We had this really cool parallel processing software called Express. With it you could run software on a heterogeneous network of machines – say, an IBM machine running AIX alongside a Sun machine running SunOS, with even a Digital machine running Ultrix added in. Needless to say, the setup of such software could be complicated.

At one point I realized that many of the same questions were coming to us over and over again, so I put together one of those funny FAQ things that you saw with open-source software. I carefully listed the basic installation and configuration problems that might occur, with steps to handle them. I was so proud of myself and showed it to Adam. His response was true to his nature. He said “Great, now make it go away.” While he was a strong proponent of good documentation (PhDs are like that…), he felt that such information should be unnecessary. So he guided me to go and fix the software so that as many of the problems on the list as possible would be handled directly by the software.

This principle has guided all the innovations that have come from Parasoft since that time. Parallel processing technology morphed into memory tracking and bug-finding, always on the quest to create better software, more quickly, with less effort.

Along the way Adam wrote numerous papers, articles, and even a few books. The most notable are Automated Defect Prevention: Best Practices in Software Management and The Next Leap in Productivity. The former should be required reading for anyone trying to run a software development organization. The latter is an eye-opening look into not only improving IT but turning it into an asset rather than a cost-center.

Adam was instrumental in nearly all the patents generated at Parasoft. He had a very out-of-the-box way of looking at problems and coming up with new, unique solutions. I always attributed this, at least in part, to his physics background – why shouldn’t a man comfortable with estimating the weight of the universe feel like he can generate test cases automatically by having a parser read some source code?

I mention all this partly because it’s cathartic for me, but also because STP – Software Test Professionals – is currently holding an open vote for Test Luminary of the Year. Take a chance: go read Adam’s bio, and if you think as I do that he made a lasting impact on the software industry, why not vote for him? It’s a fitting legacy for a man who dedicated his adult life to improving the software development process.