I've been working in Software Development at Parasoft since 1992 - which in my opinion is before the epoch (my measure being the first real use of the web). I've been deeply involved in creating software, creating software tools, and helping customers address their software problems in areas including automotive, cybersecurity, and embedded systems.
The views and opinions expressed herein are those of the author and do not necessarily reflect the views of anyone else on the planet. Caveat lector.
You can follow me on Twitter @CodeCurmudgeon, Google+, Static Analysis for Fun and Profit, Facebook, and LinkedIn.
I spoke at the Quest 2012 conference in Chicago last week. The topic was “How To Optimize Your Existing Regression Testing”.
My presentation covers some very practical and pragmatic tips for dealing with regression testing, especially if you have legacy test suites. It’s not product centric, so it should be helpful to anyone working with regression testing. As much as possible I’ve tried to keep the suggestions abstract rather than tied to a developer’s perspective.
Take a look for yourself. If you have any comments or suggestions, feel free to leave them in the comments, email me, or reach me on Twitter.
As part of my ongoing efforts to keep up with which source control (SCM, or Software Configuration Management) systems people are using, I’ve put up a poll. I see there is a lot of talk these days about Git. If you’re using it, let me know what kind of project it’s for: business or open source?
For those of you still using more than one system, you can either pick the one you use most or select up to three answers.
I’ve got a pretty comprehensive list, but there is a place to add one I’ve missed. If there’s anything you want to add, sound off in the comments or via Twitter.
Lately some have been suggesting that the internet is at risk. Much if not all of the hoopla stems from a recent interview with Sergey Brin from Google (GOOG). Brin says the biggest threats come from government crackdowns, attempts to control piracy, and “the rise of ‘restrictive’ walled gardens such as Facebook and Apple, which tightly control what software can be released on their platforms.”
If you look at the arguments, they essentially break down to “If Google can’t spy on your every behavior, then the internet will collapse.” This is because information inside applications that aren’t web-based can’t be crawled by web crawlers, and user behavior inside those applications can’t be monitored either.
It sounds pretty ridiculous, when you think about it. People have been using applications for years on the desktop. Some of them are local to the desktop, others reach out and use the cloud (what we used to call the net, before that the internet, before that it was the network). Applications were, and continue to be, a combination of proprietary software, commercial software, freeware, and open-source models. What applications have usually NOT been on the desktop is ad-supported.
Much of the web has evolved into an old broadcast-style model, i.e., advertising supports content. I know some will argue that the web “changes everything”, but think about it. The idea of having to put up with ads to get your news fix is nothing new at all. This is an old argument: is it better to have “free” content supported by ads, or paid content without advertising? In the modern era, we go beyond simple advertising as well. In addition to the cost of having to look at ads, people are giving up their privacy and allowing advertisers to monitor their behavior. The rationalization is that this saves them some money.
Again, it’s an old argument that is not going to be settled here, and I suspect won’t be settled at all. I prefer a world where you can choose whether or not you want ads, and pay for the content you get, or deal with advertising. Let the consumer choose. Personally, I don’t mind paying for software and content, like Netflix over Hulu. I prefer that over dealing with ads, even before the whole privacy issue came into play. But others feel differently and I don’t have a problem with that as long as I’m not forced down the same path.
What Brin is really saying is “If Google can’t spy on you, then advertising breaks down, and without advertising, the internet breaks down.” I don’t buy it. At all. If suddenly all advertising-centric services were forced to simply serve up ads without regard to my exact movements, it would definitely have an effect on the bottom line of those serving up the ads. But advertising would go on. Don’t believe me? Turn on your television… see any advertising? Do they know who you are? Do they know what channel you just watched? Do they know that you called your mom during the show? Nope, and they don’t care. Actually they DO care, they’d love to have that information about you. But in the absence of that information, life goes on.
Google tries to obfuscate the issue by saying they’re against “Walled Gardens”. Of course they never address the fact that all traditional computing is “walled” in the sense that Google has no idea what you’re doing. But somehow that’s OK, while if you use the same software on a tablet, it means death for the Internet. Ridiculous. There is in fact considerable disagreement over whether Google itself has a walled garden.
What it really means is that if strong privacy protections are put in place, Google will have to change or it will collapse, because they have no edge in selling ads over anyone else. That I believe.
NIST runs the Static Analysis Tool Exposition (SATE), whose stated goals are to:
Enable empirical research based on large test sets
Encourage improvement of tools
Speed adoption of tools by objectively demonstrating their use on real software
I find SATE interesting because it takes a couple of different approaches that are pretty useful to people trying to understand what static analysis can and cannot do. One approach is to have several full-fledged applications with known bugs, and versions of the applications with those bugs fixed. These have the effect of showing what static analysis tools can do in the real world. Unfortunately, they don’t help much when trying to find out what kinds of issues static analysis can handle overall.
To do that, NIST has developed a test suite that has thousands of test cases with specific issues in them. Part of SATE is running various tools on the test applications and test suites, and then trying to analyze what they can find, how much noise they produce, etc. It’s an interesting exercise. You should check it out.
This year I was privileged to give a presentation myself. I wanted to talk about some of the pragmatic aspects of actually trying to use static analysis in the real world. To that end, I created a slide show around the top 10 user mistakes, meaning things that prevent organizations from realizing the value they expected or needed from static analysis. These range from improper configuration to poor policy to dealing with noise.
Take a look for yourself. If you love or hate any of them, let me know. If you have others I missed, feel free to mention them in the comments, email me, or reach me on Twitter.