Does Cloud Change Static Analysis?

Look, we all know that using static analysis tools can be a real pain. In the past I’ve talked about some of the reasons people struggle with the output of AppSec tools, and people struggle with static code analysis in much the same way. I even ran a poll about static analysis challenges at one point.

From the feedback I’ve gotten, it seems some people think that doing static analysis via SaaS (i.e. the cloud) would address the problems I’ve discussed. There are real challenges in getting the most out of static analysis, but the claim that the cloud will somehow solve them is marketing hype – why would anything change? Why should developers even be able to tell the difference? Moving the tool doesn’t address any of the core issues. There are genuine benefits to running static analysis in the cloud, aka Static Analysis as a Service (SAaaS?), such as lower up-front costs, reduced IT overhead, and easy deployment. But the most common problems are the same whether you run the tool in-house or use a service.

The core problem I mentioned was getting developers to buy in. They need to believe in the results because they are the ones who have to fix them. Once developers start picking and choosing what to fix, you’ve lost. They’ll spend countless hours challenging results and explaining why they’re not important – the inappropriately labeled “false positive”. Changing where or how you run static analysis may have ramifications for the overall process, but it will in no way change how developers perceive the results.

Getting the static analysis tool running is one of the first steps in a successful rollout, but from there you’ve got to do several things to make sure that you’ll get the value you expect.

Static Analysis Policy

It begins with having a clear static analysis policy. The policy should spell out when static analysis must be run and when it can be skipped. It also needs to cover when suppressions are acceptable, how severity level determines fix-now vs fix-later, which rules you must run, and how to handle legacy code. Legacy code is one of the big problems – do you fix everything in the codebase regardless of age? Do you fix old files only when you happen to be editing them? Should you run static analysis only on the lines you actually change in old files? These are real questions that will come up when you deploy, and if you don’t decide what’s proper, each developer will do their own thing. To make that last option concrete, there’s a small sketch below.
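Here’s a minimal sketch of the “only analyze what you change in legacy files” policy, in Java. The Finding and ChangeSet types are hypothetical stand-ins for illustration, not any particular tool’s API:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class LegacyPolicy {

    /** Illustrative finding type – not a real tool's API. */
    public record Finding(String file, int line, String ruleId, String severity) {}

    /** Illustrative view of the current change set. */
    public interface ChangeSet {
        boolean isLegacy(String file);          // e.g., file predates the policy cutoff
        Set<Integer> changedLines(String file); // line numbers touched by this commit
    }

    /** Keep every finding in new code; in legacy files, keep only findings on changed lines. */
    public List<Finding> applyPolicy(List<Finding> findings, ChangeSet changes) {
        return findings.stream()
            .filter(f -> !changes.isLegacy(f.file())
                      || changes.changedLines(f.file()).contains(f.line()))
            .collect(Collectors.toList());
    }
}
```

Whichever answer you pick matters less than writing it down – the filter above is just one possible policy made executable so it isn’t reinvented per developer.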

Training

Developers need to be trained to use static analysis. People usually remember to train on the mechanics of the tool, but not on the further training that ensures success. Developers need to know when and how to suppress – does the suppression go into the code or into an external system? They need to know how to find more information about a problem. They need to understand what the severity levels mean and how those levels should affect their decisions.

It’s also important that they understand the ramifications of a particular error. I’ve repeatedly had the experience of a team claiming a static analysis error was invalid when it was actually a real, serious problem they didn’t understand. Heartbleed is a classic example of this behavior. Finally, your training needs to shift the mindset of the users from “static finds bugs” to “static finds bad code”. This distinction is crucial to getting the most value. The “bugfinder” rules in static analysis are the proverbial tip of the iceberg – only a small part of the full value. The bigger value is the rich set of coding standards, representing hundreds of man-years of crafting best practices, that help you harden your code and avoid problems in the first place.
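To illustrate the distinction with a hedged example (not taken from any specific rule set): the Java class below has no defect a tester would file today, yet rules in the spirit of FindBugs’ EI_EXPOSE_REP flag it, because it leaks mutable internal state and invites defects later.

```java
import java.util.Date;

public class Account {
    private Date created = new Date();

    // Not a "bug" yet, but flagged by expose-internal-representation rules:
    // the caller receives the real mutable Date and can silently change our state.
    public Date getCreated() {
        return created;
    }

    // The hardened version most coding standards recommend: a defensive copy.
    public Date getCreatedSafe() {
        return new Date(created.getTime());
    }
}
```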

Persistent Static Analysis Suppressions

Suppression handling can make or break your project. The symptom is developers saying things like “I keep fixing the same things.” What they mean isn’t that they keep fixing them, but that they keep seeing the same violation and tagging it as invalid or acceptable every time the tool runs or the version changes. They understandably conclude the tool is stupid.

There are two schools of thought on suppressions. One is that they belong in an external system, whether that’s the static analysis tool itself, a file in source control, or a spreadsheet. The other is that they belong in the code. There are advantages to both, but I prefer the “suppressions in code” method. The main benefit is that old suppressions never come back, because they’re tightly coupled to the code. A secondary benefit is that the suppressions end up in source control: you’ll know who made them and when, and if they left a comment you’ll even know why. This is really important if you operate in a compliance environment like FDA, aerospace (DO-178B/C), or automotive (ISO 26262).
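As a sketch of what “suppressions in code” can look like in Java – the exact mechanism varies by tool; some honor annotations like the standard @SuppressWarnings, others define their own comment markers:

```java
public class ReportGenerator {

    // The in-code suppression travels with the code, survives tool re-runs
    // and version upgrades, and the commit records who added it and when.
    @SuppressWarnings("unchecked") // "unchecked" is the standard javac ID;
                                   // analyzers typically use their own rule IDs
    public <T> T deserialize(Object raw) {
        // Reviewed: raw is produced by our own serializer, so the unchecked
        // cast is known-safe here. (This "why" comment is what makes the
        // suppression auditable later.)
        return (T) raw;
    }
}
```

The commit that introduces the suppression captures who, when, and (via the comment) why – exactly the audit trail a compliance regime asks for.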

Good documentation

I’ve alluded to the idea that the docs need to explain why a particular static analysis rule is important. Here’s what I look for in good tool documentation (a minimal bad/fixed illustration follows the list):

  • Example bad code
  • Example fixed code
  • Impact – what will happen if you don’t fix this violation
  • Possible security relevance
  • Resources to learn more
  • IDE integration – right-click on a violation to see the docs
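As a sketch of what the “example bad code / example fixed code” pair might look like in rule documentation – here for a hypothetical SQL injection rule, using plain JDBC:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserDao {

    // Example bad code: string-built SQL.
    // Impact: user-controlled input flows straight into the query (SQL injection).
    public ResultSet findUserBad(Connection conn, String name) throws SQLException {
        Statement st = conn.createStatement();
        return st.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
    }

    // Example fixed code: a parameterized query neutralizes the input.
    public ResultSet findUserFixed(Connection conn, String name) throws SQLException {
        PreparedStatement ps = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        ps.setString(1, name);
        return ps.executeQuery();
    }
}
```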

Summary

Getting your static analysis rollout right is crucial to your success. There are many options, from on-premise to cloud-based, and you should carefully weigh the benefits of each approach. But don’t expect the cloud to solve all the challenges you’ll face. There is no substitute for a well-planned tool deployment.
