All posts by Code Curmudgeon

I've been working in Software Development at Parasoft since 1992, which in my opinion is before the epoch (my measure being the first real use of the web). I've been deeply involved in creating software, creating software tools, and helping customers address their software problems in areas including automotive, cybersecurity, and embedded. The views and opinions expressed herein are those of the author and do not necessarily reflect the views of anyone else on the planet. Caveat lector. You can follow me on Twitter @CodeCurmudgeon, Google+, Static Analysis for Fun and Profit, Facebook, and LinkedIn.

Get Started with Free Service Virtualization

Free service virtualization – sounds great! Whenever you hear "free" you should get nervous; I know that I do. After I wrote this title I looked at it and immediately hated it. But here's the thing – at my day job at Parasoft we've just taken one of our really great products, Parasoft Virtualize, and made a free "community edition" version of it.

So who needs this and why should you care about it? Well, software applications have gotten a lot more complex in the last decade. Time was, you had a simple monolithic desktop application and that's all you had to worry about. Some of them had a little connectivity, like to a database, or maybe simple external dependencies, but mostly they stood on their own. Today's "applications" look more like systems or even systems-of-systems. It's not uncommon to have a relatively small core application surrounded by a plethora of dependencies: databases, cloud APIs that provide data, shipping services, payment services, and even connections to physical devices in the real world – the Internet of Things (IoT).

That's where "service virtualization" technology comes in. I know, I know, it's a horrible name and it's already caused you to think it's something other than what it is. Nothing I can do about that – the name is in use by the analysts and I have no control over it. I think of it more as "communication emulation", in that it emulates the communication. Think of it this way: instead of APIs linked into an application as part of the compilation process, we now have services that are accessed live and dynamically – meaning we talk to them and they talk to us. Even in the IoT world of SmartHome or SmartFactory or SmartCity, it's all about pieces talking to each other. This gives us remote info, remote control, and even some degree of autonomous decision making – like the Nest thermostat. Initially I used the app to control the thermostat to my liking; now it just figures out what I was doing and mostly does it for me.

Testing these kinds of systems is a huge pain. You need a test lab that has one of everything you're connecting to. If you're updating some of them, then you need a lab with the old one AND the new one – like a new version of Oracle or MySQL. Setting up the lab costs time and money, and then I have to fight with other teams to use it. Service virtualization lets me make fake (virtual) versions of the things I depend on, and then test against them instead of needing the real thing.

This not only makes testing faster/easier/cheaper, it frees IT to do other important things. Plus I can make these virtual things behave however I want – if I want them to flood the network, they will. If I want them to be fast or slow to respond, I can do that. If I want one of them to be a bad actor and pretend it's been compromised, no problem. My testing will be more thorough as well as easier.
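To make that concrete, here's a minimal hand-rolled sketch of the idea in plain Java: a fake payment service whose latency and failure behavior you can dial up or down. To be clear, this is just an illustration of the concept, not how Parasoft Virtualize does it – the endpoint, port, and response payload are all invented for the example.

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    // A minimal hand-rolled "virtual service": a fake payment endpoint whose
    // latency and failure behavior are configurable from two constants.
    public class FakePaymentService {
        static final long RESPONSE_DELAY_MS = 2_000;  // simulate a slow dependency
        static final boolean SIMULATE_OUTAGE = false; // flip to rehearse failures

        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/payments", exchange -> {
                try {
                    Thread.sleep(RESPONSE_DELAY_MS); // fast or slow -- your choice
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                int status = SIMULATE_OUTAGE ? 503 : 200; // 503 = pretend we're down
                String json = SIMULATE_OUTAGE
                        ? "{\"error\":\"service unavailable\"}"
                        : "{\"status\":\"APPROVED\",\"id\":\"TEST-001\"}";
                byte[] body = json.getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(status, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
            System.out.println("Fake payment service at http://localhost:8080/payments");
        }
    }

Point the application under test at http://localhost:8080/payments instead of the real payment provider, flip the two constants, and you can rehearse the timeouts and outages that would be painful to stage in a shared lab.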

Once you realize that service virtualization technology is for you, the next step is to choose a tool. Lots of people instantly go check open-source, because of course it "doesn't cost anything". I've done a pretty thorough check of the open-source SV tools, and at the moment they're only really useful if your whole world is centered on HTTP/HTTPS. Even then you'll miss lots of other features, like a UI to create, manage, and deploy the virtual assets. So now that Parasoft has created a free version, why not see what commercial software offers you? You can download it here.

Try it, you’ll like it.

Does Cloud Change Static Analysis?

Look, we all know that using static analysis tools can be a real pain. In the past I've talked about some of the reasons people struggle with the output of AppSec tools. Similarly, people struggle with using static code analysis. I even did a poll about static analysis challenges at one point.

From the feedback I've gotten, it seems that some people think that doing static analysis via SaaS (i.e., the cloud) would address the problems I've discussed. There are real challenges in getting the most out of your static analysis, but the claim that cloud will somehow solve them is ridiculous marketing hype – why would anything change? Why should developers even be able to tell the difference? It doesn't address any of the core issues. There are benefits you can get from using cloud for your static analysis, aka Static Analysis as a Service (SAaaS?), such as reduced up-front costs, saved IT costs, and easy deployment. But the most common problems are the same whether you run the tool in-house or use a service.

The core problem I mentioned was really getting developers to buy in. They need to believe in the results because they're the ones who have to fix them. Once the developers start picking and choosing what to fix, you've lost. They'll spend countless hours challenging results and explaining why they're not important – the inappropriately labeled "false positive". Changing where or how you run static analysis may have ramifications for the overall process, but it will in no way affect how developers perceive the results.

Getting the static analysis tool running is one of the first steps in a successful rollout, but from there you’ve got to do several things to make sure that you’ll get the value you expect.

Static Analysis Policy

It begins with having a clear static analysis policy. The policy should include when static analysis must be run and when it can be skipped. It also needs to cover when suppressions are acceptable, how severity level affects fix-now vs. fix-later, which rules you must run, and how to handle legacy code. Legacy is one of the big problems – do you fix everything in your code regardless of age? Can you fix things just when you happen to be editing one of the old files? Should you only run static analysis on the code you actually change in old files? These are real issues that will come up when you deploy, and if you don't decide what's proper, each developer will do their own thing.
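To make "decide what's proper" concrete, here's a hypothetical sketch of a policy encoded as an explicit build gate, so the decision happens once instead of per developer. The Finding record, the severity scale, and the legacy cutoff date are all invented for illustration – map them onto whatever your tool actually reports.

    import java.time.LocalDate;

    public class StaticAnalysisPolicy {
        // Hypothetical finding shape -- map onto whatever your tool reports.
        record Finding(String ruleId, int severity, String file, LocalDate fileLastModified) {}

        // The policy, written down once (1 = most severe on this invented scale):
        static final int FIX_NOW_SEVERITY = 1;                    // sev 1 always blocks the build
        static final LocalDate LEGACY_CUTOFF = LocalDate.of(2015, 1, 1);

        /** A finding blocks the build if it's severe enough, unless it sits in
         *  untouched legacy code -- legacy gets cleaned up only when edited. */
        static boolean blocksBuild(Finding f, boolean fileEditedInThisChange) {
            boolean legacy = f.fileLastModified().isBefore(LEGACY_CUTOFF);
            if (legacy && !fileEditedInThisChange) {
                return false; // fix it when you touch it, not before
            }
            return f.severity() <= FIX_NOW_SEVERITY;
        }

        public static void main(String[] args) {
            Finding fresh  = new Finding("SECURITY.INJECTION", 1, "src/NewCode.java", LocalDate.now());
            Finding legacy = new Finding("SECURITY.INJECTION", 1, "src/OldCode.java", LocalDate.of(2010, 6, 1));
            System.out.println("new code blocks build:    " + blocksBuild(fresh, true));   // true
            System.out.println("legacy code blocks build: " + blocksBuild(legacy, false)); // false
        }
    }

The point isn't these particular thresholds; it's that fix-now vs. fix-later and the legacy question each have exactly one answer, written down where everyone can read it.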

Training

Developers need to be trained to use static analysis. People usually remember to train on the mechanics of the tool, but not on the further practices that ensure success. Developers need to know when and how to suppress – does the suppression go into the code or into an external system? They need to know how to find more information about a given problem. They need to understand what the severity levels mean and how they affect their decisions.

It's also important that they understand the ramifications of a particular error. I've repeatedly had the experience of a team claiming a static analysis error was invalid when it was actually a real, serious problem they didn't understand. Heartbleed is a classic example of this behavior. Finally, your training needs to shift the mindset of the users from "static analysis finds bugs" to "static analysis finds bad code". This distinction is crucial to getting the most value. The "bugfinder" rules in static analysis are the proverbial tip of the iceberg – only a small part of the full value. The bigger value is the rich set of coding standards, representing hundreds of man-years of crafting best practices, that help you harden your code and avoid problems in the first place.

Persistent Static Analysis Suppressions

Suppression handling can make or break your project. The symptom of trouble is developers saying things like "I keep fixing the same things." What they mean isn't that they're fixing them, but that they keep seeing the same violation and tagging it as invalid or acceptable every time the tool runs or versions change. They understandably view this as a stupid tool.

There are two schools of thought on suppressions. One is that they belong in an external system, whether that's the static analysis tool itself, a file in source control, or a spreadsheet. The other is that they belong in the code. There are advantages to both, but I prefer the "suppressions in code" method. The benefit is that old suppressions never come back, because they're tightly coupled to the code. A secondary benefit is that suppressions end up in source control, so you'll know who made them and when, and if they left a comment you'll even know why. This is really important if you operate in a compliance environment like the FDA's, DO-178B/C for aircraft, or ISO 26262 for automotive.
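What suppressions-in-code actually look like depends on the tool. Here's a generic Java sketch using the standard @SuppressWarnings annotation plus a hypothetical in-line comment marker – check your own tool's documentation for its real syntax.

    public class LegacyParser {
        // Standard Java mechanism: suppress one specific category on the
        // narrowest possible scope -- a method, never the whole class.
        @SuppressWarnings("unchecked")
        static java.util.List<String> readCache(Object raw) {
            return (java.util.List<String>) raw; // safe: the cache only ever stores Strings
        }

        // Many tools also accept an in-line suppression comment. The marker below
        // is hypothetical -- check your tool's docs for its actual syntax. Note
        // that the who/when/why lands in source control along with the code.
        static int checksum(byte[] data) {
            int sum = 0;
            for (byte b : data) {
                sum += b; // suppress OVERFLOW.CHECK: wraparound is intended here (jsmith, 2018-03-02)
            }
            return sum;
        }
    }

Because the markers live next to the offending lines, they move with the code through refactorings and show up in blame and history – exactly the audit trail a compliance regime wants.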

Good documentation

I’ve alluded to the idea that the docs need to explain why a particular static analysis rule is important. I’ve got several things I look for in good tool documentation.

  • Example bad code
  • Example fixed code
  • Impact – what will happen if you don’t fix this violation
  • Possible security relevance
  • Resources to learn more
  • IDE integration – right-click on a violation to see the docs

Summary

Getting your static analysis rollout right is crucial to your success. There are many options, from on-premise to cloud-based, and you should carefully weigh the benefits of each approach. But don't expect the cloud to solve all the challenges you'll face. There is no substitute for a well-planned tool deployment.


Top 10 Ways to Spot a Cybersecurity Expert

When you're looking for a cybersecurity expert it's important to be able to spot who knows what they're doing and who doesn't. Well, in this case the title of the post is a bit of click-bait. Got you, didn't I? This is really about how to spot someone who is NOT a cybersecurity expert. I probably should have titled it Ten Ways to Spot a Cybersecurity Fake. Let's take a serious topic and have a bit of fun at the same time. Here's the list.

#10 – Mobile phone is more than a year old
You just can't push updates to old phones. Unfortunately this is as true for security patches as it is for bug fixes. If you want to be secure you've got to keep your phone patched, and to keep it patched you've got to have current hardware. In the smartphone world, that means a phone less than 12 months old. An "expert" who carries a crappy old phone isn't paranoid enough for me.
#9 – Still carrying a Blackberry
The internet age moves fast and you have to keep up. Blackberry is a bit of a dinosaur, and you're just not getting the latest features and fixes that you get from more agile vendors. Avoid dinosaurs when looking for technical help; they simply won't be aware of the latest threats and will rely on outdated models of security.
#8 – Wears a suit
In the IT industry nothing says sales rep like a suit does. Now this person might understand the need and value of enhanced cybersecurity, but they don’t know what you really need to do. If they’re not a sales rep, then they’re probably just a dinosaur, because tech people don’t wear suits anymore. See above.
#7 – Wears a tie
Do I really have to explain? Have you ever met someone who really got cybersecurity who was wearing a tie? See above. (Sorry Kevin – you’re the exception. You rock the cravat.)
#6 – Uses open wifi
Any security professional worth their salt is deathly afraid of open wifi. It doesn’t matter if it’s a hotel, a coffee shop, or an airport. Cyberpeople carry their own internet in their pocket.
#5 – Never uses cash
Between the Target hack and ATM skimmers at the gas pump, a healthy dose of paranoia when it comes to credit cards is a good idea. I’ve gone back to using cash a lot more than I used to and you should too.
#4 – Thinks eight characters is enough for a password
Seriously, rainbow tables, people. If your password hash is leaked in a data breach, it can take as little as a couple of milliseconds to crack an 8-character password. If they don't know this, then their knowledge is years out of date. (See the quick arithmetic after this list.)
#3 – Thinks funny characters you won't remember are good for passwords
I'm sorry, but *#*%^)-} isn't a great password. You'll never remember it, so you'll write it down, and anyway it's in a rainbow table, so it's not much better than 12345. You're better off with an unbelievably long password you can remember that has a few funny tweaks than 8 characters of gibberish.
#2 – Doesn’t wear glasses
Anyone spending their life on a computer has killed their eyes. If they're not spending their life on the computer, they're not passionate enough. You want someone who prefers the internet to real life. To paraphrase Orwell: four eyes good, two eyes bad.
#1 – Doesn’t use the command line
Everyone with a hacker mentality uses the command line, regardless of operating system. Anyone without a hacker mentality isn’t qualified to be working in cybersecurity.
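To put some numbers behind #4 and #3, here's the promised arithmetic. The guess rate is an illustrative assumption for a fast, unsalted hash on GPU hardware, not a benchmark – and a precomputed rainbow table makes the 8-character case effectively instant anyway.

    public class PasswordKeyspace {
        public static void main(String[] args) {
            // Assumed guess rate for a fast, unsalted hash on GPU hardware --
            // an illustrative round number, not a benchmark.
            double guessesPerSecond = 1e11;

            double eightChar   = Math.pow(95, 8);  // 8 chars from ~95 printable ASCII: ~6.6e15
            double twentyLower = Math.pow(26, 20); // 20 lowercase letters: ~2.0e28

            System.out.printf("8 random printable chars: %.1e combos, ~%.0f hours to exhaust%n",
                    eightChar, eightChar / guessesPerSecond / 3600);
            System.out.printf("20 lowercase letters:     %.1e combos, ~%.1e years to exhaust%n",
                    twentyLower, twentyLower / guessesPerSecond / (3600.0 * 24 * 365));
        }
    }

Run it and you get roughly 18 hours to exhaust the 8-character space versus billions of years for the long one. Length beats funny characters.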

I warned you up front we were going to have some fun with this, and hopefully you did. But in reality some of these tips will help you vet your cybersecurity expert. Even just tossing some of the terms above at them to see how they respond may tell you something. If they use a term you don't know, make them explain it – if they can't explain it, they probably don't understand it very well.

If you don’t know enough to tell a real expert from a fake, get help from someone you can trust, and stay safe out there!

Open-Source Project Activity Demystified

Open-source projects are spread across a wide spectrum of maturity and activity. When choosing to use open-source, it's important to select a project that has lots of active contributors and recent development, unless you're expecting to take on the project's development yourself.

You can determine project activity by looking at statistics such as those GitHub provides. Often projects are started by a single individual who has a particular problem they want or need to solve. Once the software is "working", the project can stagnate. A select few projects reach a critical mass where multiple contributors work to keep the project up to date, fix bugs, add features, and create a large, useful, popular project.

Open-source activity basics

Here we will compare a small, semi-active project, Netflix curator, with an active, popular one, Angular.js, to see how you can tell the difference. First, there are three basic statistics at the top of every GitHub project: Watch, Star, and Fork.

Watch is the number of people who have added the project to their watchlist. Watching gives them updates about the project, so it indicates how many people care about changes to the code rather than just using the project.

Star is the number of people who find a project interesting and want to indicate that. It also adds a bookmark for favorite projects.

Fork is the number of people who have cloned the repository with the intention of adding their own changes to it. Often such people never actually contribute, but it still shows a level of interest in contributing.

Notice that the very popular and active Angular.js project has over ten times as many watchers as Netflix curator. As for Forks, Angular.js has an even bigger margin over Netflix curator – almost one thousand times as many forks.
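Incidentally, you don't have to eyeball the web page for these numbers – GitHub exposes them through its REST API. Here's a minimal sketch using the two repositories from this comparison (unauthenticated requests are rate-limited, and the crude regex parsing is only to keep the example free of dependencies):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Fetch Watch/Star/Fork counts straight from GitHub's REST API.
    public class RepoStats {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            for (String repo : new String[] {"angular/angular.js", "Netflix/curator"}) {
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("https://api.github.com/repos/" + repo))
                        .header("Accept", "application/vnd.github+json")
                        .build();
                String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
                System.out.printf("%-20s stars: %s, forks: %s, watchers: %s%n", repo,
                        extract(body, "stargazers_count"),
                        extract(body, "forks_count"),
                        extract(body, "subscribers_count")); // "Watch" in the UI
            }
        }

        // Crude field extraction to keep the sketch dependency-free; use a real
        // JSON library for anything beyond a throwaway script.
        static String extract(String json, String field) {
            Matcher m = Pattern.compile("\"" + field + "\"\\s*:\\s*(\\d+)").matcher(json);
            return m.find() ? m.group(1) : "?";
        }
    }

One wrinkle worth knowing: the API's legacy watchers_count field actually mirrors stars; subscribers_count is the number the UI calls "Watch".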

Contributors

A second place to look is the "Graphs" tab, which shows information about contributors, frequency of code changes, and more in graphical form. The graphs below show the contributors to each project.

[Graph: Angular.js top contributors]

Notice that the top 4 contributors to Angular.js each have tens of thousands of commits. The list of significant contributors is quite large, which not only provides a wealth of ideas for new features but also reduces risk when a contributor leaves the project.

In contrast, the commit counts for the top 4 contributors to Netflix curator quickly drop to fewer than 100 – again a difference of almost one thousand times. If the main contributor leaves, or grows bored and moves on to something else, the project is completely stagnant – if you want anything, you'll need to do it yourself.

[Graph: Netflix curator top contributors]
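The data behind these contributor graphs is available programmatically too, via the /contributors endpoint. Another minimal sketch, with pagination and authentication omitted:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // List a repository's top contributors and their commit counts via the
    // GitHub REST API -- the data behind the "Graphs" tab.
    public class TopContributors {
        public static void main(String[] args) throws Exception {
            String url = "https://api.github.com/repos/angular/angular.js/contributors?per_page=5";
            HttpClient client = HttpClient.newHttpClient();
            String body = client.send(
                    HttpRequest.newBuilder().uri(URI.create(url)).build(),
                    HttpResponse.BodyHandlers.ofString()).body();
            // Pull out login + contributions pairs with a regex to stay
            // dependency-free; a real script would use a JSON parser.
            Matcher m = Pattern.compile(
                    "\"login\"\\s*:\\s*\"([^\"]+)\"[\\s\\S]*?\"contributions\"\\s*:\\s*(\\d+)")
                    .matcher(body);
            while (m.find()) {
                System.out.printf("%-20s %s commits%n", m.group(1), m.group(2));
            }
        }
    }

Swap in any owner/repo pair to run the same health check on a project you're evaluating.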

Code change frequency

Next we can look at the frequency of code changes. Netflix curator exhibits the common tendency for a project to stagnate once it has the basics of the desired functionality from the single original contributor.

[Graph: Netflix curator code update frequency]

A larger set of contributors with more ideas and free time helps to keep a project vibrant as you can see with the Angular.js project. Studies have shown that larger and more complex open-source projects tend to attract more developers.

[Graph: Angular.js code update frequency]

Network / Project forks

Finally, we can check the network graph to see how many people are forking the project and doing something new with it – a telling indicator of how many people are really interested in the project and want to build on it. Note that Netflix curator has only a couple of forks, which were never merged back in,

[Graph: Netflix curator network forks]

while the Angular.js project has too many forks to display.

[Graph: Angular.js network forks – too many to display]

At any given time you can quickly see which repositories are most active by checking https://github-ranking.com, and an explanation of the GitHub statistical graphs is available at https://help.github.com/articles/about-repository-graphs/.

While most typical open-source projects won’t make the most-popular list, doing a bit of investigation into the health of an open-source project can help make sure that the code you’re using will be maintained and updated to keep up with emerging technologies for years to come.