Software Terms Without Definitions

I'm often bemused by words in the software industry, aka computer science. It's generally OK when industries just make up new words for something new, but in software we re-use words that have (or at least had) a real definition, and then use them completely differently. Or worse still, we twist the definition just enough that it's not obvious the meaning has changed. Sometimes a word even had a good software meaning, but it's been killed over time – like artificial intelligence, or AI.

With that in mind, and without a lot of blather, I just wanted to vent and list a bunch of them here. I'm not going to define them properly, because how could I? If I've missed your favorite, let me know in the comments, on Twitter, etc. If you disagree, let me know and we can argue. 😉

The list is alphabetical, because I'm a human and think that way. If I were ordering it by capacity to annoy, it'd be AI, then software engineer, and everything else beneath. Enjoy!

Agile – you’d think this had a good definition but ask around and see what happens.

API – used to have a good meaning but no longer. h/t to @keith_wilson.

Artificial Intelligence or “AI” – this term has lost all meaning. I have come to agree with Musk and others that real AI will be real dangerous, but nothing we currently call AI is “artificial intelligence” in that sense.

Computer science – You could argue this one is real as long as you apply it to hardware, but software? Forget about it. Blogs and rants on this are in the queue.

Cybersecurity – is it antivirus? firewall? software? coding?

DevOps – I thought this had a definition, but many, like my friend Theresa, disagree, so it must not.

Engine – this one is now just a noise word used to give something a fancier sounding name.

Framework – should be a great word. Was a great word. Now a marketing word.

Mock – seems simple enough, but it’s surprisingly broad. Some even think it includes service virtualization.

Platform – again a term that had a meaning once upon a time, now it just means “some package of software we sell”.

Service virtualization – this is a fair call. The original meaning of the term has been overloaded and extended and the “new” meaning has become more common in the software testing world, while the “old” meaning still holds true for hardware, deployment, and networking people.

Software engineering – please, this is one of the worst. Most people who call themselves software engineers don't even begin to behave like engineers. If they are engineers, what particular standards were they taught that all others with the same title were also taught? I thought so.

Standards – You think this one has a meaning, don’t you? In “engineering” standards means something. If you’re an engineer, you already know what I’m saying. If you don’t get this, you’re not an engineer.

How did this happen? Is marketing to blame? Or is it just that there is no "software science" even if there is "computer science"?

Again, if you have a favorite let me know and I'll add it to the list. If you disagree, I'm always up for a good Twitter argument. If we get enough, I might turn this into a permanent hall-of-shame list. I feel like I'll come up with a bunch more myself as soon as I hit publish.

[Update – suggestions coming in already. I’m putting them in proper alphabetical place, but will reference the source.]

Hardening Your Software Webinar

I've long been an advocate for turning software development into software engineering. By this I mean that we need to start following known best practices and using the tools and processes that have been proven to help produce better code. It's amazing how often software developers ignore standard things that everyone knows make for better code.

In an effort to promote understanding, I'm doing a two-part webinar series with Parasoft on this topic, this Thursday the 22nd and next Thursday the 29th. Come join us and learn how getting back to the basics is a great way to harden your software and improve security, safety, and reliability.

Overview

The best way to fundamentally improve software is simply to get back to software engineering fundamentals. But reaping the benefits of these fundamentals (such as static code analysis, runtime analysis, and unit testing) requires using the practices effectively, and ineffective practices persist at organizations around the world. Noisy unit test suites get ignored and hide real issues that surface after deployment. Static analysis that focuses on simple bug-finding instead of real defect prevention is a missed opportunity, forcing teams to react to software issues rather than take a proactive stance.

In this two-part webinar series, we’ll go into detail on how to reap maximum benefits from fundamental software development practices, showing you how to use them effectively by leveraging Parasoft’s automated testing tools.

In the first session, we’ll concentrate on process, setup, and configuration, to provide you with actionable takeaways around:

  • How to harden your code with static code analysis to increase safety and prevent cyber attacks, including which coding standards are the best place to start
  • How to add runtime error detection to your testing process to find bugs early and avoid reliability issues in the field
  • How unit test automation reduces the effort of creating and maintaining test suites

In the second session, we’ll show you how to integrate automated testing tools into your existing software development process. You will learn how these tools can run as part of continuous integration, inside your favorite development environment. We’ll focus on:

  • How to create tests more quickly for C, C++, Java, and .NET by building on ready-made frameworks
  • How to win at continuous testing by leveraging automation and analysis
  • How to streamline compliance efforts that are normally tedious, with efficiency provided by static code analysis and unit testing

Join us June 22nd and June 29th to see for yourself how easy the fundamentals can be, and how they can help you perfect your software.

Get Started with Free Service Virtualization

Free service virtualization – sounds great! Whenever you hear "free" you should get nervous; I know I do. After I wrote this title I looked at it and immediately hated it. But here's the thing – at my day job at Parasoft we've just taken one of our really great products, Parasoft Virtualize, and made a free "community edition" version of it.

So who needs this and why should you care? Well, software applications have gotten a lot more complex in the last decade. Time was, you had a simple monolithic desktop application and that's all you had to worry about. Some of them had a little connectivity, like to a database, or maybe simple external dependencies, but mostly they stood on their own. Today's "applications" look more like systems or even systems-of-systems. It's not uncommon to have a relatively small core application surrounded by a plethora of dependencies: databases, cloud APIs that provide data, shipping services, payment services, and even connections to physical devices in the real world – the Internet of Things, or IoT.

That's where "service virtualization" technology comes in. I know, I know, it's a horrible name and it's already caused you to think it's something other than what it is. Nothing I can do about it; that name is in use by the analysts and I have no control over it. I think of it more like "communication emulation", in that it emulates the communication. Think of it this way: instead of APIs linked into an application as part of the compilation process, we now have services that are accessed live and dynamically – meaning we talk to them and they talk to us. Even in the IoT world of SmartHome or SmartFactory or SmartCity, it's all about pieces talking to each other. This gives us remote info, remote control, and even some degree of autonomous decision making – like the NEST thermostat. Initially I used the app to control the thermostat to my liking; now it just figures out what I was doing and mostly does it for me.

Testing these kinds of systems is a huge pain. You need a test lab that has one of everything you're connecting to. If you're updating some of them, then you need a lab with the old one AND the new one – like a new version of Oracle or MySQL. Setting up the lab costs time and money, and then I have to fight with other teams to use it. Service virtualization lets me make fake (virtual) versions of the things I depend on, and then use them for testing instead of needing the real thing.

This not only makes it faster/easier/cheaper to test, but it frees IT to do other important things. Plus I can make these virtual things behave how I want them to – if I want them to flood the network, they will. If I want them to be fast or slow to respond, I can do that. If I want one of them to be a bad actor and pretend it’s been compromised, no problem. My testing will be more thorough in addition to easier.
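To make the idea concrete, here's a minimal sketch in Python of what a "virtual" dependency can look like. To be clear, this is not Parasoft Virtualize or any particular tool; the endpoint, payload, and behavior knobs are all made up for illustration. It's just a toy stand-in for a payment service that returns canned responses and lets you dial in latency or failures.

```python
# Toy "virtual service": a stand-in for a payment API that a system under test depends on.
# Illustrative only; endpoint, payload shape, and behavior knobs are hypothetical.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Behavior knobs: tweak these to emulate a slow or misbehaving dependency.
RESPONSE_DELAY_SECONDS = 0.0   # e.g. 5.0 to simulate a sluggish service
FAIL_EVERY_NTH_REQUEST = 0     # e.g. 3 to make every third call return HTTP 500

class FakePaymentService(BaseHTTPRequestHandler):
    request_count = 0

    def do_POST(self):
        FakePaymentService.request_count += 1
        time.sleep(RESPONSE_DELAY_SECONDS)

        # Optionally act as a "bad actor" on every Nth request.
        if FAIL_EVERY_NTH_REQUEST and \
           FakePaymentService.request_count % FAIL_EVERY_NTH_REQUEST == 0:
            self.send_response(500)
            self.end_headers()
            return

        # Canned response instead of hitting the real payment provider.
        body = json.dumps({"status": "approved", "transaction_id": "TEST-0001"})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    # Point the application under test at http://localhost:8080 instead of the real service.
    HTTPServer(("localhost", 8080), FakePaymentService).serve_forever()
```

A real service virtualization tool does this at scale – recording real traffic, modeling stateful and non-HTTP protocols, and managing virtual assets from a UI – but the core trick is the same: point the application under test at a stand-in you control.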

Once you realize that service virtualization technology is for you, the next step is to choose a tool. Lots of people instantly check open source, because of course it "doesn't cost anything". I've done a pretty thorough check of the open-source SV tools, and at the moment they're only really useful if your whole world is centered on HTTP/HTTPS. Even then, they're missing lots of other features, like a UI to create, manage, and deploy the virtual assets. So now that Parasoft has created a free version, why not see what commercial software offers you? You can download it here.

Try it, you’ll like it.

Does Cloud Change Static Analysis?

Look, we all know that using static analysis tools can be a real pain. In the past I’ve talked about some of the reasons people struggle with the output of AppSec tools. Similarly people struggle with using static code analysis. I even did a poll about static analysis challenges at one point.

From the feedback I've gotten, it seems that some people think that doing static analysis via SaaS (i.e., the cloud) would address the problems I've discussed. There are real challenges in getting the most out of your static analysis, but the claim that cloud will somehow solve them is ridiculous marketing hype – why would anything change? Why should developers even be able to tell the difference? It doesn't address any of the core issues. There are benefits you can get from using the cloud for your static analysis, aka Static Analysis as a Service (SAaaS?), such as reduced up-front costs, saved IT costs, and easy deployment. But the most common problems are the same whether you run the tool in-house or use a service.

The core problem I mentioned was really getting developers to buy in. They need to believe in the results because they're the ones who have to fix them. Once the developers start picking and choosing what to fix, you've lost. They'll spend countless hours challenging results and explaining why they're not important – the inappropriately labeled "false positive". Changing the method or location of how you run static analysis may have ramifications on the overall process, but it will in no way affect how developers perceive the results.

Getting the static analysis tool running is one of the first steps in a successful rollout, but from there you’ve got to do several things to make sure that you’ll get the value you expect.

Static Analysis Policy

It begins with having a clear static analysis policy. The policy should include when static analysis must be run and when it can be skipped. It also needs to cover when suppressions are acceptable, how severity level affects fix-now vs. fix-later, which rules you must run, and how to handle legacy code. Legacy code is one of the big problems: do you fix everything in your code regardless of age? Can you just fix things when you happen to be editing one of the old files? Should you only run static analysis on the code you actually change in old files? These are real issues that will come up when you deploy, and if you don't decide what's proper, each developer will do their own thing.
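To give a feel for what such a policy can look like once it's written down, here's a small hedged sketch in Python. The thresholds, dates, and field names are invented for illustration; they're not taken from any particular tool or standard.

```python
# Hypothetical sketch: a static analysis policy expressed as data plus one gating rule.
# All thresholds, dates, and finding fields below are invented for illustration only.
from datetime import date

POLICY = {
    "run_on": ["every_commit", "nightly_full_build"],   # when analysis must run
    "fix_now_severities": {"critical", "high"},          # block the build on these
    "fix_later_severities": {"medium", "low"},           # file a ticket, don't block
    "legacy_cutoff": date(2015, 1, 1),                   # files older than this are "legacy"
    "legacy_rule": "fix_only_when_editing",              # don't mass-fix old code
}

def must_fix_now(severity: str, file_last_modified: date, edited_in_this_change: bool) -> bool:
    """Decide fix-now vs. fix-later for a single finding under the policy above."""
    is_legacy = file_last_modified < POLICY["legacy_cutoff"]
    if is_legacy and not edited_in_this_change:
        return False  # legacy code you didn't touch: defer per policy
    return severity in POLICY["fix_now_severities"]

# Example: a high-severity finding in a legacy file you just edited must be fixed now.
print(must_fix_now("high", date(2012, 6, 1), edited_in_this_change=True))  # True
```

The point isn't the specific numbers; it's that the decisions get made once, centrally, instead of once per developer.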

Training

Developers need to be trained to use static analysis. Usually people remember to train on the mechanics of the tool, but not the further training that ensures success. Developers need to know when and how to suppress – does the suppression go into the code or into an external system? They need to know how to find more information about a problem. They need to understand what the severity levels mean and how severity affects their decisions.

It's also important that they understand the ramifications of a particular error. I've repeatedly had the experience of a team claiming a static analysis error was invalid when it was actually a really serious problem they didn't understand. Heartbleed is a classic example of this behavior. Finally, your training needs to shift the users' mindset from "static analysis finds bugs" to "static analysis finds bad code". This distinction is crucial to getting the most value. The "bug-finder" rules in static analysis are the proverbial tip of the iceberg – only a small part of the full value. The bigger value is in the rich set of coding standards, representing hundreds of person-years spent crafting best practices that help you harden your code and avoid problems in the first place.
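As a hedged illustration of that "bad code vs. bugs" distinction (my own toy example, not one taken from any specific ruleset): a mutable default argument in Python isn't a failure yet, but coding-standard rules flag it because it's exactly the kind of code that turns into a bug later.

```python
# Example of "bad code" rather than a bug: a mutable default argument.
# Tools such as pylint flag this pattern (dangerous-default-value) even though
# the function may appear to work fine in simple tests.
def add_tag(tag, tags=[]):          # BAD: the default list is shared across calls
    tags.append(tag)
    return tags

print(add_tag("urgent"))    # ['urgent']
print(add_tag("billing"))   # ['urgent', 'billing'] -- surprise, state leaked between calls

# The hardened version the coding standard pushes you toward:
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```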

Persistent Static Analysis Suppressions

Suppression handling can make or break your project. The symptom of getting it wrong is developers saying things like "I keep fixing the same things." What they mean isn't that they keep fixing them, but that they keep seeing the same violation and tagging it as invalid or acceptable every time the tool runs or the version changes. They understandably conclude the tool is stupid.

There are two schools of thought on suppressions. One is that they belong in an external system, whether that's the static analysis tool itself, a file in source control, or a spreadsheet. The other is that they belong in the code. There are advantages to both, but I prefer the "suppressions in code" method. The main benefit is that old suppressions never come back to haunt you, because they're tightly coupled to the code. A secondary benefit is that suppressions end up in source control: you'll know who made them and when, and if they left a comment you'll even know why. This is really important if you operate in a compliance environment like FDA, aerospace DO-178B/C, or automotive ISO 26262.
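As a small illustration of the "suppressions in code" style, here's what it can look like using flake8's inline noqa marker as a stand-in. Your static analysis tool will have its own suppression syntax; the rule code and justification below are only examples.

```python
# In-code suppression example, using flake8's "noqa" marker as a stand-in for
# whatever suppression mechanism your static analysis tool provides.

# Suppressed: multiple imports on one line (E401); accepted per team style guide.
import os, sys  # noqa: E401

def legacy_entry_point():
    # Because the suppression lives in the file, it survives re-scans and tool
    # upgrades, and "git blame" shows who added it and when; the comment says why.
    return os.path.join(sys.prefix, "lib")
```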

Good Documentation

I’ve alluded to the idea that the docs need to explain why a particular static analysis rule is important. I’ve got several things I look for in good tool documentation.

  • Example bad code
  • Example fixed code
  • Impact – what will happen if you don’t fix this violation
  • Possible security relevance
  • Resources to learn more
  • Integration with the IDE – right-click on a violation to see the docs

Summary

Getting your static analysis rollout right is crucial to your success. There are many options, from on-premise to cloud-based, and you should carefully weigh the benefits of each approach. But don't expect the cloud to solve all the challenges you'll face. There is no substitute for a well-planned tool deployment.
