Here’s another one of my completely unscientific polls – this time about AppSec. I find it interesting to know what others think about these issues – if you have something you’d love to poll the Code Curmudgeon’s readers about, let me know in the comments.
Today’s poll is about the pain points in your AppSec tools. It might be SAST, DAST, IAST or anything else. What about it makes you the most crazy?
In the second part of my series on what we can do to contain and combat the recent rash of security breaches I’d like to focus on the software development side. I’m going to lay out some of the reasons why we’ve got such vulnerable software today and what we can do about it. Part one of this series discussed things you can do personally as a consumer to better protect yourself.
Let’s start with some of the most common reasons why we aren’t getting secure software. Here’s the short-list in no particular order:
The list is actually very intertwined, but I’ll try to separate these issues out the best I can. I’m focusing primarily on software security, rather than network or physical. They’re just as important, but we seem to be doing a better job there than in the code itself.
It seems obvious that training is critical, but in the software business nothing can be taken for granted. I’ll talk more about the myth of “software engineering” in the near future, but for now just remember that software is NOT engineering, not at most organizations. Sure, there are plenty of engineers who write software, but their engineering credentials are not for software development, and somehow they leave sound engineering practices at the door when they come to work.
Developers need to be trained in security. This means they need to understand the role of prevention by avoiding unsafe constructs and practices. They need to be able to spot ways in which their code can be vulnerable. They need to be more paranoid than they currently are. They need to know what standards and tools are out there and how to make the best use of them.
Recently I was at AppSec in Denver and had a discussion with a developer at a security company about input validation. Sadly, he was arguing that certain parts of the application were safe, because he personally hadn’t thought of a way to attack them. We MUST move past this kind of thinking, and training is where it starts.
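To make the input-validation point concrete, here is a minimal sketch of allowlist validation in Python. The field name and pattern are invented for illustration; the idea is that rejecting everything outside a known-good pattern beats trying to enumerate every attack you can personally think of.

```python
import re

# Hypothetical allowlist for a username field: only letters, digits,
# and underscores, 3 to 32 characters. Anything else is rejected
# outright, whether or not we can imagine an attack that uses it.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(value: str) -> str:
    """Return the value unchanged if it matches the allowlist, else raise."""
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value
```

The allowlist approach means the developer never has to prove that a given input is harmless; anything not explicitly permitted is refused.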
You can’t do what you don’t know how to do.
When developers write code, they often don’t think at all about security. In the security mindset we think “How safe is this?” and “How could this be attacked?” and “What happens when things go wrong?”
Being good at software security requires a lot of expertise. Great security comes with input from database experts, networking experts, sysadmins, developers, and every other piece of the application ecosystem. As each talks about the attack surfaces they understand, the developers gain valuable information about how to secure their code.
A great resource for learning about common weaknesses and their consequences is CWE. Many have heard of the “CWE/SANS Top 25” coding standard, which lists the 25 most dangerous issues out of the roughly 800 weaknesses currently catalogued. These items help us get into the security mindset because they describe weaknesses in terms of technical impact, meaning what bad thing can happen to me if I leave this weakness in the code. Technical impact includes things like unwanted code execution, data exposure and denial-of-service.
Each CWE item lays out clearly why you need to worry about it. When someone tells me they don’t think a particular weakness in their code matters, I usually have them Google the name of the error, like “Uncaught exception”, along with “CWE”, and then go to the relevant CWE entry to show them how dangerous it can be. This is “Scared Straight” for programmers.
Thinking about security leads to secure code.
The lack of a security mindset comes from not having security as a serious requirement, or not having it as a requirement at all. We have to start making security part of the standard requirements for all software, and measuring it in a consistent, meaningful way.
There are those who say that security isn’t a quality issue, but they’re wrong. I see their point, that security requires specialized thinking and training, but in the end it is absolutely a quality issue. When you get hacked, the first thing in people’s minds is that you’ve got poor quality.
A great thing to do is add software security requirements to your development plan. That way everyone knows what to do, and expects that it will be scheduled properly and tested. If you’ve never done security before, add 3 simple requirements:
It won’t cover everything and it won’t be perfect, but it’ll get you started on a very solid foundation.
You get what you ask for. Ask for security.
Testing is important, in fact it’s critical. But we have learned for over 50 years that testing does not improve quality, it simply measures it. The old adage “You can’t test quality into a product” is equally true for software security, namely “You can’t test security into a product”.
When you’re trying to improve something like quality or security (remember, security is a quality issue) you have to make sure that you begin at the beginning. Quality and security must pervade the development process. It may seem old at this point, but Deming’s 14 points are still chock full of useful, effective advice. Especially point 3:
Cease dependence on inspection to achieve quality [security]. Eliminate the need for inspection on a mass basis by building quality [security] into the product in the first place.
All too often organizations are creating a security group (good idea) and only empowering them to test at the end of the cycle (bad idea). If you want your security group to be effective, they’ve got to get at the root causes behind the vulnerabilities in your software. When they find something during testing, they need to chase it upstream, eliminate the root cause, and then eliminate all other instances of the same problem, rather than just the one they were lucky enough to find during testing.
Testing alone will not secure your software. An ounce of prevention is worth a pound of cure.
Remember to focus both internally as well as externally. Many of the current breaches are a hybrid of security and physical access. This is the hacker’s holy grail.
Follow basic well-known security practices. If they’re not well-known to you, then start with training.
Control physical access
Train and monitor for social engineering, because it still works way too often. Just try it on your own people using a friend and see how far she can get.
Never ever use default passwords. Always reset anything you buy or install. If a vendor did it for you, check their work. I know of a cable provider that uses a template based on customers’ addresses. Probably most of their customers don’t realize their network is essentially wide-open.
Encrypt private data. These days you have to figure that data is going to get out at some point, so just encrypt anything you wouldn’t want to share. Passwords yes, but also email addresses, social security numbers, etc.
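For passwords specifically, the minimal version of “encrypt private data” is to never store them reversibly at all. A hedged sketch using only the Python standard library, with a salted key-derivation function (the iteration count here is illustrative; current guidance recommends tuning it to your hardware):

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    """Derive a salted digest; store both salt and digest, never the password."""
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

If the table leaks, the attacker gets salted digests rather than passwords, which is exactly the “assume it gets out” posture described above.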
Monitor for suspicious traffic and data access. Many attacks you don’t hear about are stopped this way, because someone noticed something funny going on. In some of the recent breaches monitoring was reporting bad behavior for weeks or months but no one paid attention. One organization said “We sell hammers” when told about suspicious behavior.
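Real monitoring pipelines are far richer than this, but even a toy threshold check illustrates the point: the signal is often there, and someone (or something) has to act on it. Everything in this sketch, from the event shape to the threshold, is invented for illustration:

```python
from collections import Counter

def suspicious_sources(events, threshold=5):
    """Return source IPs whose failed-login count meets the threshold.

    `events` is assumed to be an iterable of dicts like
    {"ip": "10.0.0.9", "result": "fail"}; a real system would consume
    logs from your auth service instead.
    """
    fails = Counter(e["ip"] for e in events if e["result"] == "fail")
    return sorted(ip for ip, count in fails.items() if count >= threshold)
```

The hard part, as the breaches above show, isn’t producing the alert; it’s making sure someone is responsible for reading it.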
We must move to a more proactive approach. The current trend in static analysis is to find bugs, and indeed many of the leading vendors have very fancy flavor-of-the-week approaches (called signatures), which put their software into the position of the same old, reactive, too-late problems of anti-virus. We must start building software that isn’t susceptible.
To be proactive, we have to train people in proper tools, processes, and techniques. Then formalize the use of that training in policies. Policies that include security best practices, requirements, and testing.
In static analysis we need to supplement the bug-finding with more preventative rules such as strict input validation rather than chasing potentially tainted data. All data sources, even your own database, should be validated because otherwise what a great way to lay an easter egg. (Remember that security paranoid mindset from before?) Use prepared statements and strong validation and you can avoid getting yourself into the SQL Injection Hall of Shame.
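A minimal sketch of the prepared-statement half of that advice, using Python’s stdlib sqlite3 (the table and the hostile string are made up for illustration). The “?” placeholder sends the value to the database as data, never as SQL text, so even a classic injection payload is stored harmlessly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# A classic injection payload, treated purely as data by the placeholder.
hostile = "Robert'); DROP TABLE users;--"
conn.execute("INSERT INTO users (name) VALUES (?)", (hostile,))

# Parameterized lookup: the table survives, the value round-trips intact.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (hostile,)
).fetchall()
```

Combine this with the strict validation discussed earlier and most injection-class weaknesses never make it into the code in the first place.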
We need to start looking for root problems rather than exploits. Take the Heartbleed problem. Despite claims to the contrary, the underlying issues were available from any serious static analysis tool that takes a preventative approach. What we didn’t have was a flavor-of-the-month static analysis rule that looked for the particular implementation. All we had was a root-cause best-practice rule not being used.
Root cause should have been enough. Weak code is weak code, whether or not we have a current exploit, which is all the “signature” approach to security provides. Finding exploits and checking their validity is not only nearly impossible from a coverage perspective (just have a talk with Richard Bender if you don’t believe me), it certainly takes more time than building hardened software in the first place.
That’s right, it’s both faster and easier to just code to a strict safe coding standard than it is to try and figure out that your code is safe, or chase defects from a weak unsafe application. Stop chasing bugs and work to build security in instead. Get yourself a good static analysis tool (Or use the SWAMP) and a copy of CWE and get started.
DevOps is a movement that recognizes how tightly coupled development is to operations in modern applications. With shortened lifecycles and increasing amounts of software, the old boundary has broken down. DevOps is in some sense the sysadmin form of Agile, allowing teams to not only produce software faster, but get it deployed.
The goal is to no longer have developers throw software over the fence, which is normally followed by Ops throwing it right back because it doesn’t work. Instead the goals of Ops are embedded in the software testing and even requirements.
For DevOps to function properly, the entire release process has to be highly automated and deterministic. By deterministic, I mean that there are policy-driven, measurable metrics that determine the important questions surrounding application development. Namely: is the software ready? Does it have the required functionality? Will it function properly post-deployment? Is it secure? All the things that frequently have squishy choices around them have to be controlled so they can operate in a repeatable fashion. For example, you may have a test coverage requirement of 80%, or a unit test pass percentage of 90%.
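Such a policy gate can be sketched in a few lines. The metric names and thresholds here are invented for illustration; a real pipeline would pull them from your coverage, test, and scanning tools, but the shape is the same: the release question becomes a deterministic yes/no instead of a squishy judgment call.

```python
# Hypothetical release policy: thresholds chosen for illustration only.
POLICY = {"coverage": 80.0, "test_pass_rate": 90.0, "open_critical_vulns": 0}

def release_ready(metrics: dict):
    """Return (ready, failing_metric_names) for a candidate build."""
    failures = []
    if metrics.get("coverage", 0.0) < POLICY["coverage"]:
        failures.append("coverage")
    if metrics.get("test_pass_rate", 0.0) < POLICY["test_pass_rate"]:
        failures.append("test_pass_rate")
    if metrics.get("open_critical_vulns", 1) > POLICY["open_critical_vulns"]:
        failures.append("open_critical_vulns")
    return (not failures, failures)
```

Because the gate reports *which* metric failed, the team gets an actionable answer rather than an argument about whether the build “feels” ready.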
In that world, it may seem that static analysis isn’t an important topic, but it turns out it’s actually an enabling technology. You can and should use static analysis not just as a bug-finder, but to get at the root cause of software that is vulnerable to produce more robust applications.
Friday I’ll be giving a webinar for Parasoft that will talk more about the details of using static analysis and DevOps. It’s free and will be informative. Bring your questions, and I hope to see you there.
I’m heading to the Better Software Conference West in Las Vegas tomorrow. If you want to make your software better, this is the place to do it. Just bring your software along, and when you come home it’ll be all better… seriously!
Well, maybe not. But what you CAN do is come and learn lots of great things that will help you build better software. There are some great sessions planned as always, and an expo floor to talk to the companies that make the tools you need.
I have a session on Thursday at 2:15pm titled “Hardening Your Code in a Post-Heartbleed World: What Role Does Static Analysis Play?” where I’ll be talking about how to make the most out of static code analysis. If you want to move from a reactive position in cybersecurity to a proactive one, come learn how you can harden your code and prevent problems in the first place.