We all know that automotive software is becoming increasingly complex. It’s gotten to the point that high-end cars not only have more code than jet fighter aircraft, but a LOT more code – in some cases as much as 100 million lines of code. Given that the automobile is a complex creation with lots of smart parts talking on multiple buses, trying to ensure that it’s bug-free is a frustrating and difficult task.
Anyone who knows me knows that I’m a huge proponent of software-development-as-engineering. This means that instead of simply chasing bugs and trying to test quality into a product, we change the way we build software and start by producing code that is less susceptible to bugs. Static analysis is the way to do this. For several years now a few vendors have been pushing the idea that static analysis is only for finding bugs, but its real power is in prevention. If you want your car to not have serious problems when it rolls out the door, static analysis is your best friend.
Last week Adam Trujillo and I wrote an article in Embedded Computing Design detailing three simple static analysis rules to give you a jump-start on producing better automotive software. As it turns out, there are a few MISRA rules that end up preventing a large number of very common and potentially dangerous problems, such as buffer overflow.
It’s a short article but very practical. Give it a read and if you want to know more, be sure to let us know.
In the second part of my series on what we can do to contain and combat the recent rash of security breaches, I’d like to focus on the software development side. I’m going to lay out some of the reasons why we’ve got such vulnerable software today and what we can do about it. Part one of this series discussed things you can do personally as a consumer to better protect yourself.
Let’s start with some of the most common reasons why we aren’t getting secure software. Here’s the short-list in no particular order:
The list is actually very intertwined, but I’ll try to separate these issues out as best I can. I’m focusing primarily on software security, rather than network or physical security. Those are just as important, but we seem to be doing a better job there than in the code itself.
It seems obvious that training is critical, but in the software business nothing can be taken for granted. I’ll talk more about the myth of “software engineering” in the near future, but for now just remember that software development is NOT engineering at most organizations. Sure, there are plenty of engineers who write software, but their engineering credentials are not in software development, and somehow they leave sound engineering practices at the door when they come to work.
Developers need to be trained in security. This means they need to understand the role of prevention by avoiding unsafe constructs and practices. They need to be able to spot ways in which their code can be vulnerable. They need to be more paranoid than they currently are. They need to know what standards and tools are out there and how to make the best use of them.
Recently I was at AppSec in Denver and had a discussion with a developer at a security company about input validation. Sadly, he was arguing that certain parts of the application were safe because he personally hadn’t thought of a way to attack them. We MUST move past this kind of thinking, and training is where it starts.
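One antidote to “I couldn’t think of an attack” is allowlist validation: instead of trying to enumerate every possible attack, accept only input you already know is good. Here’s a minimal Python sketch of the idea; the field names and patterns are hypothetical examples, not part of any particular application:

```python
import re

# Allowlist patterns for the fields we expect (hypothetical examples).
# The point: accept only known-good shapes, rather than trying to
# recognize every conceivable malicious input.
PATTERNS = {
    "username": re.compile(r"[A-Za-z0-9_]{3,32}"),
    "zip_code": re.compile(r"\d{5}(-\d{4})?"),
}

def validate(field: str, value: str) -> bool:
    """Return True only if value fully matches the allowlist for field."""
    pattern = PATTERNS.get(field)
    return bool(pattern and pattern.fullmatch(value))

print(validate("username", "alice_99"))    # True: matches the allowlist
print(validate("username", "a'; DROP--"))  # False: quote chars never allowed
print(validate("zip_code", "90210"))       # True
```

The developer no longer has to imagine attacks he hasn’t thought of; anything outside the allowlist is rejected by default.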
You can’t do what you don’t know how to do.
When developers write code, they often don’t think at all about security. In the security mindset we think “How safe is this?” and “How could this be attacked?” and “What happens when things go wrong?”
Being good at software security requires a lot of expertise. Great security comes with input from database experts, networking experts, sysadmins, developers, and every other piece of the application ecosystem. As each talks about the attack surfaces they understand, the developers gain valuable information about how to secure their code.
A great resource for learning about common weaknesses and their consequences is CWE. Many have heard of the “CWE/SANS Top 25”, a list of the 25 most dangerous issues out of the roughly 800 that CWE currently catalogs. These items help us get into the security mindset because they describe weaknesses in terms of technical impact, meaning what bad thing can happen to me if I leave this weakness in the code. Technical impact includes things like unwanted code execution, data exposure, and denial-of-service.
Each CWE item lays out clearly why you need to worry about it. When someone tells me they don’t think a particular weakness in their code matters, I usually have them Google the name of the error, like “Uncaught exception”, along with “CWE”, and then go to the relevant CWE entry to show them how dangerous it can be. This is “Scared Straight” for programmers.
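To see why an uncaught exception (CWE-248) has real technical impact, consider a hypothetical request loop sketched in Python: left unhandled, one malformed request would terminate the whole service, which is exactly the denial-of-service outcome CWE warns about.

```python
def handle_request(raw: str) -> int:
    # int() raises ValueError on malformed input. Left uncaught, one
    # bad request would terminate the entire service loop -- a
    # denial-of-service, per the CWE technical-impact description.
    return int(raw) * 2

def serve(requests):
    """Process requests, containing each failure to the single bad input."""
    results = []
    for raw in requests:
        try:
            results.append(handle_request(raw))
        except ValueError:
            results.append(None)  # reject this request, keep serving
    return results

print(serve(["2", "oops", "5"]))  # [4, None, 10] -- service survives
```

Without the try/except, the “oops” request would take the service down for everyone, not just the one caller.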
Thinking about security leads to secure code.
The lack of a security mindset comes from not having security as a serious requirement, or not having it as a requirement at all. We have to start making security part of the standard requirements for all software, and measuring it in a consistent, meaningful way.
There are those who say that security isn’t a quality issue, but they’re wrong. I see their point, that security requires specialized thinking and training, but in the end it is absolutely a quality issue. When you get hacked, the first thing in people’s minds is that you’ve got poor quality.
A great thing to do is add software security requirements to your development plan. That way everyone knows what to do, and expects that it will be scheduled properly and tested. If you’ve never done security before, add three simple requirements:
It won’t cover everything and it won’t be perfect, but it’ll get you started on a very solid foundation.
You get what you ask for. Ask for security.
Testing is important; in fact, it’s critical. But we have learned over more than 50 years that testing does not improve quality, it simply measures it. The old adage “You can’t test quality into a product” applies equally to software security: “You can’t test security into a product”.
When you’re trying to improve something like quality or security (remember, security is a quality issue), you have to make sure that you begin at the beginning. Quality and security must pervade the development process. It may seem old at this point, but Deming’s 14 points are still chock-full of useful, effective advice. Especially point 3:
Cease dependence on inspection to achieve quality [security]. Eliminate the need for inspection on a mass basis by building quality [security] into the product in the first place.
All too often, organizations are creating a security group (good idea) and only empowering it to test at the end of the cycle (bad idea). If you want your security group to be effective, they’ve got to get at the root causes behind the vulnerabilities in your software. When they find something during testing, chase it upstream, eliminate the root cause, and then eliminate all other instances of the same problem, rather than just the one you were lucky enough to find during testing.
Testing alone will not secure your software. An ounce of prevention is worth a pound of cure.
Remember to focus internally as well as externally. Many of the current breaches are a hybrid of software attacks and physical access. This is the hacker’s holy grail.
Follow basic well-known security practices. If they’re not well-known to you, then start with training.
Control physical access
Train and monitor for social engineering, because it still works way too often. Just try it on your own people using a friend and see how far she can get.
Never ever use default passwords. Always reset anything you buy or install. If a vendor did it for you, check their work. I know of a cable provider that generates passwords from a template based on customers’ addresses. Probably most of their customers don’t realize their network is essentially wide open.
Encrypt private data. These days you have to figure that data is going to get out at some point, so just encrypt anything you wouldn’t want to share. Passwords, yes, but also email addresses, social security numbers, etc.
Monitor for suspicious traffic and data access. Many attacks you don’t hear about are stopped this way, because someone noticed something funny going on. In some of the recent breaches monitoring was reporting bad behavior for weeks or months but no one paid attention. One organization said “We sell hammers” when told about suspicious behavior.
We must move to a more proactive approach. The current trend in static analysis is to find bugs, and indeed many of the leading vendors have very fancy flavor-of-the-week approaches (called signatures) that put their software into the same old reactive, too-late position as anti-virus. We must start building software that isn’t susceptible in the first place.
To be proactive, we have to train people in proper tools, processes, and techniques, then formalize the use of that training in policies that include security best practices, requirements, and testing.
In static analysis we need to supplement bug-finding with more preventative rules, such as strict input validation, rather than chasing potentially tainted data. All data sources, even your own database, should be validated; otherwise you’ve left a great place for someone to plant an Easter egg. (Remember that paranoid security mindset from before?) Use prepared statements and strong validation and you can avoid getting yourself into the SQL Injection Hall of Shame.
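To make the prepared-statement point concrete, here’s a minimal sketch using Python’s built-in sqlite3 module; the table and the malicious input are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Unsafe: string concatenation lets the input rewrite the query itself.
unsafe = "SELECT role FROM users WHERE name = '" + user_input + "'"
bad_rows = conn.execute(unsafe).fetchall()
print(bad_rows)  # [('admin',)] -- the OR clause matched every row

# Safe: a prepared statement treats the input purely as data.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the malicious string matches no actual user
```

A preventative static analysis rule can flag the string-concatenated query on sight, with no need for a signature describing any particular exploit.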
We need to start looking for root problems rather than exploits. Take the Heartbleed problem. Despite claims to the contrary, the underlying issues were detectable by any serious static analysis tool that takes a preventative approach. What we didn’t have was a flavor-of-the-month static analysis rule that looked for that particular implementation. All we had was a root-cause, best-practice rule that wasn’t being used.
Root cause should have been enough. Weak code is weak code, whether or not we have a current exploit, which is all that the “signature” approach to security provides. Finding exploits and checking their validity is not only nearly impossible from a coverage perspective (just have a talk with Richard Bender if you don’t believe me), it certainly takes more time than building hardened software in the first place.
That’s right, it’s both faster and easier to code to a strict, safe coding standard than it is to try to prove that your code is safe, or to chase defects in a weak, unsafe application. Stop chasing bugs and work to build security in instead. Get yourself a good static analysis tool (or use the SWAMP) and a copy of CWE, and get started.
I’m heading to the Better Software Conference West in Las Vegas tomorrow. If you want to make your software better, this is the place to do it. Just bring your software along, and when you come home it’ll be all better… seriously!
Well, maybe not. But what you CAN do is come and learn lots of great things that will help you build better software. There are some great sessions planned as always, and an expo floor to talk to the companies that make the tools you need.
I have a session on Thursday at 2:15pm titled “Hardening Your Code in a Post-Heartbleed World: What Role Does Static Analysis Play?” where I’ll be talking about how to make the most out of static code analysis. If you want to move from a reactive position in cybersecurity to a proactive one, come learn how you can harden your code and prevent problems in the first place.
The conference theme this year is Achieving Safe, Effective, and Reliable Software. It’s an important topic for anyone who needs to make sure their software works, in fields like automotive, aerospace, defense, financial, medical devices, nuclear, and telecommunications.
My tutorial session is one of the “pre-sessions” on Monday, Feb 24th, from 1:00pm – 5:00pm. It’s called “Getting the most out of static analysis”.
Static analysis has the potential to drastically improve software quality, reduce risks associated with the software development process, and increase development team productivity. Nonetheless, many organizations adopt a static analysis tool or development testing suite only to abandon it after the implementation yields noisy false positives, increased effort, and little to no ROI. In most cases, the problem isn’t with static analysis as a concept. Unsuccessful static analysis implementations are usually the result of process failures, such as a lack of planning and a vast gap between business expectations and development policies.
As part of interactive workshop exercises, attendees will apply a pseudocode methodology to help them quantify the cost of analysis, which can then be weighed against risks. The goal of the interactive exercises is to determine when, if, and for which components of the attendee’s application static analysis is appropriate.
In this tutorial, attendees will learn
Various implementations of static analysis technologies, such as pattern-based analysis and flow analysis
How to properly configure their static analysis tools and implement the right type of static analysis for the application (agile, safety-critical, etc.)
How to ensure that static analysis tools are connected to business needs and the role of policy in aligning development activities with business expectations
How to reduce noise—static analysis violations that aren’t contributing to the progress of the application development
How to move from a debugging process to a preventative strategy
How to avoid the top 10 static analysis mistakes most organizations make
It’s going to be a whole bunch of practical information to make sure you’re doing what works best, and you’ll be able to measure ROI for your own organization. Plus, we’re going to have some fun doing it. You can register at ICSQ. Hope to see you there.