Category Archives: Security

Security-related issues.

Why We View AppSec Vulnerabilities As False

Firefighters in the lobby, false alarm, 4am. I’d like to say a few things as a follow-up to my article on theoretical appsec vulnerabilities last week. The article generated some interesting conversation about the idea. Jeff Williams asked what I thought was an interesting question, namely why do people push back on security analysis results? His take is that accuracy and false alarms are the big reason, and that tool vendors need to do better.

So let’s take a look at it. I’ve put up a new poll on this question, so feel free to pop over there and let me know what you think. I’ll write about the results in the near future. In a previous poll about static analysis results in general, the largest answer was training, followed by false positives and management.

In no particular order, here are some reasons why people justify ignoring the results from their security scanners.

Noise

Noise, noise, noise! Whatever you call it, whether it’s false positives, false alarms, theoretical problems, too many messages or anything else, noise is a big pain. Jeff says accuracy matters, and indeed it does. But in my experience developers are awfully quick to pull the “noise” trigger. It would be fun to do a poll/quiz with a couple of real code samples and see what developers think. The basic application security quiz at the AppSec Notes blog is a start, but it doesn’t expose the knowledge issues around recognizing real security vulnerabilities in actual code. In other words, would you recognize why a piece of code is dangerous if you saw it? I’d like to see a quiz that has both good code and bad code examples and the question “security problem or OK?”. This is trickier than it sounds at first, because you can’t just have bad code and fixed code; you need bad code and then suspicious code that would likely raise a false alarm. Now there’s a real test. If anyone knows of such a beast or wants to build one, make sure to let me know.
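
If I were writing such a quiz, here’s a minimal, hypothetical pair of items in Java (the class, method, and table names are mine, purely for illustration). The first is genuinely injectable; the second concatenates only a compile-time constant, so some scanners will flag it even though no attacker-controlled data ever reaches the query.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class QuizSamples {

    // "Security problem or OK?" -- problem: user input is concatenated
    // directly into the SQL string, a classic injection vulnerability.
    ResultSet findUserBad(Connection conn, String userName) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery("SELECT * FROM users WHERE name = '" + userName + "'");
    }

    // "Security problem or OK?" -- OK: the query is still built with
    // concatenation, so a pattern-based scanner may flag it, but the only
    // concatenated value is a compile-time constant, not tainted data.
    ResultSet findAdminsSuspicious(Connection conn) throws SQLException {
        final String role = "admin"; // constant, never user input
        Statement stmt = conn.createStatement();
        return stmt.executeQuery("SELECT * FROM users WHERE role = '" + role + "'");
    }
}
```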

Noise can be made even worse if you have misguided objectives and metrics. For example, if you base success on “number of issues found” you’re likely to find the greatest number of unimportant issues. There are better definitions of success discussed in the presentation AppSec Broken Window Theory, which is worth reading. One quick example is to track success by the number of “flaws not found” in new applications.

One other point on noise, particularly in the area of what people like to call false positives. I have the “Curmudgeon Law of Static Analysis Tool Usefulness”: the more clever a static analysis tool is, the more likely its output will be perceived as a false positive. This is because it’s easy to accept a result you understand, but if a tool tries to tell you something you don’t know (i.e. the most useful result) you are least likely to accept it. That’s why I keep pushing on a) better presentation of results to explain WHY a finding matters, and b) better training.

Bad Workflow or Process

This is closely related to the noise problem. If you have a bad workflow, process, or configuration you will find your security scanning painful. Sometimes this pain is noise; sometimes it’s extra effort, slow work, or other symptoms. For example, I’ve seen people run static analysis on legacy code where the corporate policy was “no changes unless there is a field-reported bug”. Scanning code you don’t intend to fix, or looking for items you don’t intend to fix, certainly contributes to noise, effort, and cost.

The takeaway is to make sure your tools are configured optimally and that the way they integrate into your process is not causing headaches. I could go on a long time on this topic alone, but we’ll save that for another day. How can you tell if it’s causing headaches? Ask those who are using the tools and those who are required to address their output.

Another great read on this topic is Does Progress Come From Security Products or Process, where Gunnar Peterson says “Process engineering is deeply unsexy. It is about as far removed from the glamour, fashion show world of security conferences as you can possibly imagine, but its where the actual changes occur.”

Prioritization

Again, this is a category closely related to noise. Noise is a symptom of a lack of prioritization in your appsec findings. Ideally you’re storing the output from all the security tools in your arsenal in an intelligent, data-driven system that will help you determine which items are most important. No one wants to spend weeks fixing something that is unlikely to happen and whose consequences would be minimal even if it did.

AppSec prioritization must take risk into account. Will this happen? Does this happen today? How hard is it to exploit? How hard is it to prevent? Plus all the other usual risk management questions. You know what to do: if you have too much noise from your tools, you need to put some intelligence behind it to get to what matters.
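
As a rough sketch of what “putting some intelligence behind it” could look like, here’s a deliberately simplified, made-up weighting in Java. It is not CWSS or any real framework, just likelihood times impact with a bump for findings that are reachable today; every name and number here is an illustrative assumption.

```java
import java.util.Comparator;
import java.util.List;

// A simplified sketch of risk-based prioritization of scanner findings.
// The scoring model is invented for illustration, not a standard.
record Finding(String id, double likelihood, double impact, boolean reachableToday) {
    double riskScore() {
        // Findings on code paths that are exploitable today get extra weight.
        double exposure = reachableToday ? 1.5 : 1.0;
        return likelihood * impact * exposure;
    }
}

class Prioritizer {
    // Highest-risk findings first, so people fix what matters before the noise.
    static List<Finding> prioritize(List<Finding> findings) {
        return findings.stream()
                .sorted(Comparator.comparingDouble(Finding::riskScore).reversed())
                .toList();
    }
}
```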

Note that when I say prioritization I don’t necessarily mean triage. Triage implies a human-driven process (read: bottleneck) that makes painful tradeoffs in a short time span rather than an orderly, thought-out process. In the Broken Windows presentation I mentioned earlier, Erik Peterson says “Triage != Application Security Program” and I wholeheartedly agree.

This is why I’m a fan of attempts to develop comprehensive application risk frameworks. One of these is the Common Weakness Scoring System (CWSS), managed at MITRE. CWSS proposes a way to properly weigh security findings in the context of your application, which should enable better automated prioritization. It goes hand-in-hand with the Common Weakness Risk Analysis Framework (CWRAF).

I know they’re far from perfect and in fact are frequently way too complicated and painful to use. Research in this area will ensure continued improvement and I expect it to ultimately become the critical driver in the appsec scanning space.

Test Security In

We’ve all heard the adage that you “can’t test quality into an application”. This is based on Deming’s third point, “cease dependence on inspection to achieve quality.” Early on, many disagreed with Deming, but today it’s an accepted truth that inspection alone will not solve your problems. Yet somehow in AppSec many believe that we can “test security into an application.” This attitude is easy to recognize because it results in security processes that are completely orthogonal to development, where all security activities are post-development, QA-like activities. That method didn’t work for quality and it won’t work for security.

In order to get ahead of software security issues we have to start building secure software in the first place. There’s a great site sponsored by the US government called Build Security In, and it has lots of useful information to get you going. Also take a look at the Building Security In Maturity Model. I realize that “build it in” is easier to say than to do, and that building it in brings its own challenges. But as with manufacturing and quality, it’s the only way to get long-term, sustainable improvement.

Training

The SANS Institute did a survey on application security programs and practices and found:

“Although suppliers continue to improve the speed and accuracy of SAST tools and make them easier to use, developers need security training or expert help to understand what the tools are telling them, which vulnerabilities are important, why they need to be fixed and how to fix them.”

The report notes that only 26% of organizations had secure coding training that was working well or mandated. Further, it reads:

“A lack of knowledge and skills is holding back Appsec programs today, and it is preventing organizations from making real progress in Appsec in the future. The number one obstacle to success reported in this year’s survey is a shortage of skilled people, part of a bigger problem facing the IT security industry in general, as recent studies by Forrester Research and (ISC)² show.

Training and education are needed to address this skills shortage—not just training more Infosec and Appsec specialists, but training developers and managers, too. Fewer than one-quarter of respondents have training programs that are ongoing and working well, and secure coding training ranks low in the list of practices that organizations depend on in their Appsec programs today. This needs to change. ”

OWASP talks about the importance of training

“From information security perspective, the holistic approach toward application security should include for example security training for software developers as well as security officers and managers…”

A survey in 2010 showed that security training was not being done and I suspect little has changed today.

“Nearly 80% of personnel at government agencies and contractors said their organization does not provide sufficient training and guidance for software security application development and delivery, according to a new survey.”

Summary

Tools alone and tool accuracy won’t fully solve the problems of AppSec. Again, from the SANS survey:

“There aren’t any next generation tools or other silver bullets on the horizon that will solve the problem of secure software. Writing secure software is about fundamentals: thoughtful design, careful coding, disciplined testing and informed and responsible management. The sooner that organizations understand this—and start doing it—the sooner they will solve their security problems. ”

False alarms and accuracy are a huge problem. So is the lack of training. I wouldn’t want to argue that one is more important than the other, because I could easily find myself arguing both sides. Both are important. Until we have a perfect security tool, meaning no false alarms and, equally important, no false negatives, training will play an integral role in application security (appsec) and software security (swsec).

Unscientific AppSec Pain Poll

Here’s another one of my completely unscientific polls – this time about AppSec. I find it interesting to know what others think about these issues – if you have something you’d love to poll the Code Curmudgeon’s readers about, let me know in the comments.

Today’s poll is about the pain points in your AppSec tools. It might be SAST, DAST, IAST or anything else. What about it makes you the most crazy?

AppSec Resources

For articles related to this post see:
  • Theoretical AppSec Vulnerabilities
  • Why We View AppSec Vulnerabilities as False

For more security info check out the security resources page; a few of these books can also help:
  • Embedded Systems Security: Practical Methods for Safe and Secure Software and Systems Development
  • Platform Embedded Security Technology Revealed: Safeguarding the Future of Computing with Intel Embedded Security and Management Engine
  • Software Test Attacks to Break Mobile and Embedded Devices (Chapman & Hall/CRC Innovations in Software Engineering and Software Development Series)

Theoretical AppSec Vulnerabilities

As you’re well aware cybersecurity and appsec incidents are a regular feature in the news. I try to avoid jumping immediately on the analysis bandwagon, preferring instead to wait for a deeper understanding of what went wrong so we can think about how to avoid it in the future.

In this case I’d like to talk about the Superfish breach that was discovered on Lenovo laptops earlier this year.

In brief, Lenovo (like many if not most hardware manufacturers) chose to install software that allows them to advertise to their customers. I don’t wish to pick on them particularly in this case, as the problem is rampant, especially in the mobile arena. Rather, let’s think about Lenovo as a cautionary tale in how things can go wrong. The Superfish software installed on their laptops allowed them to inject ads directly into the user’s stream. It did this by installing its own self-signed security certificate.

For those unfamiliar with the role of certificates in secure communication, certificates are supposed to certify that you are in fact who you say you are. This gives me confidence that when my browser says I’m buying something from Amazon, it’s really Amazon and not some funny phishing web site trying to steal from me. I say “supposed to” because you’re allowed to create a self-signed certificate, which in essence says “I am who I say I am.” If this sounds scary, then you understand it.

The problem is even broader when it’s a self-signed root certificate. Now you have the person issuing the root certificate telling you “Everyone is who I say they are.” In other words, the certificate can convince your browser that Bob’s Criminal Bank™ is really your local bank. TNW News said “There is simply no reason for Superfish — or anyone else — to install a root certificate in this manner” and I wholeheartedly agree. TNW News again: “Either Superfish’s intent in installing one was malicious or due to sloppy development.”
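
For the curious, here’s a minimal sketch, using only standard JDK APIs, of the kind of check a tool (or a suspicious user) could run against a certificate file to spot a self-signed certificate like the one Superfish installed: the issuer equals the subject, and the certificate verifies with its own public key. The class name and command-line argument are my own assumptions for illustration.

```java
import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class SelfSignedCheck {
    public static void main(String[] args) throws Exception {
        // args[0] is assumed to be the path to a DER- or PEM-encoded certificate.
        try (FileInputStream in = new FileInputStream(args[0])) {
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            X509Certificate cert = (X509Certificate) cf.generateCertificate(in);

            // Self-signed certificates name themselves as their own issuer...
            boolean sameIssuerAndSubject =
                    cert.getSubjectX500Principal().equals(cert.getIssuerX500Principal());

            // ...and are signed with their own key.
            boolean verifiesWithOwnKey;
            try {
                cert.verify(cert.getPublicKey());
                verifiesWithOwnKey = true;
            } catch (Exception e) {
                verifiesWithOwnKey = false;
            }

            System.out.println("Self-signed: " + (sameIssuerAndSubject && verifiesWithOwnKey));
        }
    }
}
```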

Reasonable people can disagree on the tradeoff between advertising and privacy and ad-supported benefits like free email. This method of advertising however is beyond the scope of any responsible behavior. It puts users at great risk without any additional benefit. What reasonable people don’t do is put others at risk for their own benefit. Breaking security by stepping into the middle of secure transactions isn’t reasonable.

What’s interesting (or disturbing depending on your point of view) is how Lenovo responded to this problem. When researchers published the problem, the CTO of Lenovo said “We’re not trying to get into an argument with the security guys. They’re dealing with theoretical concerns. We have no insight that anything nefarious has occurred.”

Obviously I can’t tell you how much of their statement was spin and how much they really believed; I hope it was mostly the former. This is a great example of how not to do cybersecurity. It goes to the core of why we have so many appsec/swsec problems today. People want to simply patch up systems after a breach has occurred, rather than build fundamentally secure software using sound principles.

CSO online describes the reality quite well: “It’s a classic example of a Man-in-the-Middle attack, one that wouldn’t be too difficult to conduct based on the design of the software and its security protocols. Worse, the risk remains even after the user uninstalls the Visual Discovery software.”

The idea that a vulnerability is merely theoretical is not only ignorant but dangerous. Software exploits occur because bad actors operate by finding unexpected loopholes in a software system. Think of it this way – if you left your door unlocked is it a security issue? Or perhaps “If an unlocked door is never entered, is it really unlocked” if you’re a philosopher. One could contend that the risk is theoretical, but most of us would say that such a statement is ridiculous. (Props to those who live in an area where door security isn’t required.)

Software vulnerabilities are exactly like unlocked doors. They are not theoretical; they are actual holes or openings in your software that can and probably will be exploited. If we want to have secure applications we must grow up and stop pretending that the vulnerabilities we’re facing aren’t real. We have to move to a proactive, preventative, engineering-based approach that treats vulnerabilities as real risks; only then can we be secure.

[Update – Jeff Williams, who I respect a lot as a long-term advocate of software security, had a comment about this. For some reason my comment system currently seems broken, so I’m posting a link to his response on LinkedIn. I do agree that tool noise or false alarm rates are problematic in security analysis tools, but I don’t agree that this lets company representatives and developers off the hook for claiming theoretical.]

[Update – I get that there are exploits that are theoretical. My point is that the label is misused by those who are trying to avoid fixing a problem or taking responsibility. For example, early Heartbleed responses were full of “it’s theoretical”, which was also true for the airline wifi attack demonstrated against United. I certainly don’t disagree that there is such a thing as a theoretical exploit, but sometimes we spend more effort trying to disprove the exploit than would be necessary to fix it.]

[Update – I wrote a follow-up discussing why people might call a vulnerability false or theoretical.]

Resources

Closing the Barn Door – Software Security

All the ways that hackers can get in
In the second part of my series on what we can do to contain and combat the recent rash of security breaches, I’d like to focus on the software development side. I’m going to lay out some of the reasons why we’ve got such vulnerable software today and what we can do about it. Part one of this series discussed things you can do personally, as a consumer, to better protect yourself.

Let’s start with some of the most common reasons why we aren’t getting secure software. Here’s the short-list in no particular order:

  • Training
  • Security mindset
  • Not required
  • Test-it-in mentality

The list is actually very intertwined, but I’ll try to separate these issues out the best I can. I’m focusing primarily on software security, rather than network or physical. They’re just as important, but we seem to be doing a better job there than in the code itself.

Training
It seems obvious that training is critical, but in the software business nothing can be taken for granted. I’ll talk more about the myth of “software engineering” in the near future, but for now just remember that software is NOT engineering, not at most organizations. Sure, there are plenty of engineers who write software, but their engineering credentials are not for software development, and somehow they leave sound engineering practices at the door when they come to work.

Developers need to be trained in security. This means they need to understand the role of prevention by avoiding unsafe constructs and practices. They need to be able to spot ways in which their code can be vulnerable. They need to be more paranoid than they currently are. They need to know what standards and tools are out there and how to make the best use of them.

Recently I was at AppSec in Denver and had a discussion with a developer at a security company about input validation. Sadly, he was arguing that certain parts of the application were safe, because he personally hadn’t thought of a way to attack them. We MUST move past this kind of thinking, and training is where it starts.

You can’t do what you don’t know how to do.

Security mindset
When developers write code, they often don’t think at all about security. In the security mindset we think “How safe is this?” and “How could this be attacked?” and “What happens when things go wrong?”

Being good at software security requires a lot of expertise. Great security comes with input from database experts, networking experts, sysadmins, developers, and every other piece of the application ecosystem. As each talks about the attack surfaces they understand, the developers gain valuable information about how to secure their code.

A great resource for learning about common weaknesses and their consequences is the CWE. Many have heard of the “CWE/SANS Top 25” coding standard, which covers the 25 most dangerous issues out of the roughly 800 currently listed. These items help us get into the security mindset because they describe weaknesses in terms of technical impact, meaning what bad things can happen if I leave this weakness in the code. Technical impact includes things like unwanted code execution, data exposure and denial-of-service.

Each CWE item lays out clearly why you need to worry about it. When someone tells me they don’t think a particular weakness in their code matters, I usually have them Google the name of the error, like “Uncaught exception”, along with “CWE”, and then go to the relevant CWE entry to show them how dangerous it can be. This is “Scared Straight” for programmers.
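
To make “technical impact” concrete, here’s a small, made-up Java example in the spirit of the uncaught-exception CWEs: the weak version lets unexpected input abort processing (a denial-of-service path), and if the raw stack trace reaches the caller it can also leak internal details; the safer version fails closed with a controlled error. The names are mine, purely for illustration.

```java
// Toy example of technical impact for an uncaught exception.
public class QuantityParser {

    // Weak: NumberFormatException escapes uncaught on input like "10 OR 1=1",
    // crashing the caller and potentially exposing a stack trace.
    static int parseQuantityWeak(String input) {
        return Integer.parseInt(input);
    }

    // Better: validate and fail closed with a controlled, generic error.
    static int parseQuantitySafe(String input) {
        try {
            int qty = Integer.parseInt(input.trim());
            if (qty < 1 || qty > 1000) {
                throw new IllegalArgumentException("quantity out of range");
            }
            return qty;
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("quantity must be a whole number");
        }
    }
}
```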

Thinking about security leads to secure code.

Not required
The lack of a security mindset comes from not having security as a serious requirement, or not having it as a requirement at all. We have to start making security part of the standard requirements for all software, and measuring it in a consistent, meaningful way.

There are those who say that security isn’t a quality issue, but they’re wrong. I see their point, that security requires specialized thinking and training, but in the end it is absolutely a quality issue. When you get hacked, the first thing in people’s minds is that you’ve got poor quality.

A great thing to do is add software security requirements to your development plan. That way everyone knows what to do, and expects that it will be scheduled properly and tested. If you’ve never done security before, add three simple requirements:

  • Secure coding standard such as CWE Top 25 or OWASP Top 10
  • Security peer code review
  • Security testing such as penetration testing

It won’t cover everything and it won’t be perfect, but it’ll get you started on a very solid foundation.

You get what you ask for. Ask for security.

Test-it-in mentality
Testing is important, in fact it’s critical. But we have learned for over 50 years that testing does not improve quality, it simply measures it. The old adage “You can’t test quality into a product” is equally true for software security, namely “You can’t test security into a product”.

When you’re trying to improve something like quality or security (remember, security is a quality issue) you have to make sure that you begin at the beginning. Quality and security must pervade the development process. It may seem old at this point, but Deming’s 14 points are still chock full of useful, effective advice. Especially point 3:

Cease dependence on inspection to achieve quality [security]. Eliminate the need for inspection on a mass basis by building quality [security] into the product in the first place.

All too often organizations are creating a security group (good idea) and only empowering them to test at the end of the cycle (bad idea). If you want your security group to be effective, they’ve got to get at the root causes behind the vulnerabilities in your software. When they find something during testing, chase it upstream, eliminate the root cause, and then eliminate all other instances of the same problem, rather than just the one you were lucky enough to find during testing.

Testing alone will not secure your software. An ounce of prevention is worth a pound of cure.

Practical suggestions

  1. Remember to focus both internally as well as externally. Many of the current breaches are a hybrid of security and physical access. This is the hacker’s holy grail.
  2. Follow basic well-known security practices. If they’re not well-known to you, then start with training.
    • Control physical access
    • Train and monitor for social engineering, because it still works way too often. Just try it on your own people using a friend and see how far she can get.
    • Never ever use default passwords. Always reset anything you buy or install. If a vendor did it for you, check their work. I know of a cable provider that uses a template based on customers’ addresses. Probably most of their customers don’t realize their network is essentially wide open.
    • Encrypt private data. These days you have to figure that data is going to get out at some point, so just encrypt anything you wouldn’t want to share. Passwords yes (or better, hash them; see the sketch after this list), but also email addresses, social security numbers, etc.
    • Monitor for suspicious traffic and data access. Many attacks you don’t hear about are stopped this way, because someone noticed something funny going on. In some of the recent breaches monitoring was reporting bad behavior for weeks or months but no one paid attention. One organization said “We sell hammers” when told about suspicious behavior.
  3. We must move to a more proactive approach. The current trend in static analysis is to find bugs, and indeed many of the leading vendors have very fancy flavor-of-the-week approaches (called signatures), which puts their software into the same old, reactive, too-late position as anti-virus. We must start building software that isn’t susceptible in the first place.
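
As referenced in the “encrypt private data” item above, here’s a minimal sketch of how stored passwords can be protected: salted, slow hashing via the JDK’s built-in PBKDF2 support rather than anything reversible. The iteration count, key length, and class name are illustrative assumptions, not a vetted recommendation.

```java
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Sketch: store passwords as salted, slow hashes, never as plain or
// reversibly encrypted text. Parameters here are for illustration only.
public class PasswordStorage {

    static String hashPassword(char[] password) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt); // unique random salt per password

        PBEKeySpec spec = new PBEKeySpec(password, salt, 210_000, 256);
        SecretKeyFactory skf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] hash = skf.generateSecret(spec).getEncoded();
        spec.clearPassword(); // drop the plaintext as soon as possible

        // Store salt and hash together; the password itself is never stored.
        return Base64.getEncoder().encodeToString(salt) + ":" +
               Base64.getEncoder().encodeToString(hash);
    }
}
```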

To be proactive, we have to train people in proper tools, processes, and techniques, then formalize the use of that training in policies that include security best practices, requirements, and testing.

In static analysis we need to supplement bug-finding with more preventative rules such as strict input validation, rather than chasing potentially tainted data. All data sources, even your own database, should be validated, because otherwise what a great way to lay an easter egg. (Remember that security-paranoid mindset from before?) Use prepared statements and strong validation and you can avoid getting yourself into the SQL Injection Hall of Shame.
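
Here’s a minimal sketch of that preventative combination (the table, column, and allowlist pattern are hypothetical): validate the input against a strict allowlist first, then pass it to a parameterized query so the database never interprets the value as SQL, no matter where it came from.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.regex.Pattern;

// Preventative approach: strict allowlist validation plus a prepared
// statement, so data can never become code.
public class OrderLookup {

    // Hypothetical format for an order id: 6-12 uppercase letters or digits.
    private static final Pattern ORDER_ID = Pattern.compile("^[A-Z0-9]{6,12}$");

    static ResultSet findOrder(Connection conn, String orderId) throws SQLException {
        if (!ORDER_ID.matcher(orderId).matches()) {
            throw new IllegalArgumentException("invalid order id");
        }
        PreparedStatement ps =
                conn.prepareStatement("SELECT * FROM orders WHERE order_id = ?");
        ps.setString(1, orderId); // bound as data, never concatenated into SQL
        return ps.executeQuery();
    }
}
```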

We need to start looking for root problems rather than exploits. Take the Heartbleed problem: despite claims to the contrary, the underlying issues were detectable by any serious static analysis tool that takes a preventative approach. What we didn’t have was a flavor-of-the-month static analysis rule that looked for that particular implementation. All we had was a root-cause, best-practice rule that wasn’t being used.

Root cause should have been enough. Weak code is weak code, whether or not we have a current exploit, which is all the “signature” approach to security provides. Finding exploits and checking their validity is not only nearly impossible from a coverage perspective (just have a talk with Richard Bender if you don’t believe me), it also takes more time than building hardened software in the first place.

That’s right, it’s both faster and easier to code to a strict, safe coding standard than it is to try to prove your code is safe, or to chase defects in a weak, unsafe application. Stop chasing bugs and work to build security in instead. Get yourself a good static analysis tool (or use the SWAMP) and a copy of the CWE and get started.
