Tag Archives: appsec

AutoSec Automotive CyberSecurity

Last week Alan Zeichick and I did a webinar for Parasoft on automotive cybersecurity. Now Alan thinks that cybersecurity is an odd term, especially as it applies to automotive, and I mostly agree with him. But appsec is also a poor fit for automotive, so maybe we should be calling it AutoSec. Feel free to chime in using the comments below or on Twitter.

I guess the point is that as cars get more complicated, gain more “smart” parts, and become more connected (the “connected car” as part of the “internet of things”), you will start to see more and more automotive security breaches. From taking over the car to stealing data to triggering airbags, we’ve already had several high-profile incidents, which you can see in my IoT Hall-of-Shame.

To help out, we’ve put together a high-level overview of a 7-point plan to get you started. In the near future we’ll dive into detail on each of these topics, including how standards can help you achieve not only quality but also safety and security, the role of black-box testing, penetration testing, and DAST, as well as how to get ahead of the curve and harden your vehicle software using static code analysis (SAST) and hybrid testing (IAST).

The webinar was recorded for your convenience, so be sure to check it out. If you have automotive software topics that are near and dear to your heart, be sure to let me know in the comments or on Twitter or Facebook.

In the meantime, for more security info check out the security resources page; a few of these books can also help:

Embedded Systems Security: Practical Methods for Safe and Secure Software and Systems Development

Platform Embedded Security Technology Revealed: Safeguarding the Future of Computing with Intel Embedded Security and Management Engine

Software Test Attacks to Break Mobile and Embedded Devices (Chapman & Hall/CRC Innovations in Software Engineering and Software Development Series)

IoT Security – A Contradiction in Terms

A collage of various devices that not only can be hacked, but already have been.

The internet of things, aka IoT, has become the internet of hacks. More and more devices are being internet enabled. While this makes many aspects of our lives easier, it opens us up to a wide range of cybersecurity problems. From direct control of devices, to loss of personal private data, to actual control of the networks and computers in our homes and offices, the IoT is creating security risks at a faster rate than it’s fixing them.

Vendors are driven to get items to market fast in order to make money. Along the way security is given short shrift, or all too often not even considered. After all, it’s only a light bulb; what’s the worst that could happen? The answer, of course, is a lot, and probably much more than you think.

Compounding this problem is the fact that consumers simply don’t like doing sysadmin work and maintenance on their hardware. It’s difficult enough to convince people to update their computers and mobile devices, and worse still are things like keeping routers up to date. Way down everyone’s list of things to do is monitoring all the smart devices in the house for CVEs (known vulnerabilities) in the National Vulnerability Database. Hardware manufacturers have to take this into account and put even more care into security for the software embedded in internet-enabled things.
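To make that maintenance burden concrete, here's a minimal Python sketch of what monitoring your own gadgets could look like: building keyword queries against the public NVD REST API (v2) and pulling CVE ids out of a response. The device names and the keyword-matching approach are illustrative assumptions on my part; robust matching would use CPE identifiers rather than free-text keywords.

```python
# Illustrative sketch, not a product: query the public NVD API (v2) by
# keyword for each device in a home inventory. The inventory names are
# made up; real matching should use CPE identifiers, not keywords.
from urllib.parse import urlencode

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def nvd_keyword_url(product: str) -> str:
    """Build a keyword-search URL for one device/product name."""
    return f"{NVD_API}?{urlencode({'keywordSearch': product})}"

def extract_cve_ids(response_json: dict) -> list:
    """Pull CVE ids out of an NVD v2 JSON response body."""
    return [v["cve"]["id"] for v in response_json.get("vulnerabilities", [])]

# The URLs you would fetch (e.g. with urllib.request) on a schedule:
inventory = ["smart thermostat", "ip camera", "smart light bulb"]
urls = [nvd_keyword_url(p) for p in inventory]
```

Even this toy version shows why nobody does it by hand: it's a recurring chore per device, which is exactly the work consumers skip and manufacturers should be doing for them.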

Just for giggles, in a scary sort of way, here’s a brief partial list of a few devices that have known hacks available for them. If this doesn’t scare you, you’re not thinking about it enough. You should be running screaming to empty your bank account, buy an old pre-’70s car, and smash your phones, thermostats, and other electronic devices.

Fitbit health bracelets,
baby monitors,
VoIP phones,
road signs,
CCTV cameras,
USB-C ports,
gas station tank gauges,
Blu-ray discs,
light bulbs,
CD players,
electricity smart meters,
SD cards,
mag stripe readers

Again, this list is only a (very) small subset of things that not only CAN be hacked but already have been. I may have to create an IoT Hall-of-Shame for this stuff to see if we can get better security going.

The scary thing is that many of these aren’t just access to the device itself, or even to data from the device (which is already a huge privacy issue), but are gateways for attacking other pieces of your network. Read more about the light bulb and Blu-ray hacks above.

Now the answer to all this isn’t easy, but I’m hoping that at least you’ll spend more time thinking about it than you have.

[Update 2015-11-24 – added link to Hall-of-Shame]
FYI – I just finally created a new Hall-of-Shame for IoT – you can view it at the IoT Hall-of-Shame.


[Update 2015-11-23 – added resources list]

Halloween Security Slashers Webinar

Halloween themed software security webinar

I’m doing a Halloween-themed Parasoft webinar this Friday on Stopping Software Security Slashers with Static Analysis. As always, it’s a free webinar and you can register here.

We like to have fun at these holiday webinars, so we’ll investigate how some security issues are similar to the famous horror-movie villains you know and love: Jason, Freddy, Leatherface, Michael, and Norman. I hope to see you there.


Stagefright, Heartbleed, and other grisly-sounding software defects are scary for good reason: they make applications vulnerable to menacing cyberattackers—no hockey mask or knife-fingered glove required. In the absence of an adequate defect prevention strategy, your application is likely to stumble as malicious (and even not so malicious) hackers bear down on vulnerabilities, potentially crashing the software or exposing sensitive data. If your software is deployed to a medical device, automotive system, or any other safety-critical application, this is only the beginning of the nightmare.

But your application deployment doesn’t have to end in gruesome horror. By implementing quality practices, such as static analysis, throughout the SDLC, you reduce the potential attack surface cyberattackers can exploit. Moreover, by automating the continuous application of defect-prevention technologies, you eliminate the possibility of defects recurring like a chainsaw-wielding maniac that won’t stay down.

In this webinar, we’ll look at why recently publicized defects are so scary and discuss how to take a proactive approach to ensuring the safety, security, and reliability of your applications. We’ll focus on how to leverage standards, such as OWASP, PCI DSS, and CWE, to evolve development policies from static analysis findings so that your application isn’t the next victim.


Why We View AppSec Vulnerabilities As False

Firefighters in the lobby, false alarm, 4am.

I’d like to say a few things as a follow-up to my article on theoretical appsec vulnerabilities last week. The article generated some interesting conversation about the idea. Jeff Williams seemed to be asking what I thought was an interesting question, namely: why do people push back on security analysis results? His take is that accuracy and false alarms are the big reason, and that tool vendors need to do better.

So let’s take a look at it. I’ve put up a new poll on this question, so feel free to pop over there and let me know what you think. I’ll write about the results in the near future. In a previous poll about static analysis results in general, the largest answer was training, followed by false positives and management.

In no particular order, here are some reasons why people justify ignoring the results from their security scanners.


Noise, Noise, Noise!

Whatever you call it, whether it’s false positives, false alarms, theoretical problems, too many messages, or anything else, noise is a big pain. Jeff says accuracy matters, and indeed it does. But in my experience developers are awfully quick to pull the “noise” trigger. It would be fun to do a poll/quiz with a couple of real code samples and see what developers think. There’s a basic application security quiz at the AppSec Notes blog, but it doesn’t expose the knowledge issues around recognizing real security vulnerabilities in actual code. In other words, would you recognize why a piece of code is dangerous if you saw it? I’d like to see a quiz that has both good and bad code examples and asks “security problem or OK?” This is trickier than it sounds at first, because you can’t just have bad code and fixed code; you need bad code and then suspicious code that would likely raise a false alarm. Now there’s a real test. If anyone knows of such a beast or wants to build one, make sure to let me know.
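To make that concrete, here's my own sketch (not from any existing quiz) of what one "security problem or OK?" item could look like in Python: two lookups that differ only in how the user's value reaches the SQL.

```python
# Quiz-item sketch: two nearly identical lookups. One is exploitable,
# the other merely *looks* suspicious at a glance.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def lookup_vulnerable(name):
    # "Security problem": the input is spliced into the SQL text, so
    # name = "' OR '1'='1" rewrites the query and returns every row.
    return conn.execute(
        "SELECT role FROM users WHERE name = '%s'" % name).fetchall()

def lookup_ok(name):
    # "OK": same shape, but the value travels as a bound parameter and
    # can never change the query structure.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

print(lookup_vulnerable("' OR '1'='1"))  # leaks the whole table
print(lookup_ok("' OR '1'='1"))          # matches nothing
```

A good quiz would mix items like these with safe-but-ugly code, since the real skill being tested is telling dangerous from merely unfamiliar.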

Noise can be made even worse if you have false objectives and metrics. For example, if you base success on “number of issues found”, you’re likely to find the greatest number of unimportant issues. There are better definitions of success discussed in the presentation AppSec Broken Window Theory, which is worth reading. One quick example is to track success by the number of “flaws not found” in new applications.

One other point on noise, particularly in the area of what people like to call false positives. I have the “Curmudgeon Law of Static Analysis Tool Usefulness”: the more clever a static analysis tool is, the more likely its output will be perceived as a false positive. This is because it’s easy to accept a result you understand, but if a tool tries to tell you something you don’t know (i.e., the most useful result), you are least likely to accept it. That’s why I keep pushing on a) better presentation of results to explain WHY a finding matters and b) better training.
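Here's a hypothetical illustration of why the clever results get dismissed. Each function below looks fine on its own; only a whole-program, taint-style analysis connects the request parameter to the shell command, so the tool's report reads like a false positive to anyone reviewing one function at a time. The function names and the scenario are invented for illustration.

```python
# Hypothetical example: neither function is obviously wrong in isolation.

def filename_from_request(params):
    # Reviewed alone, this looks harmless: it just reads a parameter.
    return params.get("filename", "")

def build_archive_command(path):
    # Reviewed alone, this also looks harmless: it formats a string.
    # But if `path` came from filename_from_request and the result is
    # executed with shell=True, "a.txt; rm -rf ~" is command injection.
    return "tar czf backup.tgz " + path

# The dangerous flow only appears when the two are composed:
cmd = build_archive_command(
    filename_from_request({"filename": "a.txt; rm -rf ~"}))
```

A tool that reports this cross-function flow is telling you something genuinely useful, and it's exactly the kind of finding a developer skimming one function will wave off as noise.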

Bad Workflow or Process

This is closely related to the noise problem. If you have a bad workflow, process, or configuration, you will find your security scanning painful. Sometimes this pain is noise; sometimes it’s extra effort, slow work, or other symptoms. For example, I’ve seen people run static analysis on legacy code where the corporate policy was “no changes unless there is a field-reported bug”. Scanning code you don’t intend to fix, or looking for items you don’t intend to fix, certainly contributes to noise, effort, and cost.

The takeaway is to make sure your tools are configured optimally and that the way they integrate into your process is not causing headaches. I could go on a long time on this topic alone, but we’ll save that for another day. How can you tell if it’s causing headaches? Ask those who are using the tools and those who are required to address their output.
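As a tiny illustration of that kind of process-side configuration, here's a sketch of scoping scanner output to the files your policy actually lets you change. The finding format here is invented for illustration, not any particular tool's schema.

```python
# Illustrative sketch: filter scanner findings down to files we are
# allowed to modify, so frozen legacy code stops generating noise.
# The finding dicts are a made-up format, not a real tool's output.

def scope_findings(findings, files_in_scope):
    """Keep only findings in files that policy allows us to change."""
    in_scope = set(files_in_scope)
    return [f for f in findings if f["file"] in in_scope]

findings = [
    {"file": "src/login.c",  "rule": "SQL injection"},
    {"file": "legacy/old.c", "rule": "buffer overflow"},  # frozen by policy
]
active = scope_findings(findings, ["src/login.c"])
```

Most scanners let you express this directly in their configuration; the point is simply that scope should mirror your change policy, not default to "everything".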

Another great read on this topic is Does Progress Come From Security Products or Process, where Gunnar Peterson says “Process engineering is deeply unsexy. It is about as far removed from the glamour, fashion show world of security conferences as you can possibly imagine, but it’s where the actual changes occur.”


Lack of Prioritization

Again, this is a category closely related to noise. Noise is a symptom of a lack of prioritization in your appsec findings. Ideally you’re storing the output from all the security tools in your arsenal in an intelligent, data-driven system that will help you determine which items are most important. No one wants to spend weeks fixing something that is unlikely to happen and whose consequences would be minimal even if it did.

AppSec prioritization must take risk into account. Will this happen? Does this happen today? How hard is it to exploit? How hard is it to prevent? Plus all the other usual risk-management questions. You know what to do: if you have too much noise from your tools, you need to put some intelligence behind them to get to what matters.
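As a sketch of what "intelligence behind it" can mean, here's a deliberately oversimplified ranking function. The scales and weights are invented for illustration; this is not the actual CWSS formula.

```python
# Toy prioritization sketch (NOT the real CWSS formula): score each
# finding by likelihood and impact, then review in descending order.

def risk_score(finding):
    # Both on invented 1-10 scales; a real framework weighs many more
    # factors (exploitability, exposure, business context, ...).
    return finding["likelihood"] * finding["impact"]

findings = [
    {"id": "xss-42",  "likelihood": 7, "impact": 6},
    {"id": "hash-9",  "likelihood": 2, "impact": 3},  # weak hash, internal only
    {"id": "sqli-1",  "likelihood": 9, "impact": 9},
]
ranked = sorted(findings, key=risk_score, reverse=True)
```

Even something this crude beats reviewing findings in the order the tool emitted them, because it forces the risk questions above to be answered per finding.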

Note that when I say prioritization I don’t necessarily mean triage. Triage implies a human-driven process (read: bottleneck) that makes painful tradeoffs in a short time span rather than an orderly, thought-out process. In the Broken Windows presentation I mentioned earlier, Erik Peterson says “Triage != Application Security Program” and I wholeheartedly agree.

This is why I’m a fan of attempts to develop comprehensive application risk frameworks. One of these is the Common Weakness Scoring System (CWSS), managed at MITRE. CWSS proposes a way to properly weight security findings in the context of your application, which should enable better automated prioritization. It goes hand-in-hand with the Common Weakness Risk Analysis Framework (CWRAF).

I know these frameworks are far from perfect; in fact they’re frequently way too complicated and painful to use. But research in this area will ensure continued improvement, and I expect it to ultimately become a critical driver in the appsec scanning space.

Test Security In

We’ve all heard the adage that you “can’t test quality into an application”. This is based on Deming’s third principle: “cease dependence on inspection to achieve quality.” Early on, many disagreed with Deming, but today it’s an accepted truth that inspection alone will not solve your problems. Yet somehow in AppSec many believe that we can “test security into an application.” This attitude is easy to recognize because it results in security processes that are completely orthogonal to development, highlighted by all security activities being post-development, QA-like activities. That method didn’t work for quality, and it won’t work for security.

In order to get ahead of software security issues, we have to start building secure software in the first place. There’s a great site sponsored by the US government called Build Security In, and it has lots of great information to get you going. Also take a look at the Building Security In Maturity Model (BSIMM). I realize that it’s easier to say “build it in” than it is to actually do it, and that building it in brings its own challenges. But as with manufacturing and quality, it’s the only way to get long-term, sustainable improvement.


Lack of Training

The SANS Institute did a survey on application security programs and practices and found:

“Although suppliers continue to improve the speed and accuracy of SAST tools and make them easier to use, developers need security training or expert help to understand what the tools are telling them, which vulnerabilities are important, why they need to be fixed and how to fix them.”

The report notes that only 26% of organizations had secure coding training that was working well or mandated. Further, it reads:

“A lack of knowledge and skills is holding back Appsec programs today, and it is preventing organizations from making real progress in Appsec in the future. The number one obstacle to success reported in this year’s survey is a shortage of skilled people, part of a bigger problem facing the IT security industry in general, as recent studies by Forrester Research and (ISC)² show.

Training and education are needed to address this skills shortage—not just training more Infosec and Appsec specialists, but training developers and managers, too. Fewer than one-quarter of respondents have training programs that are ongoing and working well, and secure coding training ranks low in the list of practices that organizations depend on in their Appsec programs today. This needs to change.”

OWASP also talks about the importance of training:

“From information security perspective, the holistic approach toward application security should include for example security training for software developers as well as security officers and managers…”

A survey in 2010 showed that security training was not being done, and I suspect little has changed today:

“Nearly 80% of personnel at government agencies and contractors said their organization does not provide sufficient training and guidance for software security application development and delivery, according to a new survey.”


No Silver Bullets

Tools alone, and tool accuracy alone, won’t fully answer the problems of AppSec. Again from the SANS survey:

“There aren’t any next generation tools or other silver bullets on the horizon that will solve the problem of secure software. Writing secure software is about fundamentals: thoughtful design, careful coding, disciplined testing and informed and responsible management. The sooner that organizations understand this—and start doing it—the sooner they will solve their security problems.”

False alarms and accuracy are a huge problem. So is the lack of training. I wouldn’t want to argue that one is more important than the other, because I could easily find myself arguing both sides. Both are important. Until we have a perfect security tool, meaning no false alarms and, equally important, no false negatives, training will play a necessary, integral role in application security (appsec) and software security (swsec).