Tag Archives: Security

Security related info and issues

Security vs Security

There is currently a national debate going on in the United States about the challenges of security vs. security. Some are calling it privacy vs. security but that’s not the real issue as I’ll get to shortly. In the wake of the San Bernardino shooting, the FBI has gone to court and demanded via the All Writs Act that Apple create a special insecure version of the operating system for the iPhone (iOS).

As with the naming of the core problem (privacy or security), we need to understand exactly what the FBI is asking for here. Many media outlets continue to report this as the FBI asking Apple to decrypt or unlock the phone. That is not what they're asking; they are asking Apple to create a special version of the software, load it onto the phone, and then they will brute-force it. In the past Apple has cooperated with the government to retrieve customer data pursuant to a warrant, but they've never fundamentally weakened iPhone security at the request of the US or any other government.

I usually try to avoid blogging about political issues as well as national security ones, because they're complicated and the state does indeed have a legitimate interest in security activities as long as they're constitutionally supported. In this particular case the press has been all over the place and frequently misreporting the basic facts, so I feel it's important to keep people informed to allow them to make better decisions. On the other hand, a lot of people are also making wild claims about what the government is actually asking for. There are some very interesting issues at play in this particular drama – let's take a look.

Articles such as the editorial today in the NYT are written by those who are either ignorant of the technical details or are willfully misleading the public. They refer to "unlocking" the iPhone and "using the front door". In both cases the phrases are designed to mislead the public by suggesting that there is no downside. A more accurate description would be that they're asking Apple to make sure the front door is easy to break into. That, of course, wouldn't sell well with the public.

As to the ramifications of the issue, The Verge noted:

The FBI has a locked phone and they want it to be unlocked. Getting there will mean building some dangerous tools and setting some troubling precedents, but that’s basically what the bureau wants.

Privacy vs Security

This issue has been framed in the media as privacy vs. security, which is misleading.

The security vs privacy debate is an interesting and worthwhile topic and I’ll certainly dive in at a future date, but this issue with the FBI and Apple isn’t that issue at all. Make no mistake, this isn’t about the privacy of the data on a particular iPhone, it’s about the security of all iPhones. The government is specifically not asking for the data, they’re asking for Apple to make the phone less secure.

John Adams, a former head of security at Twitter, noted today in the LA Times:

“They try to use the horrors of the world to erode civil liberties and privacy, but the greater good — having encryption, more privacy for more people — is always going to trump small isolated incidents.”

Encryption

Some have noted that the FBI doesn't want Apple's encryption keys, which is true. What they want is for Apple to make it easier to brute-force the login.

“In practice, encryption isn’t usually defeated by cryptographic attacks anyway. Instead, it’s defeated by attacking something around the encryption: taking advantage of humans’ preference for picking bad passwords, tricking people into entering their passwords and then stealing them, that kind of thing. Accordingly, the FBI is asking for Apple’s assistance with the scheme’s weak spot—not the encryption itself but Apple-coded limits to the PIN input system.”

In other words, let's take the weakest link in phone security and, rather than make it stronger, make it weaker. Again, this is my point that this IS about security, not privacy. When someone gets into your phone, they get everything – passwords, bank credentials, personal private information that can be leveraged, and so on. Are we seriously arguing that that's OK? Consider how many smartphones and identities are stolen every day.

  • Chances of dying in a road accident: 1 in 25,000,000.
  • Chances of dying in a terrorist attack worldwide: 1 in 9,300,000 (even lower if you live in the US).
  • Chances of having your phone stolen: about 1 in 42.
  • Chances of being a victim of some form of identity theft: about 1 in 14.

So if you’re worried about something that will actually happen, you should hope that Apple comes out on top in this case. Identity theft affects about 15 million Americans each year and smartphone theft affects about 3 million Americans each year.

Backdoors

Some have suggested that what the FBI has asked for is a backdoor. That is an interesting topic that we could spend hours on, just trying to define what a backdoor is. But for the moment, let's just say that whether or not it's a backdoor, it certainly weakens the security of the device, specifically by making it vulnerable to brute-force attacks.
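
To see why the retry limits matter more than the PIN itself, here's some back-of-the-envelope arithmetic. This is only a rough sketch: the 80 ms per guess figure is an assumption for illustration, not a measured value for any particular device.

    // Rough arithmetic: why PIN retry limits matter more than the PIN itself.
    // The 80 ms-per-guess figure is an illustrative assumption, not a measurement.
    public class PinBruteForce {
        public static void main(String[] args) {
            double secondsPerGuess = 0.08;   // assumed per-attempt cost with no added delays
            long fourDigitPins = 10_000;     // 0000..9999
            long sixDigitPins = 1_000_000;   // 000000..999999

            System.out.printf("4-digit PIN, no lockout: ~%.0f minutes%n",
                    fourDigitPins * secondsPerGuess / 60);      // roughly 13 minutes
            System.out.printf("6-digit PIN, no lockout: ~%.1f days%n",
                    sixDigitPins * secondsPerGuess / 86_400);   // roughly 0.9 days

            // With escalating delays and an erase-after-10-failures policy, an attacker
            // gets at most 10 guesses, i.e. about a 0.1% chance against a random 4-digit PIN.
        }
    }

Strip out the retry limits and the escalating delays, as the order asks, and the PIN stops being meaningful protection.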

There’s another way

Let's begin by understanding that this is NOT about this particular phone or incident. The government has chewed the data to death on this one and doesn't really expect to find anything on this phone. If it were just about this phone, there are other ways.

First of all, this phone happened not to be a personal phone, but one owned by Farook's employer. Ironically, the employer is a government agency. The agency had mobile device management (MDM) software available, but it was either not installed on this phone or not configured correctly. Had it been in use, this whole issue would be moot. The MDM software would have allowed the county to access the phone and mandate a specific configuration that would meet their security needs.

Next came another misstep – sometime in the first 24 hours after the incident, the county decided to change the iCloud password associated with the phone. Had this not been done, investigators could have taken the phone to a network it was already configured for, such as the home or office, and tried to trigger an iCloud auto-backup.

Using law enforcement mistakes as a reason to weaken phone security is a poor argument. The better one would be to make sure that law enforcement knows how to deal with phones, when to get manufacturers involved, and so on. This request to rewrite the firmware is so extreme that, as one report noted:

The Apple executive also made a point of saying that no other government—not even China or Russia—has ever asked what American prosecutors have asked the company to do this week.

Offers and advice on accessing the phone have come from many directions. For example, noted cybersecurity figure John McAfee has offered to break in for free. He believes he can accomplish it within a couple of weeks without fundamentally weakening the smartphone infrastructure.

Snowden came up with a novel suggestion called de-capping, which uses acid and lasers to read the chip directly.

These offers have not been accepted because the FBI isn’t all that interested in what’s on this phone. They believe the under-informed will be on their side in this case so they can set a precedent. The government claims this won’t set a precedent but of course it will. Already people are saying “Apple has cooperated before, why not now?”. The whole reason the government is going after this phone IS to create a precedent – they don’t really expect to find anything useful on the phone.

Others have noted that if weaknesses are built into a device at the government's request, similar requests will of course follow, both from the US and from other governments, including those we may find objectionable. In fact, as I wrote this it came out that the Justice Department is already pursuing similar orders against Apple for 12 other phones. At this point we can pretty much put the "no precedent" argument to bed.

There are those arguing this would help prevent an attack – note that this isn't the position of the FBI, but of some in the Senate who are trying to kill encryption. This case is specifically about having access to the phone after you have a warrant – it would not have prevented this attack.

It's not about finding out who committed the attack; we know that as well. It's not about finding out who they communicated with – that comes from email and phone logs and doesn't require the phone. Really it's just a red herring to allow the anti-encryption crowd to further their agenda. Under the guise of making us safer they'd make us all less safe, since real statistics show that the chances of suffering identity theft or phone theft are MUCH MUCH higher than the chances of being harmed in any terror attack, as I've noted above.

Legal and International Impact

If we choose to go down the path that the FBI is demanding, we need to think about the ramifications of this approach. I'll break down personal security and privacy vs. police interests in the near future. For the moment we can set them aside and focus on what the impact could be.

I have to ask why the government is prescribing the "how" rather than the "what". By this I mean that if they want the phone data, then the request should ask for it. Of course, they could go to others like McAfee and at least try to get the phone opened. But it isn't about the phone, it's about the precedent. That's why they've prescribed how they want Apple to respond. The bigger legal question will be whether the government actually has the right to force a software vendor to write a specific piece of software.

If the government succeeds in their request, what will it mean overall? Can the government go after other security features, eventually asking Apple to backdoor their encryption methods as well? Why just Apple, why not all other smartphone vendors as well? And your desktop computer too.

And if the US government can force this, what about foreign governments? Will our security policies end up being defined by oppressive regimes? Some say this is really a human rights issue, given how oppressive regimes use surveillance against their own people.

I know we all hate hearing the slippery slope argument, but in this case there is actually very little upside in the FBI demand and a whole lot of downside.

An Unreported Security Vulnerability

There's one more issue here that scares me as a security professional: the fact that a new version of the OS can be loaded onto a locked phone at all. This is certainly a problematic issue from a device security perspective. I wonder if Apple will be plugging this hole in the near future?

There is some security around this method because the new software needs to be digitally signed by Apple, but this is certainly an attack surface that bad actors will be taking a more in-depth look at. What if this build of iOS gets out in the wild somehow? Why wouldn't people try to steal it? Can people figure out how to push their own OTA update onto a device? How hard is it to bypass the Apple signature?
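
For context, here's a minimal sketch of how signed-update verification generally works. This is not Apple's actual code or process; the class and method names are mine, and it assumes the vendor's public key is already baked into the device.

    import java.security.PublicKey;
    import java.security.Signature;

    // A minimal, hypothetical sketch of vendor-signed update verification.
    public class UpdateVerifier {
        // Returns true only if the update image was signed with the vendor's private key.
        // A device that can be talked into skipping this check (or a leaked signing key)
        // turns the update path into exactly the attack surface discussed above.
        public static boolean isAuthentic(byte[] updateImage, byte[] signatureBytes,
                                          PublicKey vendorPublicKey) throws Exception {
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(vendorPublicKey);
            verifier.update(updateImage);
            return verifier.verify(signatureBytes);
        }
    }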

I expect future versions of iOS will take into account everything learned during this case. As always, we can expect jailbreaks to continue to get more difficult as Apple tightens their security.

What's the answer?

Again, I need to reiterate that I recognize the legitimate need of law enforcement to investigate crimes. But there is also a legitimate interest for the public in protecting their information, finances and property. We have to ask ourselves whether making everyone's phone less secure is really the best way to achieve greater security.

[Update 2016-02-24 16:45 – More info available since this morning. There is an article today in the New York Times that discusses how Apple is apparently planning to fix the update security loophole as I mentioned above. Not surprising, since it’s definitely something a bad actor could also use.

There is also an article at TechDirt that explains how demanding that Apple circumvent security technology may actually be against the Communications Assistance for Law Enforcement Act. So stay tuned on that front as well. ]

[Update 2016-02-26 13:15 – San Bernardino Sheriff Jarron Burguan had an interview with NPR where he admitted there is probably nothing useful on the phone. He said:

I’ll be honest with you, I think that there is a reasonably good chance that there is nothing of any value on the phone

which pretty much shows you what I was saying – it’s not about the phone.]

[Update 2016-02-26 15:30 – Bloomberg News just reported on a secret government memo that details how the government is trying to find ways around device encryption – despite wide reports that they're just interested in this one phone and not in setting a precedent.]


Closing the Barn Door – Software Security

All the ways that hackers can get in
In the second part of my series on what we can do to contain and combat the recent rash of security breaches, I'd like to focus on the software development side. I'm going to lay out some of the reasons why we've got such vulnerable software today and what we can do about it. Part one of this series discussed things you can do personally as a consumer to better protect yourself.

Let’s start with some of the most common reasons why we aren’t getting secure software. Here’s the short-list in no particular order:

  • Training
  • Security mindset
  • Not required
  • Test-it-in mentality

The list is actually very intertwined, but I'll try to separate these issues out the best I can. I'm focusing primarily on software security, rather than network or physical security. They're just as important, but we seem to be doing a better job there than in the code itself.

Training
It seems obvious that training is critical, but in the software business nothing can be taken for granted. I’ll talk more about the myth of “software engineering” in the near future, but for now just remember that software is NOT engineering, not at most organizations. Sure, there are plenty of engineers who write software, but their engineering credentials are not for software development, and somehow they leave sound engineering practices at the door when they come to work.

Developers need to be trained in security. This means they need to understand the role of prevention by avoiding unsafe constructs and practices. They need to be able to spot ways in which their code can be vulnerable. They need to be more paranoid than they currently are. They need to know what standards and tools are out there and how to make the best use of them.

Recently I was at AppSec in Denver and had a discussion with a developer at a security company about input validation. Sadly, he was arguing that certain parts of the application were safe, because he personally hadn’t thought of a way to attack them. We MUST move past this kind of thinking, and training is where it starts.

You can’t do what you don’t know how to do.

Security mindset
When developers write code, they often don’t think at all about security. In the security mindset we think “How safe is this?” and “How could this be attacked?” and “What happens when things go wrong?”

Being good at software security requires a lot of expertise. Great security comes with input from database experts, networking experts, sysadmins, developers, and every other piece of the application ecosystem. As each talks about the attack surfaces they understand, the developers gain valuable information about how to secure their code.

A great resource for learning about common weaknesses and their consequences is the Common Weakness Enumeration (CWE). Many have heard of the "CWE/SANS Top 25" coding standard, which covers the 25 most dangerous issues out of the roughly 800 currently listed. These items help us get into the security mindset because they describe weaknesses in terms of technical impact, meaning what bad things can happen if I leave this weakness in the code. Technical impact includes things like unwanted code execution, data exposure and denial of service.

Each CWE item lays out clearly why you need to worry about it. When someone tells me they don't think a particular weakness in their code matters, I usually have them Google the name of the error, like "Uncaught exception", along with "CWE" and then go to the relevant CWE entry to show them how dangerous it can be. This is "Scared Straight" for programmers.

Thinking about security leads to secure code.

Not required
The lack of a security mindset comes from not treating security as a serious requirement, or not treating it as a requirement at all. We have to start making security part of the standard requirements for all software, and measuring it in a consistent, meaningful way.

There are those who say that security isn't a quality issue, but they're wrong. I see their point, that security requires specialized thinking and training, but in the end it is absolutely a quality issue. When you get hacked, the first thing in people's minds is that you've got poor quality.

A great thing to do is add software security requirements to your development plan. That way everyone knows what to do, and expects that it will be scheduled properly and tested. If you've never done security before, add three simple requirements:

  • Secure coding standard such as CWE Top 25 or OWASP Top 10
  • Security peer code review
  • Security testing such as penetration testing

It won’t cover everything and it won’t be perfect, but it’ll get you started on a very solid foundation.

You get what you ask for. Ask for security.

Test-it-in mentality
Testing is important; in fact it's critical. But we have learned over more than 50 years that testing does not improve quality, it simply measures it. The old adage "You can't test quality into a product" is equally true for software security, namely "You can't test security into a product".

When you're trying to improve something like quality or security (remember, security is a quality issue) you have to make sure that you begin at the beginning. Quality and security must pervade the development process. It may seem old at this point, but Deming's 14 points are still chock-full of useful, effective advice. Especially point 3:

Cease dependence on inspection to achieve quality [security]. Eliminate the need for inspection on a mass basis by building quality [security] into the product in the first place.

All too often organizations are creating a security group (good idea) and only empowering them to test at the end of the cycle (bad idea). If you want your security group to be effective, they’ve got to get at the root causes behind the vulnerabilities in your software. When they find something during testing, chase it upstream, eliminate the root cause, and then eliminate all other instances of the same problem, rather than just the one you were lucky enough to find during testing.

Testing alone will not secure your software. An ounce of prevention is worth a pound of cure.

Practical suggestions

  1. Remember to focus both internally and externally. Many of the current breaches are a hybrid of cyber attack and physical access. This is the hacker’s holy grail.
  2. Follow basic well-known security practices. If they’re not well-known to you, then start with training.
    • Control physical access
    • Train and monitor for social engineering, because it still works way too often. Just try it on your own people using a friend and see how far she can get.
    • Never ever use default passwords. Always reset anything you buy or install. If a vendor did it for you, check their work. I know of a cable provider that generates default passwords from a template based on customers’ addresses. Probably most of their customers don’t realize their network is essentially wide-open.
    • Encrypt private data. These days you have to figure that data is going to get out at some point, so just encrypt anything you wouldn’t want to share. Passwords yes, but also email addresses, social security numbers, etc. (A minimal sketch of encrypting a sensitive field follows this list.)
    • Monitor for suspicious traffic and data access. Many attacks you don’t hear about are stopped this way, because someone noticed something funny going on. In some of the recent breaches monitoring was reporting bad behavior for weeks or months but no one paid attention. One organization said “We sell hammers” when told about suspicious behavior.
  3. We must move to a more proactive approach. The current trend in static analysis is to find bugs, and indeed many of the leading vendors have very fancy flavor-of-the-week approaches (called signatures), which put their software into the same old, reactive, too-late position as anti-virus. We must start building software that isn’t susceptible.
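
Picking up the "encrypt private data" item above, here's a minimal sketch of encrypting a sensitive field before storing it, using authenticated encryption (AES-GCM). It assumes the key lives in a real key store rather than in the code; the field and values are made up for illustration.

    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;

    // A minimal sketch of field-level encryption with AES-GCM; not a full key-management story.
    public class FieldEncryption {
        public static void main(String[] args) throws Exception {
            // In real life the key comes from a key store or HSM, never from the source code.
            SecretKey key = KeyGenerator.getInstance("AES").generateKey();

            byte[] iv = new byte[12];                 // fresh, unique IV for every encryption
            new SecureRandom().nextBytes(iv);

            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ciphertext = cipher.doFinal("123-45-6789".getBytes(StandardCharsets.UTF_8));

            // Store the IV alongside the ciphertext; never store or log the plaintext value.
            System.out.println("ciphertext length: " + ciphertext.length + " bytes");
        }
    }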

To be proactive, we have to train people in proper tools, processes, and techniques. Then formalize the use of that training in policies. Policies that include security best practices, requirements, and testing.

In static analysis we need to supplement bug-finding with more preventative rules, such as strict input validation, rather than only chasing potentially tainted data. All data sources, even your own database, should be validated, because otherwise that's a great way for someone to plant an easter egg. (Remember that security-paranoid mindset from before?) Use prepared statements and strong validation and you can avoid getting yourself into the SQL Injection Hall of Shame.
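
As a concrete illustration of both habits, here's a minimal sketch in Java: allow-list validation of the input even though it nominally comes from a trusted source, plus a prepared statement so the data can never become part of the SQL. Table and column names are made up for the example.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // A minimal sketch: allow-list input validation plus a parameterized query.
    public class AccountLookup {
        public static long findBalance(Connection db, String accountId) throws SQLException {
            // Validate even "trusted" input: accept only the exact shape we expect.
            if (accountId == null || !accountId.matches("[0-9]{1,12}")) {
                throw new IllegalArgumentException("invalid account id");
            }
            // Prepared statement: the driver keeps the data out of the SQL grammar,
            // so input like "' OR '1'='1" arrives as a literal value, not as SQL.
            String sql = "SELECT balance FROM accounts WHERE account_id = ?";
            try (PreparedStatement stmt = db.prepareStatement(sql)) {
                stmt.setString(1, accountId);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getLong("balance") : 0L;
                }
            }
        }
    }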

We need to start looking for root problems rather than exploits. Take the Heartbleed problem: despite claims to the contrary, the underlying issues were detectable by any serious static analysis tool that takes a preventative approach. What we didn't have was a flavor-of-the-month static analysis rule that looked for that particular implementation. All we had was a root-cause, best-practice rule that wasn't being used.

Root cause should have been enough. Weak code is weak code, whether or not there's a current exploit, which is all that the "signature" approach to security provides. Finding exploits and checking their validity is nearly impossible from a coverage perspective (just have a talk with Richard Bender if you don't believe me) and certainly takes more time than building hardened software in the first place.
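
Heartbleed itself was C code in OpenSSL, but the root-cause rule it violated is language-agnostic: never trust a length field that arrives alongside the data it describes. Here's a minimal sketch of that rule; the record layout is invented for illustration.

    import java.nio.ByteBuffer;

    // A minimal sketch of the root-cause rule behind Heartbleed: validate any
    // attacker-supplied length against what was actually received before using it.
    public class EchoRecord {
        public static byte[] extractPayload(ByteBuffer record) {
            int claimedLength = record.getShort() & 0xFFFF;   // length field controlled by the sender

            // The missing check, expressed generically: the claimed length
            // must not exceed the bytes that actually arrived.
            if (claimedLength > record.remaining()) {
                throw new IllegalArgumentException("length field exceeds actual payload");
            }
            byte[] payload = new byte[claimedLength];
            record.get(payload);                              // copy only validated bytes
            return payload;
        }
    }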

That's right, it's both faster and easier to just code to a strict, safe coding standard than it is to try to prove that your code is safe, or chase defects from a weak, unsafe application. Stop chasing bugs and work to build security in instead. Get yourself a good static analysis tool (or use the SWAMP) and a copy of CWE and get started.



Better Software Conference West

Better Software Conference West 2014
I’m heading to the Better Software Conference West in Las Vegas tomorrow. If you want to make your software better, this is the place to do it. Just bring your software along, and when you come home it’ll be all better… seriously!

Well, maybe not. But what you CAN do is come and learn lots of great things that will help you build better software. There are some great sessions planned as always, and an expo floor to talk to the companies that make the tools you need.

I have a session on Thursday at 2:15pm titled “Hardening Your Code in a Post-Heartbleed World: What Role Does Static Analysis Play?” where I’ll be talking about how to make the most out of static code analysis. If you want to move from a reactive position in cybersecurity to a proactive one, come learn how you can harden your code and prevent problems in the first place.

Hope to see you there.

SDLC Acceleration Summit

As I’m sure you’ve heard by now, several software companies including Parasoft are sponsoring an SDLC Acceleration Summit in San Francisco on May 13th, 2014. It’s a great place to go if you would like to figure out how to produce better software more quickly. There are sessions ranging from tools to processes to infrastructure and security. Not to mention a great selection of top-notch speakers.

And if you’re a Code Curmudgeon fan, you can get a 50% discount. All you have to do is go to the registration page and put CodeCurmudgeonVIP in the box for promotion code.

For more about it, watch the video below. Hope to see you there.