
Get Your Free WiFi From Elvis

[Photo: a man dressed like Elvis in front of the Welcome to Las Vegas sign. Want some free WiFi?]
Ah, the lure of free open WiFi! Who can resist? You avoid a flaky signal from your smartphone, get faster access, and dodge data usage caps. But there is no such thing as a free lunch. When Elvis offers you free WiFi, it's best to think twice: free WiFi always comes with a cost, usually your privacy and security.

It might be a coffee shop that expects you to buy coffee, or a hotel that wants you to stay there instead of down the street. Or maybe the hotel has decided it can also sell advertising to you while you're using the "free" WiFi to make a little extra money. As with the Elvis impersonator, you should know what you're really getting into. If you think you're getting your picture taken with the real Elvis, then perhaps you deserve what you get, especially when the provider plays the huckster and offers something for "free" (as in puppy) while the hidden cost is your privacy.

With open or free WiFi the risks are always there, in the form of unknown others on the network. I have found as I travel that hotel WiFi, for example, is a constant source of machine probes and attacks. Luckily my computer is well configured and I see the attempts. Even so, I take the paranoid view and had avoided all free WiFi for over a year. Until last week, that is.

I was at the IQPC-sponsored ISO 26262 Functional Safety conference in Berlin, speaking on automotive cybersecurity. The WiFi performance in Berlin was no worse than elsewhere, both at my hotel and at the conference hotel. By which I mean it was aggressively mediocre, at about 1.5 Mbps. That would be reasonable performance for a 3G cellular network, but it seems slow for WiFi. The reason I was using it at all is that the cellular speed I get when roaming around the world is even slower, about 128 kbps. So here I am making poor security decisions based on slow network performance. There's a lesson to be learned there, and perhaps a whole article about how we make poor security decisions.

And this is where this hotel differed from others, at least from hotels in the USA. The attacks didn't start immediately, as I've seen at others, for example the Hilton in Long Beach, CA. (Yes, I'm purposely shaming their insecure public WiFi.) But after working for a few minutes, several of my web connections started failing when they refreshed. I got complaints about needing to log back in to Outlook, Google, and other apps that require authentication.

So I started poking around by clicking the little lock icon in the URL bar, and as it turns out the connections were failing because the HTTPS certificate was suspicious.

As you do in these situations, I took a look at the certificate by pressing the "show certificate" button. In this case the certificate was supposed to be for Office 365 (office.com), but instead it was signed by… wait for it… the hotel!!! Essentially they were doing a man-in-the-middle (MITM) attack. This means they were pretending to be Microsoft by self-signing a root certificate and saying "Microsoft is who we say it is".
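If you want to check for this kind of tampering yourself, here's a minimal sketch in Python using only the standard library; it connects to a server and reports who issued its certificate (office.com is just an example hostname). On an untampered connection the issuer should be a well-known certificate authority, and on a network doing this kind of interception the handshake will typically fail verification outright.

```python
import socket
import ssl

def show_issuer(hostname: str, port: int = 443) -> None:
    """Connect over TLS and report who signed the server's certificate."""
    context = ssl.create_default_context()  # validates against the system CA store
    try:
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
                issuer = dict(pair[0] for pair in cert.get("issuer", ()))
                print(f"{hostname}: issued by {issuer.get('organizationName', issuer)}")
    except ssl.SSLCertVerificationError as err:
        # A self-signed or substituted certificate (like the hotel's) lands here.
        print(f"{hostname}: certificate did NOT verify ({err.verify_message})")

show_issuer("office.com")
```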


Probably this was for some silly injection of advertising or some other annoying but not necessarily evil purpose. Remember Lenovo doing this on their computers recently? In that case it was widely publicized and got a cute media name: "Superfish".

For Superfish the purpose was to put ads into your browser. Lenovo pre-installed it on a bunch of their computers, presumably for some additional revenue. The problem is that once you break down the certificate trust chain with this kind of attack, you leave the user at great risk. Someone can steal their credentials and really spy on any supposedly secure communication they have. This is to say nothing of having extra ads put onto your computer.

For the record, self-signing root certificates is only acceptable in a development or testing situation. Putting untrusted certificates in the wild is dangerous since no one can rely on them. Worse yet is pretending to be a certificate authority and jumping in the middle of a transaction or communication that the users think is secure. Not only is this unethical, but it really should be illegal.

Lesson learned again… Don’t use free WiFi and always pay attention to your URL lock icon.

Security vs Security

There is currently a national debate going on in the United States about the challenges of security vs. security. Some are calling it privacy vs. security but that’s not the real issue as I’ll get to shortly. In the wake of the San Bernardino shooting, the FBI has gone to court and demanded via the All Writs Act that Apple create a special insecure version of the operating system for the iPhone (iOS).

As with the naming of the core problem (privacy vs. security), we need to understand exactly what the FBI is asking for here. Many media outlets continue to report this as the FBI asking Apple to decrypt or unlock the phone. That is not what they're asking: they are asking Apple to create a special version of the software, load it onto the phone, and then the FBI will brute-force it. In the past Apple has cooperated with the government to retrieve customer data pursuant to a warrant, but they've never fundamentally weakened iPhone security at the request of the US or any other government.

I usually try to avoid blogging about political issues as well as national security ones, because they're complicated and the state does indeed have a legitimate interest in security activities as long as they're constitutionally supported. In this particular case the press has been all over the place and frequently misreporting the basic facts, so I feel it's important to keep people informed to allow them to make better decisions. On the other hand, a lot of people are also making wild claims about what the government is actually asking for. There are some very interesting issues at play in this particular drama – let's take a look.

Articles such as the editorial today in NYT are written by those who are either ignorant of the technical details or are willfully misleading the public. They refer to “unlocking” the iPhone and “using the front door”. In both cases the phrases are designed to mislead the public by suggesting that there is no downside. A more accurate description would be that they’re asking Apple to make sure the front door is easy to break into. This of course wouldn’t sell well with the public.

As to the ramifications of the issue, The Verge noted:

The FBI has a locked phone and they want it to be unlocked. Getting there will mean building some dangerous tools and setting some troubling precedents, but that’s basically what the bureau wants.

Privacy vs Security

This issue has been framed in the media as privacy vs. security, which is misleading.

The security vs. privacy debate is an interesting and worthwhile topic, and I'll certainly dive into it at a future date, but this issue with the FBI and Apple isn't that issue at all. Make no mistake, this isn't about the privacy of the data on a particular iPhone, it's about the security of all iPhones. The government is specifically not asking for the data; they're asking Apple to make the phone less secure.

John Adams, a former head of security at Twitter, noted today in the LA Times:

“They try to use the horrors of the world to erode civil liberties and privacy, but the greater good — having encryption, more privacy for more people — is always going to trump small isolated incidents.”

Encryption

Some have noted that the FBI doesn't want Apple's encryption keys, which is true. What they want is for Apple to make it easier to brute-force the login.

“In practice, encryption isn’t usually defeated by cryptographic attacks anyway. Instead, it’s defeated by attacking something around the encryption: taking advantage of humans’ preference for picking bad passwords, tricking people into entering their passwords and then stealing them, that kind of thing. Accordingly, the FBI is asking for Apple’s assistance with the scheme’s weak spot—not the encryption itself but Apple-coded limits to the PIN input system.”

In other words, let's take the weakest link in phone security and, rather than make it stronger, make it weaker. Again, this is my point: this IS about security, not privacy. When someone gets into your phone, they get everything – passwords, bank credentials, personal private information that can be leveraged, and so on. Are we seriously arguing that that's OK? Consider how many smartphones and identities are stolen every day.
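To see why those PIN-entry limits are the whole ballgame, here's a rough back-of-the-envelope sketch. The 80 ms per guess figure is the approximate per-attempt key-derivation cost Apple has described for its hardware; treat it as an assumption for this calculation.

```python
# Rough arithmetic: once the software-imposed delays and the 10-attempt erase
# are removed (which is what the FBI asked for), a passcode search is limited
# only by the ~80 ms hardware cost per guess (assumed figure).
SECONDS_PER_GUESS = 0.08

for digits in (4, 6):
    keyspace = 10 ** digits
    worst_case_hours = keyspace * SECONDS_PER_GUESS / 3600
    print(f"{digits}-digit PIN: {keyspace:,} combinations, "
          f"about {worst_case_hours:.1f} hours to exhaust")

# With escalating delays and the optional erase-after-10-failures left intact,
# the same search is effectively impossible. That's the protection at stake.
```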

  • Chances of dying in a road accident: 1 in 25,000,000.
  • Chances of dying in a terrorist attack worldwide: 1 in 9,300,000. If you live in the US the chances are even lower.
  • Chances of having your phone stolen: about 1 in 42.
  • Chances of being a victim of some form of identity theft: about 1 in 14.

So if you’re worried about something that will actually happen, you should hope that Apple comes out on top in this case. Identity theft affects about 15 million Americans each year and smartphone theft affects about 3 million Americans each year.
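Putting those odds side by side (the figures above are ballpark numbers, not actuarial ones, but the gap is so wide the conclusion doesn't change):

```python
# Compare the rough odds cited above.
odds = {
    "dying in a terrorist attack (worldwide)": 9_300_000,
    "having your phone stolen": 42,
    "being a victim of identity theft": 14,
}

terror = odds["dying in a terrorist attack (worldwide)"]
for event, one_in in odds.items():
    print(f"{event}: 1 in {one_in:,}, "
          f"roughly {terror / one_in:,.0f}x as likely as a terrorist attack")
```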

Backdoors

Some have suggested they’ve asked for a backdoor. That is an interesting topic that we could spend hours on, just trying to define what a backdoor is. But for the moment, let’s just say that whether or not it’s a backdoor, it certainly weakens the security of the device, specifically to make it vulnerable to brute-force attacks.

There’s another way

Let's begin by understanding that this is NOT about this particular phone or incident. The government has chewed the data to death on this one and doesn't really expect to find anything on this phone. If it were just about this phone, there would be other ways.

First of all, this phone happened not to be a personal phone, but one owned by Farook’s employer. Ironically, the employer is a government agency. This agency had mobile device management (MDM) software but it was either not installed or not configured correctly. Had it been in use this whole issue would be moot. The MDM software would have allowed the county to access the phone and mandate a specific configuration that would meet their security needs.

Next, another misstep: sometime in the first 24 hours after the incident, the county decided to change the iCloud password associated with the phone. Had this not been done, they could have taken the phone to a network it had previously been configured for, such as the home or office network, and tried to trigger an iCloud auto-backup.

Using law enforcement mistakes as a reason to weaken phone security is a poor argument. The better one would be to make sure that law enforcement knows how to deal with phones, when to get manufacturers involved, and so on. This request to rewrite the firmware is so extreme that, as one report noted:

The Apple executive also made a point of saying that no other government—not even China or Russia—has ever asked what American prosecutors have asked the company to do this week.

Offers and advice on accessing the phone have come from many directions. For example, noted cybersecurity figure John McAfee has offered to break in for free. He believes he can accomplish it within a couple of weeks without fundamentally weakening the smartphone infrastructure.

Snowden pointed to a technique called de-capping, which uses acid and lasers to read the chip directly.

These offers have not been accepted because the FBI isn’t all that interested in what’s on this phone. They believe the under-informed will be on their side in this case so they can set a precedent. The government claims this won’t set a precedent but of course it will. Already people are saying “Apple has cooperated before, why not now?”. The whole reason the government is going after this phone IS to create a precedent – they don’t really expect to find anything useful on the phone.

Others have noted the ramifications: weaknesses specifically built into a device at the government's request will of course be requested again in the future, both by the US and by other governments, including some we may find objectionable. In fact, as I wrote this, it came out that the Justice Department is already pursuing similar requests to Apple for 12 other phones. At this point we can pretty much put the "no precedent" argument to bed.

There are those arguing this would help prevent an attack. Note that this isn't the position of the FBI, but of some in the Senate who are trying to kill encryption. This case is specifically about having access to the phone after you have a warrant; it would not have prevented this attack.

It's not about finding out who committed the attack; we already know that. It's not about finding out who they communicated with; that comes from email and phone logs and doesn't require the phone. Really it's just a red herring to allow the anti-encryption crowd to further their agenda. Under the guise of making us safer, they'd make us all less safe, since real statistics show that the chances of identity theft or a stolen phone are MUCH, MUCH higher than the chances of being saved from any terror attack, as I've noted above.

Legal and International Impact

If we choose to go down the path that the FBI is demanding, we need to think about the ramifications of this approach. I'll break down personal security and privacy vs. police interests in the near future. For the moment we can set them aside and focus on what the impact could be.

I have to ask why the government is prescribing the "how" rather than the "what". By this I mean that if they want the phone data, then the request should ask for it. Of course, they could go to others like McAfee and at least try to get the phone opened. But it isn't about the phone, it's about the precedent. That's why they've prescribed how they want Apple to respond. The bigger legal question will be whether the government actually has the right to force a software vendor to write a specific piece of software.

If the government succeeds in their request, what will it mean overall? Can the government go after other security features, eventually asking Apple to backdoor their encryption methods as well? Why just Apple, why not all other smartphone vendors as well? And your desktop computer too.

And if the US government can force this, what about foreign governments? Will our security policies end up being defined by oppressive regimes? Some argue this is ultimately a human rights issue, given how oppressive regimes conduct surveillance.

I know we all hate hearing the slippery slope argument, but in this case there is actually very little upside in the FBI demand and a whole lot of downside.

An Unreported Security Vulnerability

There's one more issue here that scares me as a security professional: the ability to load a new version of the OS onto a locked phone. This is certainly a problematic issue from a device security perspective. I wonder if Apple will be plugging this hole in the near future?

There is some security around this method because the new software needs to be digitally signed by Apple, but this is certainly an attack surface that bad actors will be taking a more in-depth look at. What if this build of iOS gets out into the wild somehow? Why wouldn't people try to steal it? Will people try to figure out how to push their own OTA update onto a device? How hard is it to bypass the Apple signature?
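For readers less familiar with code signing, here is a minimal conceptual sketch of what that signature check does, written in Python with the third-party cryptography package and an Ed25519 key. Apple's actual signing scheme, formats, and keys are different and not public at this level of detail, so treat this purely as an illustration of the idea.

```python
# Conceptual sketch of signed-firmware verification -- NOT Apple's actual scheme.
# Requires the third-party "cryptography" package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The vendor keeps the private key; only the public key is baked into devices.
vendor_key = Ed25519PrivateKey.generate()
device_trusted_key = vendor_key.public_key()

firmware_image = b"...hypothetical OS build..."
signature = vendor_key.sign(firmware_image)  # signing happens at the vendor

def device_will_install(image: bytes, sig: bytes) -> bool:
    """A device only installs images that verify against its baked-in key."""
    try:
        device_trusted_key.verify(sig, image)
        return True
    except InvalidSignature:
        return False

print(device_will_install(firmware_image, signature))                 # True
print(device_will_install(firmware_image + b" tampered", signature))  # False
```

The signature proves the image came from the vendor and wasn't modified in transit; it says nothing about whether the vendor was compelled to sign a deliberately weakened build, which is exactly the scenario in this case.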

Future versions of iOS will almost certainly take into account everything learned during this case. As always, we can expect jailbreaks to continue to get more difficult as Apple tightens its security.

What’s the answer

Again, I need to reiterate that I recognize the legitimate need of law enforcement to investigate crimes. But there is also a legitimate interest for the public in protecting their information, finances, and property. We have to ask ourselves whether making everyone's phone less secure is really the best way to achieve greater security.

[Update 2016-02-24 16:45 – More info available since this morning. There is an article today in the New York Times that discusses how Apple is apparently planning to fix the update security loophole I mentioned above. Not surprising, since it's definitely something a bad actor could also use.

There is also an article at TechDirt that explains how demanding that Apple circumvent security technology may actually be against the Communications Assistance for Law Enforcement Act. So stay tuned on that front as well. ]

[Update 2016-02-26 13:15 – San Bernardino Police Chief Jarrod Burguan gave an interview to NPR in which he admitted there is probably nothing useful on the phone. He said:

I’ll be honest with you, I think that there is a reasonably good chance that there is nothing of any value on the phone

which pretty much shows you what I was saying – it’s not about the phone.]

[Update 2016-02-26 15:30 – Bloomberg just reported on a secret government memo that details how the government is trying to find ways around device encryption – despite wide reports that they're just interested in this one phone and not in setting a precedent.]


Closing the Barn Door – Software Security

All the ways that hackers can get in
In the second part of my series on what we can do to contain and combat the recent rash of security breaches, I'd like to focus on the software development side. I'm going to lay out some of the reasons why we've got such vulnerable software today and what we can do about it. Part one of this series discussed things you can do personally, as a consumer, to better protect yourself.

Let’s start with some of the most common reasons why we aren’t getting secure software. Here’s the short-list in no particular order:

  • Training
  • Security mindset
  • Not required
  • Test-it-in mentality

The list is actually very intertwined, but I'll try to separate these issues out as best I can. I'm focusing primarily on software security, rather than network or physical security. They're just as important, but we seem to be doing a better job there than in the code itself.

Training
It seems obvious that training is critical, but in the software business nothing can be taken for granted. I’ll talk more about the myth of “software engineering” in the near future, but for now just remember that software is NOT engineering, not at most organizations. Sure, there are plenty of engineers who write software, but their engineering credentials are not for software development, and somehow they leave sound engineering practices at the door when they come to work.

Developers need to be trained in security. This means they need to understand the role of prevention by avoiding unsafe constructs and practices. They need to be able to spot ways in which their code can be vulnerable. They need to be more paranoid than they currently are. They need to know what standards and tools are out there and how to make the best use of them.

Recently I was at AppSec in Denver and had a discussion with a developer at a security company about input validation. Sadly, he was arguing that certain parts of the application were safe, because he personally hadn’t thought of a way to attack them. We MUST move past this kind of thinking, and training is where it starts.
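That conversation is exactly why allowlist-style input validation needs to be taught as a reflex rather than left to individual judgment. A minimal sketch of the idea in Python (the field and limits here are hypothetical, not from any particular application):

```python
import re

# Allowlist validation: accept only input you know is good and reject
# everything else, instead of trying to enumerate every attack you can imagine.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(raw: str) -> str:
    """Return the username if it matches the allowlist pattern, else fail loudly."""
    if not USERNAME_RE.match(raw):
        raise ValueError("invalid username")
    return raw
```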

You can’t do what you don’t know how to do.

Security mindset
When developers write code, they often don’t think at all about security. In the security mindset we think “How safe is this?” and “How could this be attacked?” and “What happens when things go wrong?”

Being good at software security requires a lot of expertise. Great security comes with input from database experts, networking experts, sysadmins, developers, and every other piece of the application ecosystem. As each talks about the attack surfaces they understand, the developers gain valuable information about how to secure their code.

A great resource for learning about common weaknesses and their consequences is CWE. Many have heard of the "CWE/SANS Top 25", which lists the 25 most dangerous issues out of the roughly 800 weaknesses currently cataloged. These items help us get into the security mindset because they describe weaknesses in terms of technical impact, meaning what bad thing can happen to me if I leave this weakness in the code. Technical impact includes things like unwanted code execution, data exposure, and denial of service.

Each CWE item lays out clearly why you need to worry about it. When someone tells me they don't think a particular weakness in their code matters, I usually have them Google the name of the error, like "uncaught exception", together with "CWE", and then go to the relevant CWE entry to see how dangerous it can be. This is "Scared Straight" for programmers.
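As a tiny example of the kind of weakness CWE catalogs, here is an uncaught-exception case (in the spirit of CWE-248); the request-parsing function is hypothetical:

```python
import json

# Weak: any malformed request body raises an exception that nobody catches.
# In many services that uncaught exception is an easy denial-of-service.
def parse_request_weak(body: str) -> dict:
    return json.loads(body)

# Better: the failure is anticipated, handled, and turned into a clean error
# that the caller can report without crashing the whole handler.
def parse_request(body: str) -> dict:
    try:
        return json.loads(body)
    except json.JSONDecodeError as exc:
        raise ValueError("malformed request body") from exc
```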

Thinking about security leads to secure code.

Not required
The lack of a security mindset comes from security not being a serious requirement, or not being a requirement at all. We have to start making security part of the standard requirements for all software, and measuring it in a consistent, meaningful way.

There are those who say that security isn't a quality issue, but they're wrong. I see their point, that security requires specialized thinking and training, but in the end it is absolutely a quality issue. When you get hacked, the first thing in people's minds is that you've got poor quality.

A great thing to do is add software security requirements to your development plan. That way everyone knows what to do and expects that it will be scheduled properly and tested. If you've never done security before, add three simple requirements:

  • Secure coding standard such as CWE Top 25 or OWASP Top 10
  • Security peer code review
  • Security testing such as penetration testing

It won’t cover everything and it won’t be perfect, but it’ll get you started on a very solid foundation.

You get what you ask for. Ask for security.

Test-it-in mentality
Testing is important, in fact it’s critical. But we have learned for over 50 years that testing does not improve quality, it simply measures it. The old adage “You can’t test quality into a product” is equally true for software security, namely “You can’t test security into a product”.

When you're trying to improve something like quality or security (remember, security is a quality issue), you have to make sure that you begin at the beginning. Quality and security must pervade the development process. It may seem old at this point, but Deming's 14 points are still chock-full of useful, effective advice. Especially point 3:

Cease dependence on inspection to achieve quality [security]. Eliminate the need for inspection on a mass basis by building quality [security] into the product in the first place.

All too often organizations are creating a security group (good idea) and only empowering them to test at the end of the cycle (bad idea). If you want your security group to be effective, they’ve got to get at the root causes behind the vulnerabilities in your software. When they find something during testing, chase it upstream, eliminate the root cause, and then eliminate all other instances of the same problem, rather than just the one you were lucky enough to find during testing.

Testing alone will not secure your software. An ounce of prevention is worth a pound of cure.

Practical suggestions

  1. Remember to focus both internally as well as externally. Many of the current breaches are a hybrid of security and physical access. This is the hacker’s holy grail.
  2. Follow basic well-known security practices. If they’re not well-known to you, then start with training.
    • Control physical access
    • Train and monitor for social engineering, because it still works way too often. Just try it on your own people using a friend and see how far she can get.
    • Never ever use default passwords. Always reset anything you buy or install. If a vendor did it for you, check their work. I know of a cable provider that generates passwords from a template based on customers' addresses. Probably most of their customers don't realize their network is essentially wide open.
    • Encrypt private data. These days you have to figure that data is going to get out at some point, so just encrypt anything you wouldn't want to share. Passwords, yes, but also email addresses, social security numbers, etc.
    • Monitor for suspicious traffic and data access. Many attacks you don’t hear about are stopped this way, because someone noticed something funny going on. In some of the recent breaches monitoring was reporting bad behavior for weeks or months but no one paid attention. One organization said “We sell hammers” when told about suspicious behavior.
  3. We must move to a more proactive approach. The current trend in static analysis is to find bugs, and indeed many of the leading vendors have very fancy flavor-of-the-week approaches (called signatures), which put their software in the same old reactive, too-late position as anti-virus. We must start building software that isn't susceptible in the first place.

To be proactive, we have to train people in proper tools, processes, and techniques. Then formalize the use of that training in policies. Policies that include security best practices, requirements, and testing.

In static analysis we need to supplement bug-finding with more preventative rules, such as strict input validation, rather than chasing potentially tainted data. All data sources, even your own database, should be validated, because an unvalidated source is a great place for someone to hide an easter egg. (Remember that paranoid security mindset from before?) Use prepared statements and strong validation and you can avoid getting yourself into the SQL Injection Hall of Shame.
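For the SQL case, the difference is essentially one line. Here's a minimal sketch using Python's built-in sqlite3 module purely as an illustration; every mainstream database driver has an equivalent parameterized form:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

def find_user_unsafe(name: str):
    # String concatenation: a value like "x' OR '1'='1" rewrites the query.
    return conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'").fetchall()

def find_user(name: str):
    # Prepared/parameterized statement: the input is treated as data, never SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```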

We need to start looking for root problems rather than exploits. Take the Heartbleed problem. Despite claims to the contrary, the underlying issues were detectable by any serious static analysis tool that takes a preventative approach. What we didn't have was a flavor-of-the-month static analysis rule that looked for that particular implementation. All we had was a root-cause, best-practice rule that wasn't being used.

Root cause should have been enough. Weak code is weak code, whether or not we have a current exploit, which is all that the "signature" approach to security provides. Finding exploits and checking their validity is not only nearly impossible from a coverage perspective (just have a talk with Richard Bender if you don't believe me), it also takes more time than building hardened software in the first place.

That's right, it's both faster and easier to code to a strict, safe coding standard than it is to try to prove after the fact that your code is safe, or to chase defects out of a weak, unsafe application. Stop chasing bugs and work to build security in instead. Get yourself a good static analysis tool (or use the SWAMP) and a copy of CWE and get started.



Can the Internet Survive Privacy

Bear Threat © by Mrs. Gemstone
Lately some have been suggesting that the internet is at risk. Much if not all of the hoopla stems from a recent interview with Sergey Brin from Google (GOOG). Brin says the biggest threats come from government crackdowns, attempts to control piracy, and “the rise of ‘restrictive’ walled gardens such as Facebook and Apple, which tightly control what software can be released on their platforms.”

If you look at the arguments, they essentially break down to "If Google can't spy on your every behavior, then the internet will collapse." The reasoning is that information inside applications that aren't web-based can't be crawled by web crawlers, and user behavior inside those applications can't be monitored either.

It sounds pretty ridiculous when you think about it. People have been using applications for years on the desktop. Some of them are local to the desktop; others reach out and use the cloud (what we used to call the net, before that the internet, before that the network). Applications were, and continue to be, a combination of proprietary software, commercial software, freeware, open source, and other models. What desktop applications have usually NOT been is ad-supported.

Much of the web has evolved into an old broadcast-style model, i.e., advertising supports content. I know some will argue that the web "changes everything", but think about it. The idea of having to put up with ads to get your news fix is nothing new at all. This is an old argument: is it better to have "free" content supported by ads, or paid content without advertising? In the modern era we go beyond simple advertising as well. In addition to the cost of having to look at ads, people are giving up their privacy and allowing advertisers to monitor their behavior. The rationalization is that this is saving them some money.

Again, it's an old argument that is not going to be settled here, and I suspect won't be settled at all. I prefer a world where you can choose whether or not you want ads: pay for the content you get, or deal with advertising. Let the consumer choose. Personally, I don't mind paying for software and content, like Netflix over Hulu. I preferred that over dealing with ads even before the whole privacy issue came into play. But others feel differently, and I don't have a problem with that as long as I'm not forced down the same path.

What Brin is really saying is "If Google can't spy on you, then advertising breaks down, and without advertising, the internet breaks down." I don't buy it. At all. If suddenly all advertising-centric services were forced to simply serve up ads without regard to my exact movements, it would definitely have an effect on the bottom line of those serving the ads. But advertising would go on. Don't believe me? Turn on your television… see any advertising? Do they know who you are? Do they know what channel you just watched? Do they know that you called your mom during the show? Nope, and they don't care. Actually they DO care; they'd love to have that information about you. But in the absence of that information, life goes on.

Google tries to obfuscate the issue by saying they're against "walled gardens". Of course they never address the fact that all traditional computing is "walled" in the sense that Google has no idea what you're doing. But somehow that's OK, while if you use the same software on a tablet, it means death for the internet. Ridiculous. There is, in fact, considerable disagreement over whether Google itself has a walled garden.

What it really means is that if strong privacy protections are put in place, Google will have to change or it will collapse, because it would have no edge over anyone else in selling ads. That I believe.