Tag Archives: Security

Security related info and issues

Software Safety Keynote EuroSPI 2016

I was honored this week to have the opportunity to present a keynote session at EuroSPI 2016. The title of my presentation was “Software Safety and Security Through Standards”, and I discussed one of my favorite soapboxes: the idea that software development is often less disciplined than it should be, but it doesn’t have to be. We can and should develop software as an engineering discipline.

One of the key ways to start down this path is to implement coding standards properly. Too many teams try to use coding standards late in the process as a way to find bugs, rather than as a way to flag improper coding constructs early on. While the former is cool, the latter is far more valuable.
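
As a minimal illustration of the difference (a sketch of my own, not an example from the talk): a construct like a mutable default argument in Python isn’t a bug yet, but a static checker such as pylint flags it as an improper pattern (its “dangerous-default-value” warning) the moment it is written, long before it ever surfaces as a defect.

```python
# Not a failing test, not a crash -- just an improper construct that a
# static checker flags at coding time rather than months later as a bug.

def add_reading(value, readings=[]):   # mutable default: shared across calls
    readings.append(value)
    return readings

# The defect only shows up at runtime, once two callers share the same list:
first = add_reading(1)    # [1]
second = add_reading(2)   # [1, 2] -- surprising if you expected [2]

# The disciplined fix, which a coding standard can require up front:
def add_reading_safe(value, readings=None):
    if readings is None:
        readings = []
    readings.append(value)
    return readings
```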

The adage that “you can’t test quality into a product” is well known, but for some reason in software we think that we can indeed test quality into an application. The same goes for application security, perhaps even doubly so.

In order to break out of the current cycle of code, deploy, fix, redeploy, we have to start doing things differently. We have to build a more mature software development process, and static code analysis is the way to build on the body of knowledge and best practices already available.

Slides are below. Let me know if you have comments, questions, suggestions. And thanks to everyone at EuroSPI and ASQ for putting on a great conference and allowing me to participate. These are great organizations to get involved with if you’re serious about software quality. I encourage you to check them out.

Hacking: Medical Devices

You have control over your own body, right? Well, awareness of some scary scenarios in the healthcare industry is growing. In the past, hacking was just for computers, but with the growth of technology it is expanding to other devices, including medical ones. This is not technically “cyber crime”, but it can easily turn into it when these techniques fall into the wrong hands, so I’m going to cover it anyway.

Internet of Things (IoT): “refers to scenarios where network connectivity and computing capability extends to objects, sensors and everyday items not normally considered computers, allowing these devices to generate, exchange and consume data with minimal human intervention. There is, however, no single, universal definition” (Internet Society, 2015).

The IoT is an important aspect of the healthcare industry (recently the term Internet of Healthcare Things, or IoHT, was coined by medical field personnel). Examples include heart rate monitors, pacemakers, medicine drips, MRI machines, and other devices that connect to the Internet and record information. As most of us know, objects that are connected to the Internet or have computer-type technology can be hacked. One example: two men in Austria hacked their morphine pumps while admitted to the hospital in order to boost the dosage (Sarvestani, 2014). This resulted in one going into respiratory arrest and both men becoming addicted to morphine (Sarvestani, 2014). They were able to achieve this by retrieving the machines’ control codes online; this information can typically be found in the device manuals that are posted online for user reference.

A more streamlined, dangerous version of the morphine pump hack is what is known as MEDJACK, a “medical device hijack” (Carman, 2015). How is this done? Don’t these hospitals have firewalls and preventative measures for this sort of thing? Yes and no. While the network itself and its computers are protected with firewalls and other security measures, the devices themselves are not secured. According to Ashley Carman at SC Magazine, “attackers maneuver through healthcare systems’ main networks by initially exploiting outdated and unpatched medical devices, such as an X-ray scanner or blood gas analyzer. They build backdoors into the systems through these internet-connected devices” (2015).

Another way this is done is through a tool known as Shodan, a search engine that scans for open ports on the internet and is “often used by security researchers to uncover critical exposed infrastructure that should be better protected” (Murdock, 2016). According to a Kaspersky researcher quoted in Jason Murdock’s article, “[Shodan] can find out about the hardware and software connected [to the internet] and if you know, for example, what feedback an MRI or laser or cardiology device gives when you connect to its port, you can go to Shodan and find hundreds of these devices and if you know a vulnerability you can hack all of them” (2016).
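
To make that concrete, here is a minimal sketch of the kind of query the article describes, using the official shodan Python package. The API key and the search string are placeholders of my own, not anything taken from the cited research, and running queries like this against real infrastructure requires a Shodan account and proper authorization.

```python
# Minimal sketch using the "shodan" package (pip install shodan).
# The key and query below are illustrative assumptions only.
import shodan

SHODAN_API_KEY = "YOUR_API_KEY"  # hypothetical key
api = shodan.Shodan(SHODAN_API_KEY)

# Search banners for text that identifies a device type; real queries would
# use product names or port fingerprints learned from public device manuals.
results = api.search('"blood gas analyzer"')

print("Exposed hosts found:", results["total"])
for match in results["matches"][:10]:
    print(match["ip_str"], match.get("port"), match.get("org"))
```

The point isn’t that the query is sophisticated; it’s that enumerating exposed devices is a few lines of code once the banner text is known.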

Unfortunately, it gets worse. Pacemakers, including ones that are fully implanted, are now on the list of hackable equipment. Students at the University of South Alabama hacked into iStan, a simulated human patient (Storm, 2015). iStan has “internal robotics that mimic human cardiovascular, respiratory and neurological systems. When iStan bleeds, his blood pressure, heart rate and other clinical signs change automatically.” iStan, which is used by USA’s College of Nursing, “breathes, bleeds from two locations, cries, secretes bodily fluids, speaks, groans, wheezes, gags, gasps, coughs and mumbles” (Storm, 2015), allowing it to respond much like a real human being. The students launched brute force and denial of service (DoS) attacks that interfered with the device’s ability to function, which in turn “killed” iStan (Storm, 2015). Tarun Wadhwa, writing in Forbes, also discussed how pacemakers are vulnerable:

“Implanted devices have been around for decades, but only in the last few years have these devices become virtually accessible. While they allow for doctors to collect valuable data, many of these devices were distributed without any type of encryption or defensive mechanisms in place. Unlike a regular electronic device that can be loaded with new firmware, medical devices are embedded inside the body and require surgery for ‘full’ updates. One of the greatest constraints to adding additional security features is the very limited amount of battery power available” (2012).

Thankfully, there has been no recorded incident of intended harm to another individual (and only a handful of incidents of harm to oneself) through medical device hacking. The basics? If you can, do some research into the devices being used in your hospital room to see what vulnerabilities are documented on the web (through how-to’s, videos, device manuals, etc.), and if at all possible, stay healthy and avoid the hospital altogether. I wish this for everyone!

(THIS POST IS NOT INTENDED TO INDUCE FEAR, ANGER, OR ANY OTHER EMOTION TOWARDS MEDICAL PERSONNEL, STAFF, HOSPITALS, IT STAFF, EQUIPMENT DEVELOPMENT, OR OTHER GROUP OF INDIVIDUALS HANDLING, PRODUCING, USING, UPDATING, OR INVOLVED IN MEDICAL DEVICES)

[Editors note: Maybe it SHOULD though… induce fear that is. -The Code Curmudgeon]

References:

Carman, A. (2015, June 4). ‘MEDJACK’ tactic allows cyber criminals to enter healthcare networks undetected. SC Magazine. Retrieved from http://www.scmagazine.com/trapx-profiles-medjack-threat/article/418811/

Internet Society. (2015, October). The Internet of Things: An overview. InternetSociety.org. Retrieved from https://www.internetsociety.org/sites/default/files/ISOC-IoT-Overview-20151014_0.pdf

Murdock, J. (2016, February 15). How a security researcher easily hacked a hospital and its medical devices. International Business Times. Retrieved from http://www.ibtimes.co.uk/how-security-researcher-easily-hacked-hospital-its-medical-devices-1544002

Sarvestani, A. (2014, August 15). Hospital patient hacks his own morphine pump. MassDevice.com On Call. Retrieved from http://www.massdevice.com/hospital-patient-hacks-his-own-morphine-pump-massdevicecom-call/

Storm, D. (2015, September 8). Researchers hack a pacemaker, kill a man(nequin). Computerworld. Retrieved from http://www.computerworld.com/article/2981527/cybercrime-hacking/researchers-hack-a-pacemaker-kill-a-man-nequin.html

Wadhwa, T. (2012, December 6). Yes, you can hack a pacemaker (and other medical devices too). Forbes. Retrieved from http://www.forbes.com/sites/singularity/2012/12/06/yes-you-can-hack-a-pacemaker-and-other-medical-devices-too/#5ab6b78313e0

Get Your Free WiFi From Elvis

[Photo: a man dressed like Elvis in front of the Welcome to Las Vegas sign. Want some free WiFi?]
Ah, the lure of free open WiFi! Who can resist? Avoid the flaky signal from your smartphone, get faster access, and avoid data usage caps. But there is no such thing as a free lunch. When Elvis offers you free WiFi it’s best to think twice, because free WiFi always comes with a cost, usually your privacy and security.

It might be a coffee shop that expects you to buy coffee, or a hotel that wants you to stay there instead of down the street. Or maybe the hotel has decided it can make a little extra money by selling advertising to you while you’re using the “free” WiFi. Like the Elvis impersonator, you should know what you’re really getting into. If you think you’re getting your picture taken with the real Elvis, then perhaps you deserve what you get, especially in cases where the provider is playing the huckster and offering something for “free” (as in puppy) when the hidden cost is your privacy.

With open or free WiFi the risks are always there in the form of unknown others on the network. I have found as I travel that hotel WiFi, for example, is a constant source of machine probes and attacks. Luckily my computer is well configured and I see the attempts. In spite of that I take the paranoid view and had avoided any free WiFi for over a year, until last week that is.

I was at the IQPC-sponsored ISO 26262 Functional Safety conference in Berlin, speaking on automotive cybersecurity. The WiFi performance in Berlin was no worse than elsewhere, both at the hotel I was staying at and at the conference hotel. By which I mean that it was aggressively mediocre at about 1.5 Mbps. This would be reasonable performance for a 3G cellular network, but seems slow for WiFi. Now the reason I was using it is that the cellular speed I get when roaming around the world is even slower, about 128 kbps. So here I am making poor security decisions based on slow network performance. There’s a lesson to be learned there, and perhaps a whole article’s worth of material on why we make poor security decisions.

And this is where this hotel stood out from others, at least hotels in the USA. The attacks didn’t start immediately as I’ve seen elsewhere, for example at the Hilton in Long Beach, CA. (Yes, I’m purposely shaming their insecure public WiFi.) But after working for a few minutes, several of my web connections started failing when they refreshed. I got complaints about needing to re-login to Outlook, Google and other apps that require authentication.

So I started poking around by clicking the little lock icon in the URL bar, and as it turns out the connections were failing because the HTTPS certificate was suspicious.

As you do in these situations, I took a look at the certificate by pressing the “show certificate” button. In this case the certificate was supposed to be for Office 365 (office.com), but instead it was signed by… wait for it… the hotel!!! Essentially they were performing a man-in-the-middle (MITM) attack. This means they were pretending to be Microsoft by self-signing a root certificate and saying “Microsoft is who we say it is”.
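
If you want to check for this kind of thing without clicking through browser dialogs, a few lines of Python will do it. This is a minimal sketch of my own (the host name is just an example, not the hotel’s setup): it grabs whatever certificate the network presents for a well-known site and prints the issuer, which makes a hotel-signed “Microsoft” certificate obvious.

```python
# Fetch whatever certificate the current network presents for a known site
# and print who issued it. On a clean network the issuer is a public CA;
# on a MITM'd network it will be something local, like a hotel gateway.
import socket
import ssl

from cryptography import x509  # pip install cryptography

HOST = "outlook.office365.com"  # example host; any major HTTPS site works
PORT = 443

# Deliberately skip validation -- we want to see the cert even if it's bogus.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der_cert = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der_cert)
print("Subject:", cert.subject.rfc4514_string())
print("Issuer: ", cert.issuer.rfc4514_string())
# If the issuer names your hotel (or anything other than a public CA),
# someone on the path is re-signing your "secure" traffic.
```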


Probably this was for some silly injection of advertising or some other annoying but not necessarily evil purpose. Remember Lenovo doing this on their computers recently? In that case it was widely publicized and got a cute media name, “Superfish”.

For Superfish the purpose was to put ads into your browser. Lenovo pre-installed it on a bunch of their computers, presumably for some additional revenue. The problem is that once you break the certificate trust chain with this kind of attack, you leave the user at great risk. Someone can steal their credentials and spy on any supposedly secure communication they have. This is to say nothing of having extra ads pushed onto your computer.

For the record, self-signed root certificates are only acceptable in a development or testing situation. Putting untrusted certificates into the wild is dangerous since no one can rely on them. Worse yet is pretending to be a certificate authority and jumping into the middle of a transaction or communication that the users think is secure. Not only is this unethical, but it really should be illegal.
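
For completeness, here is what the legitimate use looks like: a throwaway self-signed certificate for localhost testing, generated with the Python cryptography package. This is a generic sketch of my own, not anything specific to the hotel or to Superfish. Notice that the subject and the issuer are the same name, which is exactly why browsers refuse to trust such a certificate by default.

```python
# Generate a self-signed certificate for *local development only*
# (pip install cryptography). Never deploy something like this publicly,
# and certainly never for a domain you don't own.
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Subject and issuer are the same entity -- that's what "self-signed" means.
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=30))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName("localhost")]),
                   critical=False)
    .sign(key, hashes.SHA256())
)

with open("dev-cert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
with open("dev-key.pem", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
```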

Lesson learned again… Don’t use free WiFi and always pay attention to your URL lock icon.

Security vs Security

There is currently a national debate going on in the United States about the challenges of security vs. security. Some are calling it privacy vs. security but that’s not the real issue as I’ll get to shortly. In the wake of the San Bernardino shooting, the FBI has gone to court and demanded via the All Writs Act that Apple create a special insecure version of the operating system for the iPhone (iOS).

As with the naming of the core problem (privacy or security), we need to understand exactly what the FBI is asking for here. Many media outlets continue to report this as the FBI asking Apple to decrypt or unlock the phone. That is not what they’re asking. They are asking Apple to create a special version of the software and load it onto the phone so that the FBI can then brute-force the passcode. In the past Apple has cooperated with the government to retrieve customer data pursuant to a warrant, but they’ve never fundamentally weakened iPhone security at the request of the US or any other government.

I usually try to avoid blogging about political issues as well as national security ones, because they’re complicated and the state does indeed have a legitimate interest in security activities as long as they’re constitutionally supported. In this particular case the press has been all over the place, frequently misreporting the basic facts, so I feel it’s important to keep people informed to allow them to make better decisions. On the other hand, a lot of people are also making wild claims about what the government is actually asking for. There are some very interesting issues at play in this particular drama – let’s take a look.

Articles such as the editorial today in NYT are written by those who are either ignorant of the technical details or are willfully misleading the public. They refer to “unlocking” the iPhone and “using the front door”. In both cases the phrases are designed to mislead the public by suggesting that there is no downside. A more accurate description would be that they’re asking Apple to make sure the front door is easy to break into. This of course wouldn’t sell well with the public.

As to the ramifications of the issue, The Verge noted:

The FBI has a locked phone and they want it to be unlocked. Getting there will mean building some dangerous tools and setting some troubling precedents, but that’s basically what the bureau wants.

Privacy vs Security

This issue has been framed in the media as privacy vs. security, which is misleading.

The security vs privacy debate is an interesting and worthwhile topic and I’ll certainly dive in at a future date, but this issue with the FBI and Apple isn’t that issue at all. Make no mistake, this isn’t about the privacy of the data on a particular iPhone, it’s about the security of all iPhones. The government is specifically not asking for the data, they’re asking for Apple to make the phone less secure.

John Adams, a former head of security at Twitter, noted today in the LA Times:

“They try to use the horrors of the world to erode civil liberties and privacy, but the greater good — having encryption, more privacy for more people — is always going to trump small isolated incidents.”

Encryption

Some have noted that the FBI doesn’t want Apple’s encryption keys, which is true. What they want is for Apple to make it easier to brute-force the login.

“In practice, encryption isn’t usually defeated by cryptographic attacks anyway. Instead, it’s defeated by attacking something around the encryption: taking advantage of humans’ preference for picking bad passwords, tricking people into entering their passwords and then stealing them, that kind of thing. Accordingly, the FBI is asking for Apple’s assistance with the scheme’s weak spot—not the encryption itself but Apple-coded limits to the PIN input system.”

In other words, let’s take the weakest link in phone security and, rather than make it stronger, make it weaker. Again, this is my point that this IS about security, not privacy. When someone gets into your phone, they get everything – passwords, bank credentials, personal private info that can be leveraged, and so on. Are we seriously arguing that that’s OK? Consider how many smartphones and identities are stolen every day.
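
To see why the retry limits, not the encryption math, are the whole game, here is a back-of-the-envelope sketch. The per-attempt timing is an assumption on my part, roughly the figure commonly cited for hardware-bound passcode attempts, not an official Apple number.

```python
# Rough arithmetic on a 4-digit passcode once the software guess limits
# are removed. The 80 ms per attempt figure is an assumption, not a spec.
PIN_DIGITS = 4
keyspace = 10 ** PIN_DIGITS          # 10,000 possible 4-digit passcodes
seconds_per_try = 0.08               # assumed hardware-bound attempt time

worst_case_minutes = keyspace * seconds_per_try / 60
print(f"Worst case to brute-force all {keyspace} passcodes: "
      f"~{worst_case_minutes:.0f} minutes")          # roughly 13 minutes

# With the stock protections (escalating delays plus erase-after-10-guesses),
# an attacker gets at most 10 tries before the data is gone:
print(f"Chance of guessing in 10 tries: {10 / keyspace:.1%}")  # 0.1%
```

Take away the Apple-coded limits and even a modest passcode falls in minutes, which is exactly why this is a security question and not merely a privacy one.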

Chances of dying in a road accident: 1 in 25,000,000.

Chances of dying in a terrorist attack worldwide: 1 in 9,300,000. If you live in the US the chances are even lower.

Chances of having your phone stolen: about 1 in 42.

Chances of being a victim of some form of identity theft: about 1 in 14.

So if you’re worried about something that will actually happen, you should hope that Apple comes out on top in this case. Identity theft affects about 15 million Americans each year and smartphone theft affects about 3 million Americans each year.

Backdoors

Some have suggested they’ve asked for a backdoor. That is an interesting topic that we could spend hours on, just trying to define what a backdoor is. But for the moment, let’s just say that whether or not it’s a backdoor, it certainly weakens the security of the device, specifically to make it vulnerable to brute-force attacks.

There’s another way

Let’s begin with understanding that this is NOT about this particular phone or incident. The government has chewed the data to death on this one and doesn’t really expect to find anything on this phone. If it were just about this phone, there would be other ways.

First of all, this phone happened not to be a personal phone, but one owned by Farook’s employer. Ironically, the employer is a government agency. The agency had purchased mobile device management (MDM) software, but it was either not installed or not configured correctly. Had it been in use, this whole issue would be moot. The MDM software would have allowed the county to access the phone and mandate a specific configuration that met their security needs.

Next, another misstep: sometime in the first 24 hours after the incident, the county reset the iCloud password associated with the phone. Had this not been done, investigators could have taken the phone to a network it had previously joined, such as Farook’s home or office, and tried to trigger an iCloud auto-backup.

Using law enforcement mistakes as a reason to weaken phone security is a poor argument. The better one would be to make sure that law enforcement knows how to deal with phones, when to get manufacturers involved, etc. This request to re-write the firmware is so extreme that Apple said:

The Apple executive also made a point of saying that no other government—not even China or Russia—has ever asked what American prosecutors have asked the company to do this week.

Offers and advice on accessing the phone have come from many directions. For example, noted cybersecurity figure John McAfee has offered to break in for free. He believes he can accomplish it within a couple of weeks without fundamentally weakening the smartphone infrastructure.

Edward Snowden pointed to another option called de-capping, a technique that uses acid and lasers to read the contents of the chip directly.

These offers have not been accepted because the FBI isn’t all that interested in what’s on this phone. They believe the under-informed will be on their side in this case, allowing them to set a precedent. The government claims this won’t set a precedent, but of course it will. Already people are saying “Apple has cooperated before, why not now?”. The whole reason the government is going after this phone IS to create a precedent – they don’t really expect to find anything useful on it.

Others have noted that if weaknesses are specifically built into a device at the government’s request, similar requests will of course follow in the future, both from the US and from other governments, including those we may find objectionable. In fact, as I wrote this it came out that the Justice Department is already pursuing similar orders against Apple for 12 other phones. At this point we can pretty much put the “no precedent” argument to bed.

There are those arguing this would help prevent an attack. Note that this isn’t the position of the FBI, but of some in the Senate who are trying to kill encryption. This case is specifically about having access to the phone after you have a warrant – it would not have prevented the attack.

It’s not about finding out who committed the attack; we already know that. It’s not about finding out who they communicated with – that comes from email and phone logs and doesn’t require the phone. Really it’s just a red herring to allow the anti-encryption crowd to further their agenda. Under the guise of making us safer they’d make us all less safe, since real statistics show that the chances of suffering identity theft or phone theft are MUCH, MUCH higher than the chances of being harmed in a terror attack, as I’ve noted above.

Legal and International Impact

If we choose to go down the path that the FBI is demanding, we need to think about the ramifications of this approach. I’ll break down the personal security and privacy vs. police interests debate in the near future. For the moment we can set that aside and focus on what the impact could be.

I have to ask why the government is prescribing the “how” rather than the “what”. By this I mean that if they want the phone data, then the request should ask for the data. Of course, they could go to others like McAfee and at least try to get the phone opened. But it isn’t about the phone, it’s about the precedent. That’s why they’ve prescribed how they want Apple to respond. The bigger legal question will be whether the government actually has the right to force a software vendor to write a specific piece of software.

If the government succeeds in their request, what will it mean overall? Can the government go after other security features, eventually asking Apple to backdoor their encryption methods as well? Why just Apple, why not all other smartphone vendors as well? And your desktop computer too.

And if the US government can force this, what about foreign governments? Will our security policies end up being defined by oppressive regimes? Some say this is really a human rights issue, given how oppressive regimes use surveillance against their own citizens.

I know we all hate hearing the slippery slope argument, but in this case there is actually very little upside in the FBI demand and a whole lot of downside.

An Unreported Security Vulnerability

There’s one more issue here that scares me as a security professional: the fact that a new version of the OS can be loaded onto a locked phone at all. This is certainly a problematic issue from a device security perspective. I wonder if Apple will be plugging this hole in the near future?

There is some security around this method because the new software needs to be digitally signed by Apple, but this is certainly an attack surface that bad actors will be taking a more in-depth look at. What if this build of iOS gets out in the wild somehow? Why wouldn’t people try to steal it? Can people try to figure out how to push their own OTA onto a device? How hard is it to bypass the Apple signature?

Future versions of iOS will almost certainly take into account everything learned during this case. As always, we can expect jailbreaks to continue to get more difficult as Apple tightens its security.

What’s the answer?

Again, I need to reiterate that I recognize the legitimate need of law enforcement to investigate crimes. But there is also a legitimate interest for the public in protecting their information, finances and property. We have to ask ourselves whether making everyone’s phone less secure is really the best way to achieve greater security.

[Update 2016-02-24 16:45 – More info available since this morning. There is an article today in the New York Times that discusses how Apple is apparently planning to fix the update security loophole as I mentioned above. Not surprising, since it’s definitely something a bad actor could also use.

There is also an article at TechDirt that explains how demanding that Apple circumvent security technology may actually be against the Communications Assistance for Law Enforcement Act. So stay tuned on that front as well. ]

[Update 2016-02-26 13:15 – San Bernardino Sheriff Jarron Burguan had an interview with NPR where he admitted there is probably nothing useful on the phone. He said:

I’ll be honest with you, I think that there is a reasonably good chance that there is nothing of any value on the phone

which pretty much shows you what I was saying – it’s not about the phone.]

[Update 2016-02-26 15:30 – Bloomberg News just reported on a secret government memo that details how the government is trying to find ways around device encryption – despite wide reports that they’re just interested in this one phone and not in setting a precedent.]

Resources