Journalism Matters

I know it’s crazy for a blogger to care about this, but I do care about journalism. Once upon a time I worked for a daily newspaper, and I really liked it. While there is a lot of good that comes out of putting power in the people’s hands, there is no excuse for poor journalism, which pervades traditional media as well as the internet community.

So I just joined the Matter project on Kickstarter. It’s supposed to help improve journalism. Take a look for yourself and see what you think.

Useless Software Predictions for 2012

[Photo: crystal ball © by jerebu]
It seems like everyone who is anyone is publishing a list of predictions for the software industry in 2012. I figured, why not jump on the bandwagon and have some fun? If nothing else, it’s a good chance to rant. So without further ado, here’s my two cents on 2012.

Extensive cross-browser testing will become a necessity

In 2012, you can no longer get away with testing browser-based applications on one or two browsers. Not so long ago, an organization creating an internal application could declare that they were targeting a specific version of Windows, and testers could ignore everything else with impunity. Even if the application was accessed via a browser, it was pretty safe to assume that if the organization had Windows desktops, testing could focus on IE 6 or 7.

Now, more and more applications are accessed from a browser, and there’s no way to predict which browser people will have on their desktop. For a commercial product, you probably need to support (and test on):

  • Firefox on Windows, Linux, and Mac
  • A couple versions of IE on a couple versions of Windows
  • Safari on OS X and iOS
  • Chrome
  • Opera

The only good news on this front: At least you won’t have to test on IE 6 since Microsoft is actively driving it into extinction.
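If you’re automating any of this, even a tiny smoke test parametrized across browsers helps keep that support matrix honest. Here’s a minimal sketch using Selenium WebDriver with pytest in Python; the URL, the page title, and the exact set of drivers are placeholders, so swap in whatever your own matrix actually requires.

    # Minimal cross-browser smoke test sketch (Selenium WebDriver + pytest).
    # The URL, page title, and driver list are hypothetical placeholders.
    import pytest
    from selenium import webdriver

    BROWSERS = {
        "firefox": webdriver.Firefox,
        "chrome": webdriver.Chrome,
        "ie": webdriver.Ie,            # needs a Windows box with the IE driver
        # "safari": webdriver.Safari,  # needs a Mac
    }

    @pytest.fixture(params=list(BROWSERS))
    def driver(request):
        drv = BROWSERS[request.param]()    # launch this parametrization's browser
        yield drv
        drv.quit()                         # always tear the session down

    def test_home_page_loads(driver):
        driver.get("http://example.com/yourapp")    # placeholder URL
        assert "Your App" in driver.title           # placeholder title check

The point isn’t the specific framework; it’s that every browser you claim to support shows up somewhere in an automated or manual test pass.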

Mobile devices are another huge issue. As a tester, you need to define what you’re supposed to care about. If you’ve got a reason to skip something, make sure your test plan documents not only what you are testing but also what you’re NOT testing (and why). For example, if you think Firefox is important, get that documented in your test plan; if you’ve decided to skip it, document that decision too.

And then there are tablets. No matter what you think of Apple, there’s no denying that iPads are everywhere. If there’s any way to access your application from an iPad and you’re not testing it there, don’t be surprised to see a growing number of reported bugs (or lost users) in the upcoming year. The same goes for other tablets that are gaining momentum in the marketplace.

Touch interfaces will need dedicated testing

A finger can move in ways that simply are not possible for a mouse. With a mouse, if you want to move the cursor from point A to point C, you must move through point B. With a finger, you can go directly from A to C without ever passing through point B. A mouse can never be in two places at once, but a touch display often receives multiple simultaneous touches moving in different directions. You also have directional gestures, different lengths of touches, and so on. Web sites that depend on some form of hover for instruction or navigation don’t really translate to a touch interface.

What does this mean for testers? Until now, many teams have been able to squeak by with very extensive testing on the desktop, then some cursory checking on a touch interface to confirm that nothing horrible happens. As more and more people start using touch interfaces—often as their primary computing device—this is going to become more and more of an issue. Test plans will need to address this.
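As a rough illustration of what a touch-aware check might look like, here’s a hypothetical Selenium sketch that verifies a menu opened by hover can also be opened by a plain click (which is what a simple tap produces). The URL and CSS selectors are made up, and real multi-touch gestures still need an actual device or emulator.

    # Hover-dependent UI is a common touch casualty: verify there's a non-hover path.
    # The URL and CSS selectors are hypothetical; adapt them to your application.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.action_chains import ActionChains

    driver = webdriver.Chrome()
    driver.get("http://example.com/yourapp")

    # Desktop path: the submenu is revealed by hovering over the menu.
    menu = driver.find_element(By.CSS_SELECTOR, "nav .menu")
    ActionChains(driver).move_to_element(menu).perform()
    assert driver.find_element(By.CSS_SELECTOR, "nav .submenu").is_displayed()

    # Touch path: a simple tap arrives as a click, not a hover,
    # so the submenu must also open without any hover at all.
    driver.refresh()
    menu = driver.find_element(By.CSS_SELECTOR, "nav .menu")
    menu.click()
    assert driver.find_element(By.CSS_SELECTOR, "nav .submenu").is_displayed()

    driver.quit()

Gestures like pinch and multi-finger swipes can’t be faked this way, which is exactly why real devices belong in the test plan.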

Cloud adoption will continue, but not necessarily help

Super simple guess here. So easy it’s not actually a prediction. People will continue to migrate things to the cloud. At some point in the future this trend may reverse, much like outsourcing did. The cloud solves some problems while introducing new problems and complexity of its own. Some will find that the cloud helped, others that it didn’t really change anything (SPDS – Same Problem, Different Server), while still others will find that the cloud was a huge mistake for them. Try not to be one of those – make sure the reason you’re moving to the cloud aligns with what the cloud can actually do for you, and beware of anyone who says it will cure all your problems.

The arms race in PC performance is over for the desktop

This probably happened even earlier, but I’m making it official: the desktop performance arms race is over! Hurray! No longer do you need to buy a machine worrying that the one coming out in three months will be so much faster that you wish you had waited. Truthfully, for most users outside of high-performance gaming, the machines available today are more than fast enough.

If you don’t believe me, look at the tablets and phones out there. They offer much lower performance than a desktop, and yet they’re still more than good enough for most people. One good thing about mobile devices is that their limited resources have forced developers to be careful so far. As the devices grow in capability, app performance will decline, because programmers will get sloppy just as they have on the desktop.

One last long-term prediction on this – most desktop computers will be history within the next 10 years. They’re already approaching the point of being ridiculous.

Siri + Kinect

Ok, this exact pairing is not going to happen – not this year, not ever. But imagine computing devices capable of the voice input that Siri has along with the controllerless gestures and other inputs that Kinect is capable of. I want one, now! Where do I buy it?

Well, don’t expect Microsoft (MSFT) and Apple (AAPL) to team up, but don’t be surprised when we finally do start seeing those two concepts married in the same devices. It WILL happen; it’s just a question of when. Unfortunately, probably not for a couple of years. Someone please prove me wrong.

Some companies will miss the boat on tablets

By this I don’t mean that someone will make crummy tablets – that’s already happened more times than anyone would have expected. I mean that, from a business perspective, some companies won’t realize that tablets are now necessary both for their staff and for their customers. Handicapping either one results in lost business, and more than one company will let that happen to them this year.

In other words, if you don’t have a tablet strategy for your IT department – get cracking. If you don’t have one that makes sure customers with tablets can access your application or web site, hurry up and fix that, because it’s already costing you money. For example, if you have a web site that lets customers make online payments, but Flash gets in the way, people will start looking for other vendors. They are already doing this.

And if you think you can simply re-size your application to fit on a smaller screen, think again. Mobile is fundamentally different – read the section on touch interfaces above.

Flash will die

It’s already officially dead on mobile, in that Adobe has said they’re no longer supporting it. How much further behind can the desktop be? It’s inevitable that Adobe will stop bothering with the desktop at some point too. Probably NOT this year, but in the not-too-distant future. If you have a Flash app today, you should start looking at HTML5. And if you’re starting new development, for heaven’s sake don’t use Flash!

For more software predictions, check out my Parasoft blog.

Getting the Static out of your Analysis

[Photo: Day 11 / 365 - Touching static © by xJason.Rogersx]
The other day I was talking to a colleague about setting up static analysis properly. I was on my usual soapbox about all the simple steps you can take up front to ensure success, and he suggested that I blog about it. So here it is – how to properly configure your static analysis from the beginning.

This presumes that you’ve already gone through the trouble of selecting the proper tool for the job, and that you’ve set up policies and procedures for how and when to use it. What I want to focus on is getting the rule set right.

As I’ve mentioned before, there are quite a few ways to get into trouble with static analysis, and noise is one of the chief culprits. You need to make sure that noise is far outweighed by valuable information.

Noise comes about primarily from having the wrong rules or running on the wrong code. It sounds simple enough, but what does it mean?

The Wrong Rules

Let’s break the wrong rules down into the various ways they can trouble you. First, noisy rules. If a rule makes a lot of noise or produces actual false positives, you need to turn it off. False positives in pattern-based static analysis are indicative of improperly implemented rules; they should be fixed if possible and turned off if not.

On the other hand, false positives in flow analysis are inevitable and need to be weighed against the value the rule provides. If the time spent chasing the false positives is less than the effort that would be needed to find the same bugs some other way, then it’s probably worth it. If not, turn the rule off.

But there are other kinds of noise. Some rules may be technically correct but not helpful – for example, rules that are highly context-dependent. In the right context such rules are very helpful, but elsewhere they are far more painful than they are worth. If you have to spend a lot of time evaluating whether a particular violation should actually be fixed in a particular case, turn the rule off. I’ve seen a lot of static analysis rules in the wild that belong in this category: nice academic ideas that are impractical in the real world.

Another way to recognize a noisy rule is that it produces a lot of violations. If you find that one rule is producing thousands of violations, it’s probably not a good choice. Truthfully, it’s either an inherently bad rule or one that simply doesn’t fit your team’s coding style – if your team actually agreed with what the rule says to do, you wouldn’t have thousands of violations. Each team has to figure out exactly what the threshold for “too many” is, but pick a number and turn off rules that are above it, at least in the beginning. You can always revisit them later when you’ve got everything else done.
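One way to make that threshold concrete is to tally violations per rule from whatever report your tool exports. Here’s a rough sketch that assumes a simple CSV report with a rule_id column; the file name, the column name, and the 1,000-violation cutoff are all placeholders rather than any particular tool’s format.

    # Tally violations per rule and flag candidates to turn off.
    # Assumes a CSV report with a "rule_id" column; the file name and the
    # threshold are placeholders, not any particular tool's output format.
    import csv
    from collections import Counter

    THRESHOLD = 1000

    counts = Counter()
    with open("violations.csv", newline="") as report:
        for row in csv.DictReader(report):
            counts[row["rule_id"]] += 1

    for rule, count in counts.most_common():
        note = "  <-- consider disabling in the initial rule set" if count > THRESHOLD else ""
        print(f"{rule}: {count}{note}")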

Another category that frequently gets labeled noise is rules whose value developers either don’t understand or simply disagree with. In the beginning, such rules can undermine your long-term adoption, so turn off rules that are contentious. How can you tell? Run the results past the developers – if they complain, you’ve probably got a problem.

It’s important that the initial rule set is useful and achievable. Later on you can enhance the rules as you come into compliance and as the practice itself becomes ingrained, but in the beginning you need your static analysis tool to deliver high value. I ask myself: would I hold off shipping this code if I found this error? If the answer is yes, I leave the rule on; if not, it’s a poor candidate for the initial rule set. These are what some people call the “Severity 1” or high-priority rules.

The Wrong Code

Sometimes particular pieces of code are noisy – typically legacy code. The best practice is to let your legacy-code policy guide you. If your rule is that legacy code is only modified when there is a field-reported bug, then you don’t want to run static analysis on it. If your policy is that a file should come into compliance with static analysis rules whenever it’s touched, then use that. Otherwise skip it. Don’t bother testing any code that you don’t plan on fixing.
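If your policy is the “bring a file into compliance when it’s touched” variety, you can apply it mechanically by intersecting the report with the files changed in recent work. Here’s a sketch under the same assumptions as above (a hypothetical CSV report, this time with file, line, and rule_id columns) plus a git checkout; “origin/master” stands in for whatever your integration branch is.

    # Keep only violations in files that were actually touched on this branch.
    # Assumes the hypothetical CSV report (file, line, rule_id columns)
    # and a git checkout; "origin/master" is a placeholder integration branch.
    import csv
    import subprocess

    changed = set(
        subprocess.run(
            ["git", "diff", "--name-only", "origin/master...HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
    )

    with open("violations.csv", newline="") as report:
        for row in csv.DictReader(report):
            if row["file"] in changed:   # untouched legacy files are skipped
                print(f'{row["file"]}:{row["line"]}: {row["rule_id"]}')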

Checking the configuration

Once you have a set of rules you think is right, validate it. Run it on a piece of real code and look at the number and quality of the violations that come out. If the violations are questionable, or there are too many, go back to square one and weed them out. If they’re reasonable, pick another project, preferably from a different team or area, and run the test again. You’ll likely find a rule or two that you need to turn off when you do this. After that, you’ve got a pretty reasonable chance of having a good initial rule set.

Long term

In the long term, you should be adding to your rules. Resist the temptation to put all the rules in the initial set, or you’ll get run over with too much work. Besides, as developers get used to having a static analysis tool advise them, they start improving the quality of the code they produce, so it’s natural to ratchet things up from time to time.

When a project or team has either gotten close to 100% compliance with your rules or has essentially plateaued, it’s time to think about adding rules. The best choice is not to simply add rules that sound good, but rather to choose rules that relate to problems you’re actually experiencing. If you have performance issues, look at that category for rules to add, and so on.

Spending the time up front to make sure your rules are quiet and useful will go a long way toward ensuring your long-term success with static analysis.

[Disclaimer]
As a reminder, I work for Parasoft, a company that, among other things, makes static analysis tools. This is, however, my personal blog, and everything said here is my personal opinion and in no way the view or opinion of Parasoft or possibly anyone else.
[/Disclaimer]


The Ins and Outs of Opting and Privacy

There has been another rash of security and privacy issues at major internet companies. Actually, it’s more of an ongoing problem than a recent outbreak, and much of the trouble stems from a poor understanding of “opt in” versus “opt out” methodologies, along with some pretty poor business choices in that area.

[Photo: Keep Out © by Aaron Jacobs]

Google (GOOG) just announced that wireless network owners can now “opt out” of its Wi-Fi geolocation map database. Many have greeted this as good news and responsible behavior on Google’s part. Others, myself included, view it as a classic case of a business doing essentially nothing to change its behavior, and then promoting the non-effort as a valuable security benefit to its customers and the world at large. Google believes that once you’re using any of its services, you’ve essentially opted in to anything it wants to do. More on that in a minute.

Another consumer favorite, Facebook, appears to be tracking 90 days of everything its users (and, some suggest, even former users) browse on the web. This goes beyond just tracking what you’re doing inside Facebook itself. There are also allegations about whether they are storing profile information on people who have never even joined Facebook. This is another company that believes in opting you in to anything it wants and then letting you opt back out. They know that a lot of people aren’t savvy enough to understand, others are too lazy, and others will never even be aware of the issues.

Verizon (VZ) tracks everything you do with your phone, as do pretty much all the cell phone companies. Recently Verizon started allowing people to opt out. Josh Constine at TechCrunch notes that at least they don’t call it “Greater Choice” like Google does, but his take is that everyone is saying “Why can’t we be evil too?”

Strangely enough, AT&T (T) takes the opposite approach and lets people opt in. Pretty ironic for a company whose logo resembles the Death Star, but commendable.

The problem with “opt out” is that it only works well outside of privacy areas, in situations where you have an explicit relationship. For example, if I create a Google account, it will keep track of what I search unless I opt out. Most companies with web accounts work this way, for example with their email lists. This is a very reasonable method – you contacted me, so you don’t mind if I contact you. You normally see it as little “send me your junk email” boxes, and you can judge a company by whether those boxes are checked or unchecked by default on its sign-up forms.

The stakes for things like this are low – the worst case is that some web site sends me a bunch of junk email, and if they’re a responsible company, they’ll respond to my “stop that” request.

The difference with privacy issues is that the stakes are much higher, and the awareness is much lower. If someone decides that by using their website I agree to let them track my every move on the web, it’s unlikely that I’ll ever figure it out. And they may end up being privy to something I didn’t want to share with them. Opting people in by default to such things is unethical behavior at best. What’s the rational connection between me using your website and me giving you permission to spy on all my web activities? There is none of course.

In the case of the Google Wi-Fi mapping, they’re collecting your data whether or not you have a relationship with them. This is one step worse than the Facebook issue. In this case they’re literally driving the streets of the world looking for Wi-Fi networks (what we used to call wardriving) and adding you to a database. You may not even be aware they’re collecting your data; in fact, the odds that homeowners ARE aware are extremely small. And yet they’re using an opt-out methodology, just to cover their butts. Which essentially means they’re opting you in to something without your permission and without your awareness, and they justify it because their company motto is “Don’t be evil.”

The truth is that it’s a very questionable practice to collect someone’s information without their knowledge. If they want to build a database, they can simply switch to an opt-in method. Instead of my having to change my SSID because I happen to know they might drive by someday (which is inconvenient, since I’d have to reset every device using my network, including frequent guests’ devices), they could collect data only from owners who indicate willingness by changing the name. Instead of changing my SSID from “mynetwork” to “mynetwork_nomap” to opt out, I should be able to change it to “mynetwork_map” to opt in. Anyone who doesn’t want it doesn’t have to do anything, and anyone who is unaware will not be unintentionally opted in. Anything less is not only unfriendly to consumers, it’s just plain evil.
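To make the difference concrete, here’s a toy sketch of the two policies applied to SSID suffixes; “_nomap” is the real opt-out marker Google announced, while “_map” is my hypothetical opt-in marker, not something any vendor actually supports.

    # Toy illustration of the two policies for Wi-Fi mapping.
    # "_nomap" is the real opt-out suffix Google announced; "_map" is a
    # hypothetical opt-in marker used only for this example.

    def include_with_opt_out(ssid: str) -> bool:
        # Everyone is collected unless they took explicit action.
        return not ssid.endswith("_nomap")

    def include_with_opt_in(ssid: str) -> bool:
        # Nobody is collected unless they took explicit action.
        return ssid.endswith("_map")

    print(include_with_opt_out("mynetwork"))   # True  -- the unaware get collected
    print(include_with_opt_in("mynetwork"))    # False -- the unaware are left alone

Same scanner, same database; the only thing that changes is who carries the burden of knowing about it.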
