A big congratulations to Parasoft on the Jolt award for the Virtualize product. As an initial disclaimer and reminder, I do work for Parasoft. With that in mind, I’d like to say a few things.
First, Parasoft has a pretty good history of getting Jolt awards. It’s one of the things I like about the job – constantly pushing the envelope for software development tools. I was looking in the cabinet downstairs and saw the award from Advanced Systems magazine in 1994 for Insure++ (then called Insight++) and remembered with pride how it showed that our runtime error detection consistently found more bugs than other solutions, such as Purify. That was quite a few years ago, and now it’s Virtualize’s turn.
Virtualize is a really cool product if you haven’t taken a look at it. But let me tell you in Dr Dobb’s words:
You need to test against a mainframe, five databases, and invoke nine services from your intranet. But these are all part of your production environment, and you have to wait weeks, or months, before you can get your app into yet another massive, integrated release. Isn’t there a better way?
Parasoft’s Virtualize and its Environment Manager can be your answer. Virtualize is a test tool that builds on Parasoft’s 25 years of experience in automated software testing. With Virtualize, your development and test teams can create complete virtual environments for testing applications against data sources and data sinks of almost any kind. Developers can create virtual components and develop against these interfaces until the real components are available.
But feel free to click through and read the rest of the comments. As Gary Evans from Dr Dobb’s puts it, “The possibilities are stunning.” For more about the award, check out the Parasoft ALM blog.
I’m kind of surprised, or at least disappointed, that we are still talking about SQL injection breaches. About a year ago I wrote about SQL injection, and yet it’s still hitting major web sites. For example, Hackmageddon has an interesting chart of cyber attacks in June (not just SQL injection). But for me, writing about SQL injection feels a bit like writing about planking – it’s so 2000-and-late.
SQL injection? Not sexy, but it sure is effective.
The latest entry in the SQL Injection Hall of Shame is Yahoo (YHOO). I’m sure you’ve heard by now about the hundreds of thousands of user email addresses and plain-text passwords that were stolen from Yahoo using SQL injection. There is much to be learned from this event.
SQL injection attacks are completely preventable
Using static analysis flow-analysis tools will not catch all security errors
The only place we should be seeing SQL injection attacks today is in the classroom, as IT professionals are being trained to prevent such attacks
If SQL injection is getting through your system, you have a process/infrastructure problem. Or maybe a testing problem, but really that’s secondary. I’d go so far as to say that if any testing or penetration tools find a SQL injection, see answer number one: you have a process/infrastructure problem. Seriously, if you’re running flow analysis, for example, and it finds a SQL injection error, don’t just fix that one. Address the underlying root problem – i.e., you’re using unvalidated user input – and fix it everywhere, not just the instances that static flow analysis found.
As long as organizations play hunting games to find SQL injection, they will continue to be vulnerable and pay the price. If nothing else, learn this: SQL injection, like other input validation errors, is totally preventable. If you’re having problems, you’re doing it wrong.
Catching ALL the errors
Really this concept is just an extension of the previous one. Namely, you need to look at the root cause of SQL injection and put in place a development and testing methodology that requires 100% compliance. In this case, as in many other security issues, the root cause is the use of user input that hasn’t been validated. You need to code in such a way that you don’t leave any room for unvalidated input to be used, ever.
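To make that concrete, here’s a minimal sketch in Java of the two standard defenses: whitelist validation of input, and parameterized queries so user data is never concatenated into SQL. The class, method names, and pattern below are my own illustration, not code from the Yahoo incident or any particular product:

```java
import java.util.regex.Pattern;

public class InputValidation {
    // Whitelist: only letters, digits, and underscore; 3-20 characters.
    // Anything else is rejected before it gets anywhere near a query.
    private static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9_]{3,20}$");

    static boolean isValidUsername(String input) {
        return input != null && USERNAME.matcher(input).matches();
    }

    /*
     * With a java.sql.Connection in hand, never build the query by string
     * concatenation; bind the value with a PreparedStatement instead:
     *
     *   PreparedStatement ps = conn.prepareStatement(
     *       "SELECT email FROM users WHERE name = ?");
     *   ps.setString(1, username);  // the driver binds the value; no injection
     */

    public static void main(String[] args) {
        System.out.println(isValidUsername("alice_99"));      // a normal username passes
        System.out.println(isValidUsername("x' OR '1'='1"));  // injection payload is rejected
    }
}
```

The point isn’t this particular regex – it’s that validation is centralized and enforced everywhere, so an injection finding from a scanner becomes impossible rather than merely unlikely.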
Now the trendy thing to do these days is to use flow-analysis tools to find “bugs” in software. Disclaimer for those who haven’t figured it out yet: my day job is at Parasoft, which is a maker of static and flow analysis tools, among other things. The problem is that people frequently misunderstand what flow analysis can do, and then either don’t follow through with what it has told them, or have a false sense of security that they’ve done all they can.
Both behaviors are incorrect and will lead to vulnerabilities in your software. It’s important to understand that flow analysis isn’t a thorough methodology. It’s somewhat random, in that it attempts to determine possible paths through your code without human intervention. By its nature it will miss things. On the upside, it will tell you something you didn’t know about your code without any human effort.
The right response to finding security vulnerabilities with flow analysis isn’t to simply fix the ones found, but rather to ask yourself what the underlying cause is, then create a policy-driven programming approach that will negate the possibility of such conditions surviving your development and test efforts.
STOP storing plain-text passwords – immediately! If people forget their passwords, do a reset. They don’t need the old one back; they couldn’t remember it anyway. And make sure you’re salting the passwords and protecting them with a strong one-way hash. Perhaps the biggest mistake Yahoo made was storing usernames and passwords in plain text.
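For the storage side, a salted, deliberately slow one-way hash is the usual fix. Here’s a sketch using the JDK’s built-in PBKDF2 support (the class name and iteration count are my choices, and the PBKDF2WithHmacSHA256 algorithm requires Java 8 or later):

```java
import java.security.SecureRandom;
import java.security.spec.KeySpec;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordStore {
    // Generate a random per-user salt; store salt + hash, never the password.
    static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // PBKDF2 with a high iteration count makes brute-forcing stolen hashes slow.
    static String hash(char[] password, byte[] salt) throws Exception {
        KeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
        SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        return Base64.getEncoder().encodeToString(f.generateSecret(spec).getEncoded());
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = newSalt();
        String stored = hash("hunter2".toCharArray(), salt);
        // Verification: re-hash the login attempt with the stored salt and compare.
        System.out.println(stored.equals(hash("hunter2".toCharArray(), salt))); // true
        System.out.println(stored.equals(hash("wrong".toCharArray(), salt)));  // false
    }
}
```

With this scheme a SQL injection that dumps the user table yields salts and hashes, not credentials – exactly the difference that would have blunted the Yahoo leak.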
When doing security, it’s not only safest to deal with the root cause, it can save your bacon later on. Always promote best practices, and you should be able to limit liability to actual loss at worst.
I’d like to say that hopefully this will be the last time I write about SQL injection, but I’m willing to bet that it’ll happen again to a big company before the end of the year. In fact, I wouldn’t even be surprised if it happened again before the end of this summer. Sometimes we never learn. Feel free to sound off with your observations in the comments section, or you can reach me on Twitter, Facebook, Google+, etc.
The Microsoft Surface tablet finally surfaced yesterday. I was discussing it with my son, and he said the whole thing reminded him of the movie The Sixth Sense, with Microsoft playing the Bruce Willis part. They’re walking around, wondering what’s going on, trying to solve people’s problems, and don’t realize they’re dead. People’s computing problems are directly attributable to Microsoft not being alive and aware of how the world now works.
So I looked at the press event and the product video, and I’m trying to figure out why people are excited. OK, yes, there’s another tablet offering out there, and it’s from a company that in theory can go toe-to-toe with Apple in this space, or sell at a loss for a decade if not (see Xbox profits and market share). And it’s a Windows offering for those who’ve been wanting Windows on a tablet. That’s it. That’s the new stuff. The rest are things already out there. NPR has a summary of reviews. And of course there is the obligatory use of the term iPad killer. I’d actually be careful with that one – based on past experience, it seems a good way to end up with a tablet nobody wants. Remember the Xoom, PlayBook, and TouchPad? They were all iPad killers.
“Microsoft has always mimicked other technologies, from graphical interfaces to Web browsing to financial software. In some cases, it did improve upon what it copied, but in general the company’s approach worked because it was based on an artificial monopoly. It was important for us as users to work with common files and formats, so Windows continued to dominate and we adopted its browser and related software.”
So let’s look at what we know so far, and what we don’t know.
The thickness and weight are basically those of an iPad. The screen size is 10.6″ with HD resolution. This is good news if you’re primarily watching videos. Not bad, but not groundbreaking. Bad news if you’re trying to get work done. Note that the larger diagonal with a 16:9 aspect ratio results in less vertical space than the iPad’s 4:3. And look at the kickstand – this is a horizontal device by nature. Less vertical space is not a great idea if you’re trying to get work done, like email, creating documents, etc.
Screen Dimensions (estimated based on diagonal)
For screen resolution we don’t have any actual numbers yet, but people are suggesting that the “HD” for the RT version vs the “Full HD” for the Pro version means that it’s 720p (1280×720) and 1080p (1920×1080). iPad 2 resolution is roughly 720p albeit a different aspect ratio at 1024-by-768. The new iPad resolution is much higher at 2048-by-1536.
As to the processors, we know that the RT is an ARM and the Pro is an Intel Core i5. For both, the number of cores and the clock rate are unknown. How much RAM is also unknown.
Storage is 32GB through 128GB. Mostly standard fare for tablets, although the 128GB currently stands out. It may be matched by others in the not-too-distant future, but for now that’s a lot of space. Remember, though, that this is for a full Windows installation, so free space may be less than what you expect.
Price, performance, and battery life are all things we don’t know. Raw specs, currently unavailable, are going to be essentially meaningless. What will matter is how snappy it is. If apps start fast, switch fast, and are responsive, then it’s a plus. Anything less than that will be DOA. I’m presuming it’ll be fast enough. Microsoft said the price of the Windows 8 RT Surface will be comparable to the price of other consumer tablets, while the price of the beefier (and heftier) Windows 8 Pro Surface will rival the price of Ultrabooks. Nothing new here, unless you count a $1000+ tablet as something new. Maybe. Overpriced tablets are a dime a dozen. Remember the Motorola Xoom?
The Kickstand is only new in the sense that it’s built-in. Kickstands and a wide variety of other cases are widely available. Will the Kickstand work vertically like many iPad cases? It doesn’t seem to be that way. How will it work when you put it on your lap? It’s a great idea for a table-top, but wouldn’t work the way I frequently use my iPad.
The keyboard again isn’t a new device. There is a huge selection for tablets, from cases that convert your machine into basically a notebook all the way to standard Bluetooth keyboards. Some are good, some are crappy. Logitech, for example, makes an extremely thin one built into an iPad case. Bundling it is a definite plus, but nothing new. Some will argue that the keyboard uses the latest and greatest technology, but it’s still just a keyboard and therefore not revolutionary. Kinect-based keyboardless input – now that would be a reason to buy!
Some are calling the keyboard the killer secret weapon. It’s funny – while there are many keyboard options available for the iPad, how many do you see in the wild? Very few. The point of a tablet is often to leave the stuff behind. Of course, if your OS is not designed for a tablet, but requires a keyboard and mouse, then you need the thing.
The cover – wait for it – comes in colors and has magnets! Wow, how 2011! Has anyone at Microsoft seen the old iPad 2 videos? OK, it has an accelerometer too. You can call it evolutionary, but again not revolutionary.
Applications – you can run your Windows applications, at least on the Pro. This will undoubtedly be a reason for many to try the Surface out. The question will ultimately be whether the experience is good enough to last. Remember, Microsoft built tablets that could run Windows applications over 10 years ago, and pretty much no one wanted them. A 10-year-old capability is nothing new. The tablet touch experience is fundamentally different from a PC’s; it’s based on a UI that doesn’t need keyboards and mice. I’m not sure Microsoft understands that.
And that leaves me with their new marketing video. It reminds me of the Droid campaign in that it’s long on sizzle and short on function. It shows lots of pretty pictures that relate to how the device is built, but nothing about what it can do. Looks to me like they’re targeting Android fans. There are those who criticize Apple’s advertising because it shows apps more than the devices, but isn’t that the point? It’s not about the device, it’s about doing something, whether it’s watching a movie, writing the great American novel, chatting with your grandchildren, or killing time with Angry Birds. How well it will do those things is still the great unknown.
[Update] Forgot about the stylus, inevitably someone will complain so here it is. The Pro version has a stylus – can you say Palm? [/Update]
[Update] How did I miss this? Where is the internet connectivity? WiFi only?! I want my LTE! [/Update]
When you run a Java program you don’t execute it the way you do an executable written in languages like C/C++. The program must run inside the Java virtual machine, or JVM, which interprets the compiled Java byte code and translates it to the local native instruction set. For example, to execute a Java program called HelloWorld you would type:
java HelloWorld
The amount of memory available to the application is known as the Java heap size and is controlled by passing the -Xms and -Xmx flags to the virtual machine. Xms is the starting heap size; Xmx is the maximum heap size. On machines with resource constraints you can set a small starting size and a max that is large enough to do what you want, like “-Xms128m -Xmx512m”. This will free up more memory for other applications, but it does impact performance, as I’ll discuss in just a moment.
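If you want to see what the flags actually gave you, a program can ask the Runtime class at runtime. A quick sketch (the class name is mine):

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects the -Xmx ceiling; totalMemory() is the heap
        // currently reserved (starts near -Xms and grows toward the max);
        // freeMemory() is the unused portion of the current heap.
        System.out.println("max   : " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("total : " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("free  : " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}
```

Run it as java -Xms128m -Xmx512m HeapInfo and the max figure should come back close to 512MB – often slightly less, since the JVM reserves some of the heap internally.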
If you don’t pass the flags, you get whatever the default settings are, typically something like 32MB for starting and 128MB for max – but it depends on your OS and the particular implementation of Java you’ve installed. For example, if you want to set the max heap size to 256 MB you can type:
java -Xmx256m HelloWorld
However, you can’t just put any number you want there and have it work. 32-bit architectures limit the amount of memory any single process can address to 4GB. So why does the JVM limit the maximum heap size to something much less than 2GB?
When you try to run your JVM with a -Xmx2g flag, you’ll get the following error:
Error occurred during initialization of VM
Could not reserve enough space for object heap
This is a limitation of a 32-bit OS such as Windows XP. 32-bit processes can only use a maximum of 4GB of memory address space. Windows further splits that in half, allocating 2GB to the kernel and 2GB to the application. The reason you cannot hit the 2GB limit within the VM is that there is memory overhead that the VM and OS use for the process, so you end up with a bit less than the actual 2GB limit. The solution is to move to a 64-bit machine running a 64-bit OS and a 64-bit JVM.
In tests I have found the overall max you can use with a 32-bit JVM is OS/JVM dependent. Typically on Windows it’s about 1696MB, Linux is about 1696MB, and on Solaris it’s about 3000MB.
I’ve gotten the chance to work with a large variety of Java applications on a wide variety of machines as part of my day job at Parasoft, and I’ve found some configurations that will make things run better.
On a server machine I recommend you set the min and the max to the same (upper) value. This has a couple of benefits. First, it’s generally faster because you don’t lose any time to the JVM resizing. In addition, in cases where you aren’t completely sure that the memory will be available, this lets you know immediately that the program doesn’t have enough memory, because it will refuse to even start. You can test this by starting enough applications that you’re down to less than 1GB of free memory, then typing:
java -Xms1024m -Xmx1024m HelloWorld
and it will fail again with a heap error because it is unable to allocate the 1GB of memory we told it we needed. This is useful with servers because we avoid confusing out-of-memory error messages when we thought we had enough but didn’t.
For performance, the max memory setting should be at least 500MB less than your total physical memory on Windows, and 200MB less on Linux/Unix. For example, if your Windows machine has 1GB of RAM, the max setting should be set to 512MB.
An additional note on contiguous memory. The JVM handles its memory as a single contiguous block. This means that if you give the JVM different values for initial (-Xms) and max (-Xmx), then the size of the JVM heap may change dynamically while running. However, this has some frequently unexpected ramifications.
The first is performance. If you have -Xms256m -Xmx1024m, then your JVM will start with an initial 256MB heap. If you’re running along happily and then you need something more like 300MB, here’s what happens:
First, the JVM allocates a completely new block of 300MB.
Then there is a transfer (a memcpy) of the existing memory.
Then the original 256MB block is released.
As you can imagine, this can be an expensive operation. You may have expected that it would simply find an available block of 44MB and allocate it, but because the JVM needs contiguous memory, there is no other way to handle it. This means the expansion tends to be slow. That’s fine on machines with limited memory, but on large machines you can achieve much better performance by sizing the heap up front.
A second effect is that you can get unexpected crashes if the system is unable to make the allocation for the new block. For example, let’s assume you have a machine with 1GB of physical memory and 1GB of swap. You’re running “normally” using about 600MB of memory and want to start a Java app that uses a lot of memory. You give the JVM -Xms512m -Xmx1024m and it starts fine.
Obviously there is a bit of swapping going on, but you’re running and happy. Then at some point your app starts needing more memory, and it wants to use the full 1024MB that you’ve told the JVM is available. To review: you are currently using 600MB (other stuff) + 512MB (the Java app) out of 2048MB (total memory). Now, to grow the JVM heap from 512MB to 1024MB, you would think you only need an additional 512MB, but that is incorrect.
What actually happens is that a new block of 1024MB must be allocated, AFTER which the original memory is freed. When the new block is larger than what the OS has available, the JVM is typically unable to handle it cleanly and crashes. This is a perfect example of how a JVM can crash even though you thought you had enough memory. In this case, setting the parameters to -Xms1024m -Xmx1024m on the same machine would have succeeded, even though you might think that less likely than with the smaller initial allocation.
Because of this, I normally recommend that if you have performance issues or unexplained crashes, you try to figure out how much memory you really need, and set Xms and Xmx to the same size. For typical Java apps this can be 256MB or 512MB, or even less. For large, intensive apps, 1024MB is a better choice.
If you’ve got other tips or more information about this, let me know in the comments or on Twitter.