Virtualisation can be a hard topic to wrap your head around, with its own arcane language. If you’re one of the many for whom the word ‘hypervisor’ conjures up images of a souped-up military helmet, let’s start with a recap.
Virtualisation, in its most basic form, involves running multiple independent operating systems on a single physical server. This is possible because a physical server typically uses only a fraction of the resources on offer, like processing capacity, memory and storage. The rest goes to waste.
The hypervisor is a piece of software designed to solve that problem, allocating resources among different operating systems, all running simultaneously but independently, so each of them has what it needs. The resulting efficiencies can be dramatic: ten physical servers can potentially run hundreds of virtual servers.
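The consolidation arithmetic behind that claim can be sketched as a toy packing exercise. This is purely illustrative – the capacities and per-VM demands below are hypothetical, and a real hypervisor schedules resources far more dynamically than a one-off first-fit pass:

```python
# Toy model of server consolidation: pack virtual servers onto physical
# hosts using first-fit. Capacities and demands are hypothetical, in
# "percent of one host's CPU" units.

def pack_vms(vm_demands, host_capacity):
    """Return a list of hosts, each a list of the VM demands placed on it."""
    hosts = []
    for demand in vm_demands:
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)
                break
        else:
            hosts.append([demand])  # no host had room: bring one more online

    return hosts

# 200 virtual servers, each averaging ~5% of a host's capacity
vms = [5] * 200
hosts = pack_vms(vms, host_capacity=100)
print(len(hosts))  # 10 -- two hundred lightly-loaded VMs on ten hosts
```

The point of the sketch is simply that lightly-loaded workloads, which would each otherwise occupy a whole physical box, stack up on far fewer machines.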
There’s a lot more to it, of course, but that will do for starters. What’s really interesting is not the mechanics but the implications. Once you stop thinking about IT resources in terms of physical equipment, all sorts of possibilities open up. Cloud computing, for example. If your server is virtual, it doesn’t matter whether it’s physically located downstairs, across the road, in another city or halfway across the world.
The shift in thinking is profound. “Strategically,” says Gartner, “server virtualisation is an IT modernisation catalyst that will change how IT is acquired, consumed, managed, sourced and paid for.”
That’s where things are headed in the long term. But for the moment, most organisations are still looking for savings and efficiencies, and they don’t have to go far to find them.
“We have one client who saved R700 000 over three years, and that’s not an unusual story,” says BCX business manager for Software and Consulting Services Grant Mufford. “The sources of saving are multiple. It’s not just using less hardware, it’s also less power and less storage because of new technologies.”
Then there’s the additional flexibility that comes with virtualisation.
“The agility is one source of the return on investment,” says Andrea Lodolo, CTO of CA Southern Africa. “Traditionally, if you’re a large organisation and you need new servers, the business unit puts in a requisition and it takes weeks or months to order and build the hardware. With virtualisation, if you have your processes and management systems in place, you can do it in hours or days. That means you can get your business applications out there a lot faster, which means a faster return.”
Another major benefit is high system availability and greatly simplified disaster recovery.
“We have the technology now to run two virtual machines in lockstep, so you can have one production machine and one backup,” says Mufford. “If anything happens to one, the other takes over without missing a beat.”
Next stop, the desktop
These are the routinely cited benefits of first-wave virtualisation. But once you’ve taken that first step, a lot of new paths open up. One of the most enticing for many organisations at the moment is desktop virtualisation.
“The cost of maintaining PCs on every desktop is astronomical,” says Cary de Sousa, enterprise relationship manager at Citrix. “It consumes up to 60 percent of the IT budget in many organisations. So as companies approach their PC refresh cycles, they’re starting to ask, ‘Why are we doing this again?’ It’s a good opportunity to choose a different path. They can stop looking at the desktop as a device and rather provide it as a service from the data centre.”
Virtual desktops can run on anything from a thin client terminal to an iPad, making it possible to offer users systems that look and behave exactly like their current PCs.
For the desk-bound, a thin client device may be all that’s needed – and with no moving parts and an average lifespan of five to seven years, it needs next to no maintenance.
Mobile workers need independent storage and processing power for the times they’re not connected to the network, but they too can use virtual desktops.
“People have all sorts of personal mobile devices now and they still want access to the office even if it’s just to check their e-mail,” says Dimension Data principal architect Rudy Gopaul. “It’s an IT challenge because of all the security, compliance and management risks it introduces. But if you offer a virtual desktop, you can split off the corporate from the personal stuff and keep it secure, with all your policies and governance in place.”
“A lot of companies are moving to a ‘bring your own computer’ model,” confirms De Sousa. “We give you a spec, an allowance and access to a secure corporate desktop, but the computer is yours. You can let the kids play on it, download whatever they want – and all that goes into the background when you connect to the corporate desktop. This allows innovation around different use scenarios. We now offer a secure, personalised corporate desktop to any user, anywhere, on any network, using any device.”
Despite the obvious advantages of desktop virtualisation, says Sean Owen-Jones, CEO of NCSolutions, few traditional hardware suppliers will suggest it to their clients. “It’s very disruptive for people who rely on maintenance contracts for their revenue,” he says. “But for the customer, it makes a lot of sense. Virtualisation can extend the life of your PCs by a couple of years, or you can replace them with thin clients that are far more energy-efficient. It also makes the management of those machines more efficient. You get better security, easier upgrades and patch management and less theft.”
Critically, says Owen-Jones, the success of desktop virtualisation hinges on user experience. “The experience has got to be the same or better if it’s going to be successful,” he says. “You must have a stable network. Bandwidth is no longer such an issue, partly because there’s more available and partly because the protocols shipping the data are becoming more streamlined. We have clients running full-motion video on dumb terminals with no problem.”
Taking voice virtual
Another set of opportunities that opens up once organisations start taking their IT virtual is telephony. If you’ve dematerialised most of your servers, why not do the same to your PBX?
“Telephony and the data centre have always been separate domains run with different skill sets,” says Hannes van der Merwe, Itec Distribution’s product manager for Mitel. “But IP (internet protocol) telephony has now developed to the point where you can put it all on the same virtual server infrastructure as everything else.”
All the usual benefits of virtualisation apply: quick upgrades, instant failover to backup versions if anything goes wrong, more efficient use of financial and human resources, and so on.
There are two ways to convert to a virtual PBX or soft switch. The simplest is to install a border gateway, a kind of modem that translates traffic from ISDN voice signals to IP. The other is to abandon traditional telephony links entirely, transferring all your voice traffic (suitably prioritised) to the same pipes you use for all your other data.
“A SIP (session initiation protocol) trunk from your service provider direct to a soft PBX on your IP network is becoming the norm,” says Van der Merwe. “It’s easier to manage and more cost-effective.”
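To make the jargon concrete, here is roughly what traffic on such a trunk looks like on the wire: a minimal SIP OPTIONS request, the keep-alive message many trunks exchange. The hostnames, port and tags below are hypothetical placeholders, not any particular provider's values:

```python
# A minimal SIP OPTIONS request, as a soft PBX might send over a trunk
# to confirm the link is alive. All hostnames and identifiers here are
# hypothetical examples.

def sip_options(provider_host, pbx_host, call_id):
    lines = [
        f"OPTIONS sip:{provider_host} SIP/2.0",
        f"Via: SIP/2.0/UDP {pbx_host}:5060;branch=z9hG4bK{call_id}",
        f"From: <sip:pbx@{pbx_host}>;tag=trunk1",
        f"To: <sip:{provider_host}>",
        f"Call-ID: {call_id}@{pbx_host}",
        "CSeq: 1 OPTIONS",
        "Max-Forwards: 70",
        "Content-Length: 0",
    ]
    # SIP, like HTTP, terminates each header with CRLF and the message
    # with a blank line
    return "\r\n".join(lines) + "\r\n\r\n"

msg = sip_options("trunk.provider.example", "pbx.corp.example", "abc123")
print(msg.splitlines()[0])  # OPTIONS sip:trunk.provider.example SIP/2.0
```

The practical point is that this is ordinary text over the IP network – which is why a soft PBX can live on the same virtual infrastructure as every other server.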
Telephony is not typically for virtualisation virgins.
“The customer who will go for virtual telephony is the one who’s already learned the ropes of virtual infrastructure,” says Van der Merwe. “They have the team and the skills, they have the existing infrastructure and they have some experience in managing it. Their priority is not cost, but stability and management.”
Avoiding the pitfalls
Stability and management, it turns out, are precisely the areas in which those who’ve virtualised are likely to encounter problems. Things get so easy that people forget to manage them carefully, and lose control.
“People dive into virtualisation because it’s so easy,” says CA Southern Africa’s Lodolo. “It’s so simple to provision new servers, you start turning them on for anybody who asks and before you know what’s happened, you have hundreds and you don’t know where they’re physically located. That’s server sprawl.”
Server sprawl, in turn, can lead to server stall. “People panic over losing control so the pendulum swings back and they give up on virtual environments altogether,” says Lodolo. “Actually, what they need is the right management tools. And those have to be automated tools. You can’t possibly keep track of hundreds of servers and thousands of applications manually.”
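The automated checks Lodolo has in mind can be as simple as regularly sweeping the VM inventory for machines nobody has claimed and machines idling near zero utilisation. A sketch, with a hypothetical inventory:

```python
# A sketch of an automated sprawl audit: flag VMs with no registered
# owner, and VMs idling below a utilisation floor. The inventory data
# and the 5% threshold are hypothetical.

def audit(vms, idle_threshold=0.05):
    unowned = [v["name"] for v in vms if not v.get("owner")]
    idle = [v["name"] for v in vms if v["cpu_avg"] < idle_threshold]
    return unowned, idle

inventory = [
    {"name": "web-01",  "owner": "e-commerce", "cpu_avg": 0.40},
    {"name": "test-17", "owner": None,         "cpu_avg": 0.01},
    {"name": "db-02",   "owner": "finance",    "cpu_avg": 0.02},
]

unowned, idle = audit(inventory)
print(unowned)  # ['test-17'] -- sprawl candidates with no owner
print(idle)     # ['test-17', 'db-02'] -- near-zero utilisation
```

At the scale Lodolo describes – hundreds of servers and thousands of applications – this kind of check has to run automatically, not from a spreadsheet.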
“Any environment that’s not managed correctly will become more complex,” agrees Matthew Lee, channel manager of Datacentre Solutions at Novell SA. “If that happens, all the money you’ve saved on hardware and licensing could just get lost on management. Virtual environments need to be automated and standardised if you’re really going to see the economies of scale that yield a good return on the investment.”
Lee says that in a well-managed virtual environment, organisations should be able to work on ratios of around 15 virtual machines to each physical one.
“If you really embrace the software the vendors are providing, you can manage many more servers with the same human resources. But fear of pushing the envelope is keeping many people dabbling around ratios of five or six to one.”
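The gap between those two ratios is easy to put in numbers. For a hypothetical estate of 300 virtual machines:

```python
# The consolidation ratios Lee cites, as plain arithmetic: how many
# physical hosts does a 300-VM estate need at each ratio?

def hosts_needed(vm_count, ratio):
    return -(-vm_count // ratio)  # ceiling division

print(hosts_needed(300, 5))   # 60 hosts at a cautious 5:1
print(hosts_needed(300, 15))  # 20 hosts at a well-managed 15:1
```

Tripling the ratio cuts the physical estate – and its power, cooling and maintenance – to a third.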
The skills required to run IT infrastructure are shifting, says Owen Cole, technical director of F5 Networks. “The skills are no longer so much technical as about monitoring, control and compliance. You can get the detailed map of where everything is and what it’s doing from a console; the challenge is to manage it all. A few years ago, IT managers were afraid that virtualisation was going to automate them out of their jobs, but most people have adapted to deploying a skill set that’s more strategic and business-oriented.”
Where to next?
Take it a few steps further, and a virtual environment becomes a digital toybox offering radically flexible resources.
“We’re starting to see applications that run directly on the hypervisor, bypassing the operating system,” says Lodolo. “An operating system takes up lots of CPU cycles because it does so much other stuff; in a virtual environment, you start to question the need for it.”
He envisages a future in which applications will come packaged with just enough operating system to talk to the drivers and hardware – the hypervisor will do the rest.
Cole expects applications to become increasingly customised to individual users and their contexts. Intelligent application delivery architectures, he says, will serve up functionality according to who the user is, where they are, what other applications they’re using and what’s happening on the network.
For example, he says, “a device with this level of intelligence can measure the latency between each user and the data centre, and compress the data if that latency exceeds a certain threshold.”
Why is that useful?
“If I have an application that serves a thousand users, half of them in the office and half outside on the road, I don’t want to compress all the data because it puts a lot of strain on the infrastructure and is pointless for users inside the building. But for users on the road, it makes a big difference. The future is all about being able to apply business logic and intelligence dynamically, on an individual user basis.”
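The logic Cole describes can be sketched in a few lines: measure each user's round-trip latency and compress the response only when it crosses a threshold. The 50 ms cut-off and the payloads below are hypothetical, and real delivery controllers implement this in hardware or at the proxy layer:

```python
# A sketch of latency-gated compression: shrink the payload only for
# users whose measured latency suggests they're out on the road.
# The 50 ms threshold is a hypothetical policy choice.
import zlib

LATENCY_THRESHOLD_MS = 50

def prepare_response(payload: bytes, user_latency_ms: float):
    if user_latency_ms > LATENCY_THRESHOLD_MS:
        return zlib.compress(payload), True   # remote user: shrink it
    return payload, False                     # LAN user: skip the CPU cost

body = b"report data " * 1000

lan_body, lan_compressed = prepare_response(body, user_latency_ms=2)
wan_body, wan_compressed = prepare_response(body, user_latency_ms=180)

print(lan_compressed, wan_compressed)  # False True
print(len(wan_body) < len(lan_body))   # True -- compressed copy is smaller
```

Per-user decisions like this are exactly the "business logic applied dynamically" that Cole is pointing to: the office user gets the raw bytes, the road warrior gets the compressed ones, from the same application.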