Paperjam.lu

Guillaume Field (Dell): “Like the mobile phone, we will wonder how we lived without virtualisation.” (Photo: Etienne Delorme)

“Virtualisation is at an inflection point today,” says Guillaume Field of Dell. Over the last five years virtualisation has been embraced only by early adopters and organisations that could see a significant cost benefit – indeed, estimates suggest just 10% of workloads today are virtualised. But with new entrants into the market such as Microsoft introducing Hyper-V, Citrix’s acquisition of XenSource, and with VMware continuing to be the market leader, there is going to be a lot more choice at different price points. This, says Field, will allow small and medium-sized businesses to deploy virtualisation at no or minimal cost. “So we will go from 10% to 20, 30 or 40% maybe within 24 or 36 months. And once that whole system swaps over, the standard will be virtualised and it will be the exception that stays in the physical. Software providers and developers will just code for virtualisation. And like the mobile phone, we will wonder how we lived without it.”

Field was one of two speakers at a recent AmCham ComIt event on virtualisation hosted at Siemens Luxembourg. He was joined on the platform by Frédéric Bouvet from DS Improve – a Brussels-based professional IT services company – who focused on the performance, economic and Green IT benefits of virtualisation. For instance, studies indicate that power savings of around 45% can be achieved through the consolidation that virtualisation provides. Furthermore, the speakers explained, virtualisation allows for much better planning of server capacity requirements, which means IT budgets can be submitted well in advance.

But as virtualisation makes strides in terms of technology and accessibility, Field admits there are still some challenges for customers, most notably how to integrate all the different tools from different vendors into one overarching management architecture. “That requires a more holistic view of customers’ systems management platforms. They need to focus on getting the same environment for their physical and virtual environments and should avoid vendor lock-in.” He explains that Dell’s systems management strategy is not to replace customers’ products but to integrate into them. An example is Dell OpenManage, which can be used by itself or be integrated into an overarching systems management framework. “So even if a customer has invested heavily in Tivoli or Microsoft System Center or a multitude of others, even from other vendors, they still use that one ‘single pane of glass’ and we just report our information into it, and when you need to action it, it just opens when required.”

Field says the future will also see much better integration of the server, network and storage virtualisation stacks. Dell’s EqualLogic product range is now integrating VMware’s disaster recovery elements, specifically its Site Recovery Manager (SRM) product. “In about five mouse clicks we put one of our arrays in a primary data centre and another in a secondary data centre, so that in effect you have live replication. So if your primary data centre fails your virtual machines are preserved – the disaster recovery is very automated. In the past it would have been complex to architect and complex to deploy, and therefore very expensive and not entirely reliable.”

The word “fluidity” crops up frequently when talking about virtualisation. Field explains that live migration, the process of safely moving running workloads from one physical host to another, is key to virtualisation’s success. “Once you accept that it works, you are able to do so much with it, whether it’s sizing on demand, provisioning on demand or shutting down servers when they are not needed. This allows fluidity that end users don’t even notice, because there’s no intervention.”
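By way of illustration, the sketch below shows how such a live migration can be triggered programmatically rather than from a management console. It is a minimal example, not taken from the Dell presentation: it assumes a VMware vSphere environment managed through vCenter and the open-source pyVmomi Python SDK, and the server addresses, credentials, virtual machine and host names are placeholders.

```python
# Minimal sketch: ask vCenter to live-migrate a running VM to another host.
# Assumes a vSphere environment and the pyVmomi SDK; all names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Walk the vCenter inventory and return the first object with this name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], recursive=True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "app-server-01")      # workload to move
    host = find_by_name(content, vim.HostSystem, "esxi-02.example.com")  # destination host

    # The guest keeps running throughout the move, which is what makes
    # the migration invisible to end users.
    task = vm.MigrateVM_Task(
        pool=None, host=host,
        priority=vim.VirtualMachine.MovePriority.defaultPriority)
    print("Migration task started:", task.info.key)
finally:
    Disconnect(si)
```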

Another challenge comes with shrinking environments. Adding capacity is no problem, says Field, but when the opposite applies and users want to scale down to 30% capacity, it requires turning off servers. The problem with that is that a lot of hardware failures occur during power-on/power-off cycles. “So we are working on having better sleep states and better integration of the hypervisor layers – so it is a combination of software and hardware engineering and I think we will see a lot more of that.”