(Solution Strategist, EMC)
Virtualization did not begin with PCs. Remember the mainframe? How about VLANs?
Virtualization changes your root of trust. Root of trust begins with physically separate, 'air-gapped' systems - readily tangible to more than one of your senses. Once you are confronted with a layer of virtualization, what are your metrics for trusting the hosting platform?
The tools are largely the same; the training is for IT to think in an additional dimension. As it is, IT spend should be aligned with business priorities and change with equal agility. That is rather tough if a significant portion of your IT spend is on assets whose business value has been maximized long before they can be fully depreciated.
(Owner, Antarctic Technologies)
I think before getting stuck into details, we need to be clear about just what virtualization means.
Virtualization means you have n guests running on one physical computer, with the virtualization layer sitting on top of a host OS (unless you use a dedicated hypervisor such as Xen, which doesn't require an operating system underneath it).
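To make the "n guests, one machine" model concrete, here is a minimal sketch that lists the guests sharing a single host. It assumes a Linux host running a libvirt-managed hypervisor (e.g. KVM or Xen) with the libvirt Python bindings installed; none of that tooling is part of the discussion above, it is just one common way to see the layering in practice.

```python
import libvirt  # pip install libvirt-python; assumes libvirtd is running locally

# Connect read-only to the local hypervisor; listing guests needs no more.
conn = libvirt.openReadOnly("qemu:///system")

# Each domain is a logically separate system sharing the same hardware.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "stopped"
    print(f"guest: {dom.name():20s} {state}")

conn.close()
```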
So... Looking first at the dedicated hypervisor, we have multiple guests, all logically separate systems, running on a single piece of hardware.
What are the main threats?
1) Direct guest-to-guest interaction - reading/writing another guest's memory, intercepting its I/O, etc.
2) Guest-to-host interaction - a guest breaks out to the host level, where it can hijack the host to perform privileged operations.
3) Virtualization of the host itself from a guest, hijacking the entire system almost entirely transparently.
Threats 2 and 3 would be the more serious breaches, as the attacker can modify guest behavior, snoop on guests, and so on, completely transparently; the guests will very likely never detect that they or the host have been compromised.
In the case of point 1, direct guest-to-guest access would only be possible because of some bug in the virtualization software, and depending on the bug, such interaction may or may not be easy to detect.
In terms of *how* this would be achieved, that is largely a matter of detail; in each case the attack is simply a means to an end. The VM being attacked would need some method of checking the integrity of its operating environment, and that of the host. This creates a paradox: if the host may have been compromised, the guest cannot reliably check anything, because by definition it can be manipulated into thinking everything is fine.
What is required is some form of additional, yet separate, monitoring hardware that can interrogate a system and watch for anomalies - additional processor usage, for example - that are indicative of a problem. Looking from the outside in, it may be possible to see rogue code intercepting the legitimate hypervisor and to raise an alert.
At the present time, no such capability exists. As usual, it is a bad idea to have a system that can be manipulated in ways not considered at design time do its own self-monitoring, using the very methods that are open to attack.
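As a crude illustration of an in-band check - and of exactly why it falls foul of the paradox above - here is a Linux-specific, purely illustrative sketch that looks for the CPUID "hypervisor present" bit the kernel exposes in /proc/cpuinfo. On a box believed to be bare metal, an unexpected positive would be suspicious; but a malicious hypervisor controls what CPUID returns to the guest, so it can simply hide the bit.

```python
# A crude, in-band virtualization probe (Linux, x86). It reads the CPU flags
# the kernel exposes; a hostile hypervisor can falsify all of this, which is
# exactly the self-monitoring paradox described above.

def hypervisor_indicators():
    indicators = []

    # The 'hypervisor' flag mirrors CPUID leaf 1, ECX bit 31.
    with open("/proc/cpuinfo") as f:
        if any("hypervisor" in line.split()
               for line in f if line.startswith("flags")):
            indicators.append("CPUID hypervisor bit set")

    # Present on Xen (and some other) guests; absent on bare metal.
    try:
        with open("/sys/hypervisor/type") as f:
            indicators.append(f"/sys/hypervisor/type = {f.read().strip()}")
    except FileNotFoundError:
        pass

    return indicators

if __name__ == "__main__":
    found = hypervisor_indicators()
    print("\n".join(found) if found else "no virtualization indicators found")
```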
Is training needed? Definitely - not least to make sysadmins aware that the system could be compromised in ways that are very hard to detect. They need a deep understanding of how virtualized systems work and how they can be attacked. They also need tools and methods they can run to periodically check the integrity of the system, separate from that system itself, pending new hardware designs that incorporate such checks.
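One sketch of what such a separate, periodic check could look like today: hash the hypervisor's critical files from a different machine over SSH and compare against a baseline kept off the monitored host. The host name, file paths, and baseline file below are hypothetical placeholders, and note the caveat in the comments - this still trusts the monitored host's own tooling, which is why dedicated monitoring hardware would ultimately be needed.

```python
import json
import subprocess

# Hypothetical inputs: a known-good baseline captured at install time and
# kept OFF the monitored host, plus the files considered critical.
BASELINE_FILE = "baseline.json"   # e.g. {"/boot/xen.gz": "ab12...", ...}
HOST = "admin@hypervisor01.example.com"
CRITICAL_FILES = ["/boot/xen.gz", "/usr/sbin/xenstored"]

def remote_sha256(host, path):
    """Hash a file on the monitored host from the outside, via ssh.

    Caveat: a fully compromised host controls its own sha256sum and kernel,
    so it can still lie; this only raises the bar, it is not a guarantee.
    """
    out = subprocess.run(
        ["ssh", host, "sha256sum", path],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()[0]

def main():
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)

    for path in CRITICAL_FILES:
        current = remote_sha256(HOST, path)
        status = "OK" if current == baseline.get(path) else "MISMATCH"
        print(f"{status:8s} {path}")

if __name__ == "__main__":
    main()
```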
==== Do organizations or IT departments have to rethink how they architect and deploy solutions for the business? ====
This depends on what the system is. If it is a sensitive system (e.g. one that processes credit cards), then it may be wise to run it stand-alone and not in a virtual environment. That leaves only the more traditional attack vectors, for which tools to deal with the threats are widely available.
We have a problem at the current time in that the technology is ahead of itself in terms of security. As always, technology is invented and deployed before it is properly understood and, where necessary, changed to work better. Of course, once it is in use, it is very hard, if not impossible, to change the way it operates, as doing so could easily break something.
IMHO people should only deploy virtualization for systems/processes that aren't critical. If the system needs to be secure, then virtualization is not an option, as it creates too many unknowns.
(Blogger, Freelance Writer)
Virtualization is one of the reasons why security has become more complicated, but there are others. Traditionally, software and processing tasks resided on specific devices, so it was relatively straightforward to determine what the IT department needed to do to monitor access and possible security breaches. With virtualization (not only of servers but also of networks, storage systems, and applications), as well as cloud computing (a similar concept, but with transactions moved off site) and mobility (data stored on just about every imaginable type of device), there is no longer a clear monitoring point. In effect, the traditional firewall has been shredded. Without such a point, it becomes difficult for IT to determine what is happening with its systems.
To make the matter even more vexing, the sophistication of the attacks has been increasing. Organized crime has determined it is much more lucrative, and less demanding, to break into systems virtually than the old-fashioned, manual way. Because the security environment is so dynamic, it requires constant monitoring and regular updating. My sense is that most organizations take a look at their systems when they do a major update. How thorough that evaluation is remains unclear, because their focus is often on the potential benefits of the new software and not on the possible security implications. They then maintain the status quo until a major breach occurs. That is a bit dangerous, but, as Robin noted, security technology tends to lag behind new capabilities, so a better alternative is often not available.