I/O virtualization (IOV) is almost exclusively a PCI Express issue: it ties processing clusters to PCI Express end nodes. A system can employ standard hosts and end nodes or IOV-capable ones, with the latter providing enhanced functionality.
With I/O virtualization, a single physical end-node device can appear as multiple logical devices. This is useful when multiple hosts share a device, but it's even more important when the hosts are running virtual machines (VMs) and each VM wants to access devices directly. For example, a single Ethernet device could handle all the logical hosts in a rack.
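The one-physical-to-many-logical mapping can be pictured as a toy model. This is an illustrative sketch only; the class and field names (`PhysicalFunction`, `VirtualFunction`, `assign_vf`) are invented here, borrowing SR-IOV's physical-function/virtual-function terminology, and are not a real driver API.

```python
# Toy model of IOV: one physical device exposes many logical devices,
# each assigned to a different VM. Names are illustrative, not a real API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualFunction:
    """A lightweight logical device presented to one VM."""
    vf_index: int
    owner_vm: str

@dataclass
class PhysicalFunction:
    """The single physical end-node device, e.g. one Ethernet controller."""
    name: str
    total_vfs: int
    vfs: List[VirtualFunction] = field(default_factory=list)

    def assign_vf(self, vm: str) -> VirtualFunction:
        # Each VM gets its own virtual function and talks to the
        # hardware directly, with no host-to-host forwarding.
        if len(self.vfs) >= self.total_vfs:
            raise RuntimeError("no free virtual functions")
        vf = VirtualFunction(vf_index=len(self.vfs), owner_vm=vm)
        self.vfs.append(vf)
        return vf

nic = PhysicalFunction(name="eth-pf0", total_vfs=64)
for vm in ("vm0", "vm1", "vm2"):
    nic.assign_vf(vm)
print([vf.owner_vm for vf in nic.vfs])
```

Each VM ends up with what looks like its own independent device, while only one piece of hardware exists behind them.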
Without IOV, the device drivers would have to recognize that multiple hosts are vying for a single device. Otherwise, a single host would control the device, and every other host's driver would have to route its requests through that controlling host.
Functionally, the approaches are equivalent; the real difference is performance. Without IOV, hosts incur an extra level of indirection that adds overhead. With IOV, the same device drivers are used, and they run at wire speed. Of course, this puts the onus on the devices, which need additional I/O queues and processing power to handle multiple masters.
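The performance contrast can be sketched by counting hops per request: with IOV the device keeps an independent queue per master, while without it every non-owning host relays through the controlling host. The class names and hop counts below are illustrative assumptions for this sketch, not measurements of real hardware.

```python
# Sketch of why IOV runs at wire speed: each master gets its own queue
# on the device, so no request is relayed through a controlling host.
from collections import defaultdict

class IOVDevice:
    """IOV-capable device with one queue per master (host or VM)."""
    def __init__(self):
        self.queues = defaultdict(list)

    def submit(self, master: str, request: str) -> int:
        self.queues[master].append(request)
        return 1  # one hop: master -> device

class ProxiedDevice:
    """Non-IOV device owned by a single controlling host."""
    def __init__(self, owner: str):
        self.owner = owner
        self.queue = []

    def submit(self, master: str, request: str) -> int:
        self.queue.append((master, request))
        # Requests from other hosts are forwarded via the owner first.
        return 1 if master == self.owner else 2

iov, legacy = IOVDevice(), ProxiedDevice(owner="host0")
print(iov.submit("host3", "read"))     # direct: 1 hop
print(legacy.submit("host3", "read"))  # relayed: host3 -> host0 -> device
```

The extra hop in the proxied case is where the added overhead and software complexity live; the per-master queues in the IOV case are what the hardware must pay for.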
High-end, IOV-based Ethernet controllers will typically handle dozens to hundreds of hosts. SATA and SAS storage controllers will normally handle about half a dozen, since the per-host support overhead is greater and additional host-side processing may be involved. This would allow a logical file server in a system to have direct access to the hardware while providing file services to the other virtual hosts.
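On Linux, a device's virtual-function capacity is visible in sysfs: an SR-IOV-capable PCI device exposes an `sriov_totalvfs` file under `/sys/bus/pci/devices/<BDF>/`. The helper below is a minimal sketch that reads those files; it takes the sysfs root as a parameter so it can be exercised against a fake directory tree when no IOV hardware is present.

```python
# Report each SR-IOV-capable PCI device's maximum virtual-function count
# by reading the kernel's sriov_totalvfs sysfs attribute.
from pathlib import Path
from typing import Dict

def sriov_capacities(sysfs_root: str = "/sys/bus/pci/devices") -> Dict[str, int]:
    """Map each SR-IOV-capable PCI device address to its max VF count."""
    caps = {}
    root = Path(sysfs_root)
    if not root.is_dir():
        return caps
    for dev in sorted(root.iterdir()):
        total = dev / "sriov_totalvfs"
        if total.is_file():
            caps[dev.name] = int(total.read_text().strip())
    return caps
```

On a machine with an IOV-capable NIC this might report something like `{'0000:03:00.0': 64}`, matching the dozens-to-hundreds range cited above for Ethernet controllers.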