> One of the significant benefits of vDPA is its strong abstraction, enabling the implementation of virtio devices in both hardware and software—whether in the kernel or user space. This unification under a single framework, where devices appear identical to QEMU, facilitates the seamless integration of hardware and software components.
I honestly don’t know what this means. Is it faster? Is it more secure? Why would I use this vDPA thing instead of the good ol’ virtio-blk driver? Looking at the examples it certainly looks more cumbersome to set up…
I don't get it either, and I'm a maintainer of SPDK which provides multiple implementations of virtualized devices and is frequently used inside DPUs to present storage devices.
If I'm implementing a hardware device anyway, why would I not just use NVMe as the interface? NVMe is superior to virtio-blk in every way that I can think of.
Even for a software device in userspace, why not use a technology like vfio-user to present an NVMe device, or just use vhost-user to present the virtio-blk device?
I've never really been able to get a clear value proposition for vDPA for storage laid out for me. Maybe I'm missing something critical - it's certainly possible.
Software block devices let you implement all sorts of disk-like things in userspace. nbdkit is a more flexible version of this (also compatible with qemu), so to get a flavour of the types of things you can do just peruse the list of nbdkit plugins: https://libguestfs.org/nbdkit.1.html#Plugins
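To get a feel for what a userspace block device looks like in practice, here's a minimal nbdkit sketch using its memory plugin (default NBD port assumed; the qemu options elided with "..." are whatever your guest normally needs):

    # serve a 1G RAM-backed disk over NBD (default port 10809)
    nbdkit memory 1G

    # inspect it from the host, or hand it to a guest as a virtio-blk drive
    qemu-img info nbd://localhost
    qemu-system-x86_64 ... -drive file=nbd://localhost,format=raw,if=virtio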
Ceph Block Device 3rd Party Integrations: https://docs.ceph.com/en/latest/rbd/rbd-integrations/ :
Kernel Modules, QEMU, libvirt, Kubernetes, Nomad, OpenStack, CloudStack, LIO iSCSI Gateway, Windows. QEMU and block devices: https://docs.ceph.com/en/latest/rbd/qemu-rbd/#qemu-and-block...
`cephadm bootstrap` requires docker or podman and ssh.
Ceph Object Gateway: radosgw: https://docs.ceph.com/en/latest/radosgw/ :
> The Ceph Object Gateway provides interfaces that are compatible with both Amazon S3 and OpenStack Swift, and it has its own user management. Ceph Object Gateway can use a single Ceph Storage cluster to store data from Ceph File System and from Ceph Block device clients. The S3 API and the Swift API share a common namespace, which means that it is possible to write data to a Ceph Storage Cluster with one API and then retrieve that data with the other API.
virtio-blk is probably faster, but then you'd do HA redundancy with physically separate nodes and network I/O anyway; or LocalPersistentVolumes.
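For the qemu-rbd path linked above, the rough shape (pool and image names here are just placeholders) is that QEMU's built-in RBD driver talks to the cluster directly, with no kernel block device in between:

    # create an image in a Ceph pool, then attach it to a guest via QEMU's rbd driver
    qemu-img create -f raw rbd:rbd/myimage 10G
    qemu-system-x86_64 ... -drive format=raw,file=rbd:rbd/myimage,if=virtio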
RH has a blog series which motivates and introduces vDPA.
For a start, read their post on virtio-networking and vhost-net, continue with virtio-networking and DPDK, and finally read "Achieving network wirespeed in an open standard manner: introducing vDPA".
In particular the last link has a lot of diagrams, recaps a lot of the virtio-networking topics and finally has a table which compares the other solutions to vDPA.
That seems to talk about networking only, while the blog post talks about storage. For networking I get it, there's no vendor-independent data path standard for easy virtualization there. But for storage, NVMe is supposed to be that.
Good timing considering the esxi news https://kb.vmware.com/s/article/2107518?lang=en_US
Wow, I knew things were getting really bad with VMWare, but I didn’t know they had got that bad… I liked ESXi, the free hypervisor was a good stepping stone to get people into the ecosystem and probably sold a lot of vSphere and vCenter licences down the line.
Very glad I decided to use Proxmox (as the Broadcom acquisition had been announced) for the small cluster we set up at our business a year ago, it’s been super stable, pretty easy to use but can do way more than I need (we’re not doing much that is that complicated yet)
> Along with the termination of perpetual licensing, Broadcom has also decided to discontinue the Free ESXi Hypervisor, marking it as EOGA (End of General Availability).
So it’s subscription or bust? Owch.
I really wonder how much longer the VMUG licensing is going to be around...
I think they will give up mindshare.
I use proxmox, which is hard to use but open. It is powerful, but you sort of have to be a sysadmin to climb the learning curve with it.
But I have lots of friends who are busy at home, but use vmware there. To them it is the same sort of "free" as proxmox.
They create an abstract model of vmware in their heads.
And when they're at work, they think of virtualization kinds of problems in terms of vmware. So at work, that's the tool they frequently reach for.
No more free at all? I learned virtualization w/ vmware gsx long before I ever had a company to use it at. Oh well, vmware's irrelevant to me nowadays.
Is vDPA a standardization of the storage interfaces used by AWS Nitro?
> The Nitro System is comprised of three main parts: the Nitro Cards, the Nitro Security Chip, and the Nitro Hypervisor. The Nitro Cards are a family of cards that offloads and accelerates IO for functions ... The Nitro architecture also enabled us to make the hypervisor layer optional and offer bare metal instances. Bare metal instances provide applications with direct access to the processor and memory resources of the underlying server.
ah, RH and docker and their race to poke out of emulation/virtualization for fringe performance gains.
we will soon get to a point where qemu will just be a job manager for software that plugs directly into local syscalls, like, you know, regular software running on your system.
Another top-level comment has covered this, at least with regard to networking, but hasn't discussed the storage side as much:
Yes, I've seen some clearer cases made for networking.
In networking there is no standard for the hardware interface. Every vendor does their own thing. Except many can at least handle virtqueues carrying virtio-net messages for the data path, so some framework like vDPA may make sense (I'd prefer to see a full NIC interface standard emerge instead).
In storage, however, the industry has agreed on NVMe. This is a full standard for control and data plane. All storage products on the market, including DPUs and SmartNICs, just present NVMe devices. So there's no case to be made for vDPA at all. It just isn't necessary.
S3 or radosgw are not very relevant to a discussion about block storage devices.
Ceph's RBD is "special" in the sense that the client understands the clustering, and talks to multiple servers. If you wanted that in the mix, you'd have to run a local Ceph client -- like the Ceph software stack exposing block devices from kernel does. The only way I can see vDPA being relevant to that is to avoid middlemen layers, VM -> host kernel block device abstraction -> Ceph kernelspace RBD client. But the block device abstraction is pretty thin.
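For comparison, the kernelspace RBD client path mentioned above looks roughly like this (pool and image names are placeholders); the kernel client maps the image as an ordinary local block device, and everything above it just sees /dev/rbd0:

    # map a Ceph RBD image through the kernel client
    rbd create mypool/myimage --size 10240
    rbd map mypool/myimage          # appears as /dev/rbd0
    mkfs.ext4 /dev/rbd0             # from here on it's just a local block device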
The real use case for vDPA, when talking about storage, seems to be standardizing an interface the hardware can provide. And then we're back to "why not NVMe?".
I'm really worried about who is going to handle network and RAID going forward.
LSI was the standard for hardware RAID; 3Com and others were the big names in networking. Now they're all Broadcom, which basically means:
* do as little as possible
* axe most support
* screw the little guy
* milk milk milk
* no real heavy R&D
This means, to me, that lots of hardware and software is being handled far less diligently than Boeing.
And that scares me.
Hardware RAID is much less of a necessity nowadays than it was before. There are multiple options for highly performant/redundant storage, be it localised to a single box or across multiple boxes (such as ZFS or Ceph).
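As a trivial sketch of the single-box case (device names assumed, adjust for your disks), ZFS gives you the mirroring the RAID HBA used to provide, entirely in software:

    # two-disk mirror, no RAID controller involved
    zpool create tank mirror /dev/nvme0n1 /dev/nvme1n1
    zpool status tank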
Yes, I see your point and agree that NVMe can be used for the same purpose.
But several HW vendors have implemented virtio-net devices in their SmartNICs and may find it convenient to support virtio-blk too, reusing most of the building blocks.
As for vhost-user, it's perfect for VM use cases, but it's not easy to use with containers or applications on the host. A vDPA device (HW or SW), on the other hand, can easily be attached to the host kernel (using the virtio-vdpa bus) and be managed with the standard virtio-blk driver.
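To make that concrete, here's roughly what attaching a software vDPA block device to the host kernel looks like with the iproute2 vdpa tool, using the in-kernel simulator; module and mgmtdev names may vary with kernel version:

    modprobe vdpa_sim_blk        # in-kernel vDPA block device simulator
    modprobe virtio_vdpa         # bus driver that exposes vDPA devices via virtio
    vdpa mgmtdev show            # should list vdpasim_blk
    vdpa dev add name vdpa0 mgmtdev vdpasim_blk
    lsblk                        # a new virtio-blk disk (/dev/vd*) appears on the host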
Yes exactly, between ZFS and Ceph I'm not too worried - and at the bleeding edge, NVMe is just changing the game completely, because the drives are becoming so fast that they need their own dedicated PCIe lanes; you don't want to bottleneck them through an HBA in any case, and the HBA just adds access latency.