We all feel it. The IT world is developing at a rapid pace, and the traditional application sphere is being reinvented. Web 3.0 is approaching from many angles. Microservices, distributed systems, scaling, automation and the multi-cloud universe all rely heavily on networking.
Better, faster and stronger services emerge and make our lives easier. New concepts are required and are appearing on the horizon.
So let me present the next iteration of datacenter infrastructure:
SoC (System on Chip) based smart network interface cards (NICs) = DPU
Let’s dive deeper into the topic of the next generation of network cards.
First, let's start with conventional NICs. Usually they are either onboard or installed as a dedicated card in diverse devices, like in the picture below. They do networking well and fast. Their purpose is to connect servers or desktops to the network via different kinds of interfaces. It is all about reliable, fast connection speeds and low latency. They enable wired communication between servers, desktops and other devices via the transmission of IP packets.
What is the difference if you compare Smart NICs to conventional NICs?
You can see the difference in the next picture. Basically, a DPU is a computer on a NIC and offers new capabilities. Now the NIC is smart. Furthermore, it has a dedicated management port, like the iDRAC or iLO port you know from server management interfaces.
In addition, the next generation already offers a stunning 400Gbit Ethernet connection per port. Soon the PCIe lanes will be the limiting factor of the network; who would have thought?
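A quick back-of-the-envelope calculation shows why the PCIe bus becomes the bottleneck. This is a sketch using the nominal per-lane rates and 128b/130b line encoding of PCIe 4.0/5.0, ignoring protocol overhead, so real figures are somewhat lower:

```python
# Rough check: can a PCIe x16 slot feed a single 400GbE port?
# Assumptions: nominal transfer rates (16 GT/s for Gen4, 32 GT/s for Gen5),
# 128b/130b line encoding, no TLP/DLLP protocol overhead.

def pcie_bandwidth_gbps(gt_per_s: float, lanes: int) -> float:
    """Raw PCIe throughput in Gbit/s after 128b/130b encoding."""
    return gt_per_s * lanes * (128 / 130)

gen4_x16 = pcie_bandwidth_gbps(16, 16)  # PCIe 4.0 x16
gen5_x16 = pcie_bandwidth_gbps(32, 16)  # PCIe 5.0 x16

print(f"PCIe 4.0 x16: ~{gen4_x16:.0f} Gbit/s")  # ~252 Gbit/s, below 400GbE
print(f"PCIe 5.0 x16: ~{gen5_x16:.0f} Gbit/s")  # ~504 Gbit/s, barely enough
```

So a Gen4 x16 slot tops out around 252 Gbit/s and cannot even saturate one 400GbE port; a Gen5 x16 slot can feed exactly one.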
DPUs enable vast new capabilities
Conventional network cards enable communication for enterprise (multi-tenant) workloads.
In contrast, DPUs offer new functions like a programmable data plane for all kinds of fancy new workloads (Big Data, ML and AI). They can also participate in diverse software-defined networking concepts and ensure granular monitoring and reporting. They are real function monsters that execute network address translation (NAT), load balancing (LB) or even stateless firewall rules themselves.
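To make the NAT example concrete, here is a toy model of source NAT, the kind of per-packet rewrite a DPU data plane performs in hardware. All IP addresses and port numbers are illustrative; a real DPU keeps this mapping in fast on-card tables rather than a Python dict:

```python
# Toy source-NAT sketch: map (private IP, private port) flows onto one
# shared public IP, the way a DPU rewrites packets in its data plane.

from typing import Dict, Tuple

nat_table: Dict[Tuple[str, int], int] = {}  # (private_ip, port) -> public port
next_public_port = 40000
PUBLIC_IP = "203.0.113.1"  # example address (RFC 5737 documentation range)

def snat(src_ip: str, src_port: int) -> Tuple[str, int]:
    """Rewrite a private source address/port to the shared public one."""
    global next_public_port
    key = (src_ip, src_port)
    if key not in nat_table:           # new flow: allocate a public port
        nat_table[key] = next_public_port
        next_public_port += 1
    return PUBLIC_IP, nat_table[key]   # existing flow keeps its mapping

print(snat("10.0.0.5", 51000))  # ('203.0.113.1', 40000)
print(snat("10.0.0.6", 51000))  # ('203.0.113.1', 40001)
print(snat("10.0.0.5", 51000))  # same flow -> same mapping: 40000
```

The point of the offload is that this lookup-and-rewrite happens per packet, at line rate, without ever touching the host CPU.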
They can also take over various functions that were previously executed by the central processing unit (CPU). Some examples are erasure coding, deduplication or even encryption of data at rest and in flight. GPU over Ethernet? Easy!
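To illustrate the idea behind erasure coding, here is a minimal sketch using a single XOR parity block, which lets you reconstruct any one lost data block. Real DPUs implement far more capable schemes (such as Reed-Solomon) in silicon; this is purely conceptual:

```python
# Minimal erasure-coding sketch: one XOR parity block protects against the
# loss of any single data block. Conceptual only -- DPUs do this in hardware.

from functools import reduce

def parity(blocks: list) -> bytes:
    """XOR all equally-sized blocks together into one parity block."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

data = [b"ABCD", b"EFGH", b"IJKL"]
p = parity(data)  # parity block stored alongside the data

# Pretend block 1 (b"EFGH") is lost: XOR the survivors with the parity
# block and the missing data falls out.
recovered = parity([data[0], data[2], p])
print(recovered)  # b'EFGH'
```

Offloading this XOR (or Reed-Solomon) math to the DPU means the host CPU never spends cycles on redundancy calculations for storage traffic.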
These new offloading mechanisms reduce the load on the CPU and allow for much denser workloads.
Which vendors are already offering these new high-performance cards?
NVIDIA is known for its graphics accelerator cards, but it is also one of the leading producers of DPUs with its BlueField series.
Other vendors like Intel are also developing and producing DPUs.
At the major cloud providers like AWS and Azure, DPUs are already in use. They even build their own DPU hardware ecosystems (AWS Nitro).
What about VMware?
I saw it first hand at VMworld 2020; it is called Project Monterey. The project includes several initiatives, like enabling their leading hypervisor (ESXi) to be installed on these network cards. This offers new possibilities, like managing a bare-metal server via its network card in vCenter Server, or air-gapping and isolating critical applications. This is huge!
Do you remember ESXi on ARM, like on a Raspberry Pi? This is the real cutting-edge use case for non-x86 ESXi.
Are you registered for VMworld 2021? Log in with your VMworld account and enjoy 56 minutes packed with Project Monterey. Here is the link.
Overview of the use cases
In short, these new cards currently see wider adoption mainly among telco and cloud service providers. I haven't seen them at any of our customers yet. So I hope to get hands-on soon and experiment with this powerful technology myself. Maybe I can upgrade my homelab with an affordable device in the near future and show you a demo myself.
Thanks for reading, stay healthy and safe!