This topic describes the steps for a NetAdapterCx client driver to initialize and start WDFDEVICE and NETADAPTER objects. For more info about these objects and their relationship, see Summary of NetAdapterCx objects.

EVT_WDF_DRIVER_DEVICE_ADD

A NetAdapterCx client driver registers its EVT_WDF_DRIVER_DEVICE_ADD callback function when it calls WdfDriverCreate from its DriverEntry routine.
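For example, a minimal DriverEntry might look like the following sketch, where EvtDriverDeviceAdd is a placeholder name for the driver's callback:

    NTSTATUS
    DriverEntry(
        _In_ PDRIVER_OBJECT  DriverObject,
        _In_ PUNICODE_STRING RegistryPath
    )
    {
        WDF_DRIVER_CONFIG config;

        // Register the EVT_WDF_DRIVER_DEVICE_ADD callback with the framework.
        WDF_DRIVER_CONFIG_INIT(&config, EvtDriverDeviceAdd);

        return WdfDriverCreate(DriverObject,
                               RegistryPath,
                               WDF_NO_OBJECT_ATTRIBUTES,
                               &config,
                               WDF_NO_HANDLE);
    }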

In EVT_WDF_DRIVER_DEVICE_ADD, a NetAdapterCx client driver should do the following in order:

  1. Call NetDeviceInitConfig.

  2. Call WdfDeviceCreate.

    Tip

    If your device supports more than one NETADAPTER, we recommend storing pointers to each adapter in your device context.

  3. Create the NETADAPTER object. To do so, the client calls NetAdapterInitAllocate, followed by optional NetAdapterInitSetXxx methods to initialize the adapter's attributes. Finally, the client calls NetAdapterCreate.

    The following example shows how a client driver might initialize a NETADAPTER object. Note that error handling is simplified in this example.
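    This is a minimal sketch, not a complete driver. It assumes hypothetical MY_DEVICE_CONTEXT and MY_ADAPTER_CONTEXT context types declared elsewhere with WDF_DECLARE_CONTEXT_TYPE_WITH_NAME, and hypothetical EvtAdapterCreateTxQueue and EvtAdapterCreateRxQueue queue-creation callbacks (of type EVT_NET_ADAPTER_CREATE_TXQUEUE and EVT_NET_ADAPTER_CREATE_RXQUEUE):

      NTSTATUS
      EvtDriverDeviceAdd(
          _In_    WDFDRIVER       Driver,
          _Inout_ PWDFDEVICE_INIT DeviceInit
      )
      {
          UNREFERENCED_PARAMETER(Driver);

          // 1. Let NetAdapterCx configure the device before it is created.
          NTSTATUS status = NetDeviceInitConfig(DeviceInit);
          if (!NT_SUCCESS(status)) {
              return status;
          }

          // EVT_WDF_DEVICE_PREPARE_HARDWARE registration
          // (WdfDeviceInitSetPnpPowerEventCallbacks) would also go here; see below.

          // 2. Create the WDFDEVICE with a device context.
          WDF_OBJECT_ATTRIBUTES deviceAttributes;
          WDF_OBJECT_ATTRIBUTES_INIT_CONTEXT_TYPE(&deviceAttributes, MY_DEVICE_CONTEXT);

          WDFDEVICE device;
          status = WdfDeviceCreate(&DeviceInit, &deviceAttributes, &device);
          if (!NT_SUCCESS(status)) {
              return status;
          }

          // 3. Allocate the NETADAPTER_INIT object for this device.
          NETADAPTER_INIT* adapterInit = NetAdapterInitAllocate(device);
          if (adapterInit == NULL) {
              return STATUS_INSUFFICIENT_RESOURCES;
          }

          // Optional NetAdapterInitSetXxx calls, for example the data path callbacks.
          NET_ADAPTER_DATAPATH_CALLBACKS datapathCallbacks;
          NET_ADAPTER_DATAPATH_CALLBACKS_INIT(&datapathCallbacks,
                                              EvtAdapterCreateTxQueue,
                                              EvtAdapterCreateRxQueue);
          NetAdapterInitSetDatapathCallbacks(adapterInit, &datapathCallbacks);

          // Create the NETADAPTER with its own context.
          WDF_OBJECT_ATTRIBUTES adapterAttributes;
          WDF_OBJECT_ATTRIBUTES_INIT_CONTEXT_TYPE(&adapterAttributes, MY_ADAPTER_CONTEXT);

          NETADAPTER adapter;
          status = NetAdapterCreate(adapterInit, &adapterAttributes, &adapter);

          // The NETADAPTER_INIT object is no longer needed after NetAdapterCreate.
          NetAdapterInitFree(adapterInit);
          if (!NT_SUCCESS(status)) {
              return status;
          }

          // Store the adapter handle in the device context for later use.
          // GetMyDeviceContext is the accessor generated by
          // WDF_DECLARE_CONTEXT_TYPE_WITH_NAME.
          GetMyDeviceContext(device)->Adapter = adapter;

          return STATUS_SUCCESS;
      }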

Optionally, you can add context space to the NETADAPTER object. Since you can set a context on any WDF object, you could add separate context space for the WDFDEVICE and the NETADAPTER objects. In the example in step 3, the client adds MY_ADAPTER_CONTEXT to the NETADAPTER object. For more info, see Framework Object Context Space.

We recommend that you put device-related data in the context for your WDFDEVICE, and networking-related data such as link layer addresses into your NETADAPTER context. If you are porting an existing NDIS 6.x driver, you'll likely have a single MiniportAdapterContext that combines networking-related and device-related data into a single data structure. To simplify the porting process, just convert that entire structure to the WDFDEVICE context, and make the NETADAPTER's context a small structure that points to the WDFDEVICE's context.
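As an illustration of that layout, a ported driver's context declarations might look like the following sketch; all type and accessor names here are placeholders:

    // Former NDIS 6.x MiniportAdapterContext, carried over as the WDFDEVICE context.
    typedef struct _MY_DEVICE_CONTEXT {
        NETADAPTER Adapter;   // handle to the NETADAPTER created in EvtDriverDeviceAdd
        // ... hardware and networking state carried over from the NDIS driver ...
    } MY_DEVICE_CONTEXT;
    WDF_DECLARE_CONTEXT_TYPE_WITH_NAME(MY_DEVICE_CONTEXT, GetMyDeviceContext);

    // Small NETADAPTER context that simply points back to the device context.
    typedef struct _MY_ADAPTER_CONTEXT {
        MY_DEVICE_CONTEXT* DeviceContext;
    } MY_ADAPTER_CONTEXT;
    WDF_DECLARE_CONTEXT_TYPE_WITH_NAME(MY_ADAPTER_CONTEXT, GetMyAdapterContext);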

You can optionally provide two callbacks to the NET_ADAPTER_DATAPATH_CALLBACKS_INIT method:

  • EVT_NET_ADAPTER_CREATE_TXQUEUE

  • EVT_NET_ADAPTER_CREATE_RXQUEUE

For details on what to provide in your implementations of these callbacks, see the individual reference pages.

EVT_WDF_DEVICE_PREPARE_HARDWARE

Many NetAdapterCx client drivers start their adapters from within their EVT_WDF_DEVICE_PREPARE_HARDWARE callback function, with the notable exception of Mobile Broadband class extension client drivers. To register an EVT_WDF_DEVICE_PREPARE_HARDWARE callback function, a NetAdapterCx client driver must call WdfDeviceInitSetPnpPowerEventCallbacks.
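Because WdfDeviceInitSetPnpPowerEventCallbacks operates on the WDFDEVICE_INIT structure, this registration happens in EVT_WDF_DRIVER_DEVICE_ADD before WdfDeviceCreate. A minimal sketch, with EvtDevicePrepareHardware as a placeholder name:

    WDF_PNPPOWER_EVENT_CALLBACKS pnpPowerCallbacks;
    WDF_PNPPOWER_EVENT_CALLBACKS_INIT(&pnpPowerCallbacks);

    // Register the prepare-hardware callback; EvtDeviceReleaseHardware and other
    // PnP/power callbacks can be registered here as well.
    pnpPowerCallbacks.EvtDevicePrepareHardware = EvtDevicePrepareHardware;

    // DeviceInit is the PWDFDEVICE_INIT passed to EVT_WDF_DRIVER_DEVICE_ADD;
    // this call must come before WdfDeviceCreate consumes it.
    WdfDeviceInitSetPnpPowerEventCallbacks(DeviceInit, &pnpPowerCallbacks);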

Within EVT_WDF_DEVICE_PREPARE_HARDWARE, in addition to other hardware preparation tasks, the client driver sets the adapter's required and optional capabilities.

NetAdapterCx requires the client driver to set the following capabilities:

  • Data path capabilities. The driver calls NetAdapterSetDataPathCapabilities to set these capabilities. For more information, see Network data buffer management.

  • Link layer capabilities. The driver calls NetAdapterSetLinkLayerCapabilities to set these capabilities.

  • Link layer maximum transfer unit (MTU) size. The driver calls NetAdapterSetLinkLayerMtuSize to set the MTU size.

The driver must then call NetAdapterStart to start its adapter.

The following example shows how a client driver might start a NETADAPTER object. Note that the code for populating each capability structure is omitted for brevity and clarity, and error handling is simplified.
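This is a minimal sketch that assumes the MY_DEVICE_CONTEXT and EvtDevicePrepareHardware placeholder names used earlier; the MTU value and capability setup shown are illustrative only:

    NTSTATUS
    EvtDevicePrepareHardware(
        _In_ WDFDEVICE    Device,
        _In_ WDFCMRESLIST ResourcesRaw,
        _In_ WDFCMRESLIST ResourcesTranslated
    )
    {
        UNREFERENCED_PARAMETER(ResourcesRaw);
        UNREFERENCED_PARAMETER(ResourcesTranslated);

        NETADAPTER adapter = GetMyDeviceContext(Device)->Adapter;

        // Map registers, read the permanent MAC address, etc. (omitted).

        // Link layer capabilities (structure setup omitted).
        NET_ADAPTER_LINK_LAYER_CAPABILITIES linkLayerCapabilities;
        // ... initialize linkLayerCapabilities ...
        NetAdapterSetLinkLayerCapabilities(adapter, &linkLayerCapabilities);

        // Link layer MTU size (placeholder value).
        NetAdapterSetLinkLayerMtuSize(adapter, 1500);

        // Data path (TX/RX) capabilities (structure setup omitted).
        NET_ADAPTER_TX_CAPABILITIES txCapabilities;
        NET_ADAPTER_RX_CAPABILITIES rxCapabilities;
        // ... initialize txCapabilities and rxCapabilities ...
        NetAdapterSetDataPathCapabilities(adapter, &txCapabilities, &rxCapabilities);

        // Start the adapter. After this succeeds, NetAdapterCx can begin
        // creating packet queues through the data path callbacks.
        return NetAdapterStart(adapter);
    }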

Enhanced Data Path

Enhanced data path is a networking stack mode that, when configured, provides superior network performance. It is primarily targeted at NFV workloads, which require the performance benefits this mode provides.

The N-VDS switch can be configured in the enhanced data path mode only on an ESXi host.

In the enhanced data path mode, you can configure:

  • Overlay traffic

  • VLAN traffic

High-level process to configure Enhanced Data Path

As a network administrator, before creating transport zones supporting N-VDS in enhanced data path mode, you must prepare the network with the supported NIC cards and drivers. To improve network performance, you can make the Load Balanced Source teaming policy NUMA node aware.

The high-level steps are as follows:

  1. Use NIC cards that support enhanced data path.

    See the VMware Compatibility Guide to find NIC cards that support enhanced data path.

    On the VMware Compatibility Guide page, under the IO Devices category, select ESXi 6.7 as the product release, Network as the I/O device type, and N-VDS Enhanced Datapath as the feature.

  2. Download and install the NIC drivers from the My VMware page.

  3. Create an uplink policy.

    See Create an Uplink Profile.

  4. Create a transport zone with N-VDS in the enhanced data path mode.

    See Create Transport Zones.

  5. Create a host transport node. Configure the enhanced data path N-VDS with logical cores and NUMA nodes.

    See Create a Host Transport Node.

Load Balanced Source Teaming Policy Mode Aware of NUMA

The Load Balanced Source teaming policy mode defined for an enhanced datapath N-VDS becomes aware of NUMA when the following conditions are met:

  • The Latency Sensitivity on VMs is High.

  • The network adapter type used is VMXNET3.

If the NUMA node location of either the VM or the physical NIC is not available, then the Load Balanced Source teaming policy does not consider NUMA awareness to align VMs and NICs.

The teaming policy functions without NUMA awareness in the following conditions:

  • The LAG uplink is configured with physical links from multiple NUMA nodes.

  • The VM has affinity to multiple NUMA nodes.

  • The ESXi host failed to define NUMA information for either the VM or the physical links.