
SR-IOV Networking (Legacy Method)

Single Root I/O Virtualization (SR-IOV) assigns hardware resources directly to individual virtual machines, increasing bandwidth, reducing latency, and providing other performance improvements. This technology is particularly useful in environments that require high-performance networking between virtual machines. Hyperstack offers the following GPU virtual machines compatible with high-performance Ethernet SR-IOV technology: H100, H100 with NVLink, and A100 with NVLink.

note

SR-IOV is available for contracted users upon request. If you are interested in upgrading your networking to SR-IOV, please contact our technical support team at [email protected].

How to upgrade your virtual machine to SR-IOV networking

After our technical support team notifies you that your request has been approved, follow the steps below to upgrade your VM to SR-IOV networking.

  1. Log in to your VM using SSH.

  2. Run the following command to check if the SR-IOV network interface is available:

    sudo ip link show

    The new SR-IOV VF-LAG interface is the one with an MTU of 8950. Record its interface name and MAC address.
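    The lookup in this step can also be scripted. The sketch below parses a sample line of `ip -o link show` output to find the interface whose MTU is 8950; the sample line and the interface name `ens4` are illustrative, and on the VM you would pipe the real command output instead.

    ```shell
    # Illustrative sample of one line of `ip -o link show` output;
    # on the VM, replace this with the real command output.
    sample='2: ens4: <BROADCAST,MULTICAST,UP> mtu 8950 qdisc mq state UP qlen 1000'

    # Pick the first interface whose MTU is 8950 (the SR-IOV VF-LAG interface).
    SRIOV_IF=$(printf '%s\n' "$sample" | awk -F': ' '/mtu 8950/ {print $2; exit}')
    echo "SR-IOV interface: $SRIOV_IF"
    ```

    The interface's MAC address can then be read from `/sys/class/net/$SRIOV_IF/address`.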

  3. Update your /etc/netplan/50-cloud-init.yaml with:

    # This file is generated from information provided by the datasource.  Changes
    # to it will not persist across an instance reboot. To disable cloud-init's
    # network configuration capabilities, write a file
    # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
    # network: {config: disabled}
    network:
      version: 2
      ethernets:
        <original network if name>:
          dhcp4: true
          match:
            macaddress: <original network if mac>
          mtu: 1500
          set-name: <original network if name>
        <new sriov interface name>:
          dhcp4: true
          dhcp4-overrides:
            use-routes: false
            use-dns: false
          match:
            macaddress: <sriov interface mac>
          mtu: 8950
          set-name: <new sriov interface name>
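    Instead of editing the file by hand, the stanza can be generated from shell variables. This is only a sketch: the interface names and MAC addresses below are illustrative placeholders for the values recorded in step 2, and the file is written to the current directory so you can review it before copying it to /etc/netplan/50-cloud-init.yaml with sudo.

    ```shell
    # Illustrative values -- substitute the names and MACs recorded in step 2.
    ORIG_IF='ens3';  ORIG_MAC='00:00:00:00:00:01'
    SRIOV_IF='ens4'; SRIOV_MAC='00:00:00:00:00:02'

    # Write the netplan config locally first; review it, then copy it to
    # /etc/netplan/50-cloud-init.yaml before running `sudo netplan try`.
    cat > 50-cloud-init.yaml <<EOF
    network:
      version: 2
      ethernets:
        $ORIG_IF:
          dhcp4: true
          match:
            macaddress: $ORIG_MAC
          mtu: 1500
          set-name: $ORIG_IF
        $SRIOV_IF:
          dhcp4: true
          dhcp4-overrides:
            use-routes: false
            use-dns: false
          match:
            macaddress: $SRIOV_MAC
          mtu: 8950
          set-name: $SRIOV_IF
    EOF
    ```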
  4. Run the following command to apply the changes:

    sudo netplan try --timeout 30

    If the changes are successful and you can still access the VM, confirm the changes. If the changes are unsuccessful, the system will revert to the previous configuration after 30 seconds.

  5. Run the following command to check if the SR-IOV network interface is up:

    $ sudo ethtool <new sriov interface name>
    Settings for ens4:
    Supported ports: [ FIBRE ]
    Supported link modes: 1000baseT/Full
    10000baseT/Full
    1000baseKX/Full
    10000baseKR/Full
    10000baseR_FEC
    40000baseKR4/Full
    40000baseCR4/Full
    40000baseSR4/Full
    40000baseLR4/Full
    25000baseCR/Full
    25000baseKR/Full
    25000baseSR/Full
    50000baseCR2/Full
    50000baseKR2/Full
    100000baseKR4/Full
    100000baseSR4/Full
    100000baseCR4/Full
    100000baseLR4_ER4/Full
    50000baseSR2/Full
    1000baseX/Full
    10000baseCR/Full
    10000baseSR/Full
    10000baseLR/Full
    10000baseER/Full
    50000baseKR/Full
    50000baseSR/Full
    50000baseCR/Full
    50000baseLR_ER_FR/Full
    50000baseDR/Full
    100000baseKR2/Full
    100000baseSR2/Full
    100000baseCR2/Full
    100000baseLR2_ER2_FR2/Full
    100000baseDR2/Full
    200000baseKR4/Full
    200000baseSR4/Full
    200000baseLR4_ER4_FR4/Full
    200000baseDR4/Full
    200000baseCR4/Full
    100000baseKR/Full
    100000baseSR/Full
    100000baseLR_ER_FR/Full
    100000baseCR/Full
    100000baseDR/Full
    200000baseKR2/Full
    200000baseSR2/Full
    200000baseLR2_ER2_FR2/Full
    200000baseDR2/Full
    200000baseCR2/Full
    Supported pause frame use: Symmetric
    Supports auto-negotiation: Yes
    Supported FEC modes: Not reported
    Advertised link modes: 1000baseT/Full
    10000baseT/Full
    1000baseKX/Full
    10000baseKR/Full
    10000baseR_FEC
    40000baseKR4/Full
    40000baseCR4/Full
    40000baseSR4/Full
    40000baseLR4/Full
    25000baseCR/Full
    25000baseKR/Full
    25000baseSR/Full
    50000baseCR2/Full
    50000baseKR2/Full
    100000baseKR4/Full
    100000baseSR4/Full
    100000baseCR4/Full
    100000baseLR4_ER4/Full
    50000baseSR2/Full
    1000baseX/Full
    10000baseCR/Full
    10000baseSR/Full
    10000baseLR/Full
    10000baseER/Full
    50000baseKR/Full
    50000baseSR/Full
    50000baseCR/Full
    50000baseLR_ER_FR/Full
    50000baseDR/Full
    100000baseKR2/Full
    100000baseSR2/Full
    100000baseCR2/Full
    100000baseLR2_ER2_FR2/Full
    100000baseDR2/Full
    200000baseKR4/Full
    200000baseSR4/Full
    200000baseLR4_ER4_FR4/Full
    200000baseDR4/Full
    200000baseCR4/Full
    100000baseKR/Full
    100000baseSR/Full
    100000baseLR_ER_FR/Full
    100000baseCR/Full
    100000baseDR/Full
    200000baseKR2/Full
    200000baseSR2/Full
    200000baseLR2_ER2_FR2/Full
    200000baseDR2/Full
    200000baseCR2/Full
    Advertised pause frame use: No
    Advertised auto-negotiation: Yes
    Advertised FEC modes: Not reported
    Speed: 200000Mb/s
    Duplex: Full
    Auto-negotiation: on
    Port: FIBRE
    PHYAD: 0
    Transceiver: internal
    Supports Wake-on: d
    Wake-on: d
    Link detected: yes

    While the interface speed is reported as 200000Mb/s (200 Gbps), the actual throughput can reach up to 2x200 Gbps due to our hardware SR-IOV bonding.
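    One quick way to confirm that jumbo frames actually traverse the SR-IOV path is a do-not-fragment ping with the largest payload the 8950-byte MTU allows. The arithmetic below is a sketch; `<peer-ip>` stands for another VM on the same high-performance network and is not defined by this guide.

    ```shell
    # Largest ICMP payload that fits in an 8950-byte MTU:
    # 8950 - 20 (IPv4 header) - 8 (ICMP header) = 8922 bytes.
    MTU=8950
    PAYLOAD=$((MTU - 28))
    echo "jumbo-frame ping payload: $PAYLOAD"

    # On the VM (with <peer-ip> replaced by a reachable peer):
    #   ping -M do -s $PAYLOAD -c 3 <peer-ip>
    # "-M do" forbids fragmentation, so the ping only succeeds if the
    # full jumbo frame makes it through end to end.
    ```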

  6. Check if the mlx5 driver is loaded:

    $ sudo lsmod | grep mlx5
    mlx5_vdpa 73728 0
    vringh 45056 1 mlx5_vdpa
    vhost_iotlb 16384 2 vringh,mlx5_vdpa
    vdpa 32768 1 mlx5_vdpa
    mlx5_ib 471040 0
    ib_uverbs 188416 1 mlx5_ib
    ib_core 516096 2 ib_uverbs,mlx5_ib
    mlx5_core 2359296 2 mlx5_vdpa,mlx5_ib
    mlxfw 36864 1 mlx5_core
    psample 20480 1 mlx5_core
    tls 151552 1 mlx5_core
    pci_hyperv_intf 12288 1 mlx5_core
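
    If you want to script this check, the grep below exits non-zero when the core driver is absent. The sample lsmod line is illustrative; on the VM you would pipe the output of `lsmod` itself.

    ```shell
    # Sample lsmod line standing in for real output; on the VM use:
    #   lsmod | grep -q '^mlx5_core'
    sample='mlx5_core 2359296 2 mlx5_vdpa,mlx5_ib'

    if printf '%s\n' "$sample" | grep -q '^mlx5_core'; then
        echo "mlx5 driver loaded"
    else
        echo "mlx5 driver NOT loaded" >&2
    fi
    ```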