Data Plane Development Kit
  • Getting Started Guide for Linux
    • 1. Introduction
      • 1.1. Documentation Roadmap
    • 2. System Requirements
      • 2.1. BIOS Setting Prerequisite on x86
      • 2.2. Compilation of the DPDK
      • 2.3. Running DPDK Applications
        • 2.3.1. System Software
        • 2.3.2. Use of Hugepages in the Linux Environment
        • 2.3.3. Xen Domain0 Support in the Linux Environment
    • 3. Compiling the DPDK Target from Source
      • 3.1. Install the DPDK and Browse Sources
      • 3.2. Installation of DPDK Target Environments
      • 3.3. Browsing the Installed DPDK Environment Target
      • 3.4. Loading Modules to Enable Userspace IO for DPDK
      • 3.5. Loading VFIO Module
      • 3.6. Binding and Unbinding Network Ports to/from the Kernel Modules
    • 4. Compiling and Running Sample Applications
      • 4.1. Compiling a Sample Application
      • 4.2. Running a Sample Application
        • 4.2.1. Logical Core Use by Applications
        • 4.2.2. Hugepage Memory Use by Applications
      • 4.3. Additional Sample Applications
      • 4.4. Additional Test Applications
    • 5. Enabling Additional Functionality
      • 5.1. High Precision Event Timer (HPET) Functionality
        • 5.1.1. BIOS Support
        • 5.1.2. Linux Kernel Support
        • 5.1.3. Enabling HPET in the DPDK
      • 5.2. Running DPDK Applications Without Root Privileges
      • 5.3. Power Management and Power Saving Functionality
      • 5.4. Using Linux Core Isolation to Reduce Context Switches
      • 5.5. Loading the DPDK KNI Kernel Module
      • 5.6. Using Linux IOMMU Pass-Through to Run DPDK with Intel® VT-d
      • 5.7. High Performance of Small Packets on 40G NIC
        • 5.7.1. Use 16 Bytes RX Descriptor Size
        • 5.7.2. High Performance and per Packet Latency Tradeoff
    • 6. Quick Start Setup Script
      • 6.1. Script Organization
      • 6.2. Use Cases
      • 6.3. Applications
    • 7. How to get best performance with NICs on Intel platforms
      • 7.1. Hardware and Memory Requirements
        • 7.1.1. Network Interface Card Requirements
        • 7.1.2. BIOS Settings
        • 7.1.3. Linux boot command line
      • 7.2. Configurations before running DPDK
      • 7.3. Example of getting best performance for an Intel NIC
  • Getting Started Guide for FreeBSD
    • 1. Introduction
      • 1.1. Documentation Roadmap
    • 2. Installing DPDK from the Ports Collection
      • 2.1. Installing the DPDK FreeBSD Port
      • 2.2. Compiling and Running the Example Applications
    • 3. Compiling the DPDK Target from Source
      • 3.1. System Requirements
      • 3.2. Install the DPDK and Browse Sources
      • 3.3. Installation of the DPDK Target Environments
      • 3.4. Browsing the Installed DPDK Environment Target
      • 3.5. Loading the DPDK contigmem Module
      • 3.6. Loading the DPDK nic_uio Module
        • 3.6.1. Binding Network Ports to the nic_uio Module
        • 3.6.2. Binding Network Ports Back to their Original Kernel Driver
    • 4. Compiling and Running Sample Applications
      • 4.1. Compiling a Sample Application
      • 4.2. Running a Sample Application
      • 4.3. Running DPDK Applications Without Root Privileges
  • Xen Guide
    • 1. DPDK Xen Based Packet-Switching Solution
      • 1.1. Introduction
      • 1.2. Device Creation
        • 1.2.1. Poll Mode Driver Front End
        • 1.2.2. Switching Back End
        • 1.2.3. Packet Reception
        • 1.2.4. Packet Transmission
      • 1.3. Running the Application
        • 1.3.1. Validated Environment
        • 1.3.2. Xen Host Prerequisites
        • 1.3.3. Building and Running the Switching Backend
        • 1.3.4. Xen PMD Frontend Prerequisites
        • 1.3.5. Building and Running the Front End
        • 1.3.6. Usage Examples: Injecting a Packet Stream Using a Packet Generator
  • Programmer’s Guide
    • 1. Introduction
      • 1.1. Documentation Roadmap
      • 1.2. Related Publications
    • 2. Overview
      • 2.1. Development Environment
      • 2.2. Environment Abstraction Layer
      • 2.3. Core Components
        • 2.3.1. Ring Manager (librte_ring)
        • 2.3.2. Memory Pool Manager (librte_mempool)
        • 2.3.3. Network Packet Buffer Management (librte_mbuf)
        • 2.3.4. Timer Manager (librte_timer)
      • 2.4. Ethernet* Poll Mode Driver Architecture
      • 2.5. Packet Forwarding Algorithm Support
      • 2.6. librte_net
    • 3. Environment Abstraction Layer
      • 3.1. EAL in a Linux-userland Execution Environment
        • 3.1.1. Initialization and Core Launching
        • 3.1.2. Multi-process Support
        • 3.1.3. Memory Mapping Discovery and Memory Reservation
        • 3.1.4. Xen Dom0 support without hugetlbfs
        • 3.1.5. PCI Access
        • 3.1.6. Per-lcore and Shared Variables
        • 3.1.7. Logs
        • 3.1.8. CPU Feature Identification
        • 3.1.9. User Space Interrupt Event
        • 3.1.10. Blacklisting
        • 3.1.11. Misc Functions
      • 3.2. Memory Segments and Memory Zones (memzone)
      • 3.3. Multiple pthread
        • 3.3.1. EAL pthread and lcore Affinity
        • 3.3.2. non-EAL pthread support
        • 3.3.3. Public Thread API
        • 3.3.4. Known Issues
        • 3.3.5. cgroup control
      • 3.4. Malloc
        • 3.4.1. Cookies
        • 3.4.2. Alignment and NUMA Constraints
        • 3.4.3. Use Cases
        • 3.4.4. Internal Implementation
    • 4. Ring Library
      • 4.1. References for Ring Implementation in FreeBSD*
      • 4.2. Lockless Ring Buffer in Linux*
      • 4.3. Additional Features
        • 4.3.1. Name
        • 4.3.2. Water Marking
        • 4.3.3. Debug
      • 4.4. Use Cases
      • 4.5. Anatomy of a Ring Buffer
        • 4.5.1. Single Producer Enqueue
        • 4.5.2. Single Consumer Dequeue
        • 4.5.3. Multiple Producers Enqueue
        • 4.5.4. Modulo 32-bit Indexes
      • 4.6. References
    • 5. Mempool Library
      • 5.1. Cookies
      • 5.2. Stats
      • 5.3. Memory Alignment Constraints
      • 5.4. Local Cache
      • 5.5. Mempool Handlers
      • 5.6. Use Cases
    • 6. Mbuf Library
      • 6.1. Design of Packet Buffers
      • 6.2. Buffers Stored in Memory Pools
      • 6.3. Constructors
      • 6.4. Allocating and Freeing mbufs
      • 6.5. Manipulating mbufs
      • 6.6. Meta Information
      • 6.7. Direct and Indirect Buffers
      • 6.8. Debug
      • 6.9. Use Cases
    • 7. Poll Mode Driver
      • 7.1. Requirements and Assumptions
      • 7.2. Design Principles
      • 7.3. Logical Cores, Memory and NIC Queues Relationships
      • 7.4. Device Identification and Configuration
        • 7.4.1. Device Identification
        • 7.4.2. Device Configuration
        • 7.4.3. On-the-Fly Configuration
        • 7.4.4. Configuration of Transmit and Receive Queues
        • 7.4.5. Hardware Offload
      • 7.5. Poll Mode Driver API
        • 7.5.1. Generalities
        • 7.5.2. Generic Packet Representation
        • 7.5.3. Ethernet Device API
        • 7.5.4. Extended Statistics API
    • 8. Cryptography Device Library
      • 8.1. Design Principles
      • 8.2. Device Management
        • 8.2.1. Device Creation
        • 8.2.2. Device Identification
        • 8.2.3. Device Configuration
        • 8.2.4. Configuration of Queue Pairs
        • 8.2.5. Logical Cores, Memory and Queue Pair Relationships
      • 8.3. Device Features and Capabilities
        • 8.3.1. Device Features
        • 8.3.2. Device Operation Capabilities
        • 8.3.3. Capabilities Discovery
      • 8.4. Operation Processing
        • 8.4.1. Enqueue / Dequeue Burst APIs
        • 8.4.2. Operation Representation
        • 8.4.3. Operation Management and Allocation
      • 8.5. Symmetric Cryptography Support
        • 8.5.1. Session and Session Management
        • 8.5.2. Transforms and Transform Chaining
        • 8.5.3. Symmetric Operations
      • 8.6. Asymmetric Cryptography
        • 8.6.1. Crypto Device API
    • 9. IVSHMEM Library
      • 9.1. IVSHMEM Library API Overview
      • 9.2. IVSHMEM Environment Configuration
      • 9.3. Best Practices for Writing IVSHMEM Applications
      • 9.4. Best Practices for Running IVSHMEM Applications
    • 10. Link Bonding Poll Mode Driver Library
      • 10.1. Link Bonding Modes Overview
      • 10.2. Implementation Details
        • 10.2.1. Link Status Change Interrupts / Polling
        • 10.2.2. Requirements / Limitations
        • 10.2.3. Configuration
      • 10.3. Using Link Bonding Devices
        • 10.3.1. Using the Poll Mode Driver from an Application
        • 10.3.2. Using Link Bonding Devices from the EAL Command Line
    • 11. Timer Library
      • 11.1. Implementation Details
      • 11.2. Use Cases
      • 11.3. References
    • 12. Hash Library
      • 12.1. Hash API Overview
      • 12.2. Multi-process support
      • 12.3. Implementation Details
      • 12.4. Entry distribution in hash table
      • 12.5. Use Case: Flow Classification
      • 12.6. References
    • 13. LPM Library
      • 13.1. LPM API Overview
      • 13.2. Implementation Details
        • 13.2.1. Addition
        • 13.2.2. Lookup
        • 13.2.3. Limitations in the Number of Rules
        • 13.2.4. Use Case: IPv4 Forwarding
        • 13.2.5. References
    • 14. LPM6 Library
      • 14.1. LPM6 API Overview
        • 14.1.1. Implementation Details
        • 14.1.2. Addition
        • 14.1.3. Lookup
        • 14.1.4. Limitations in the Number of Rules
      • 14.2. Use Case: IPv6 Forwarding
    • 15. Packet Distributor Library
      • 15.1. Distributor Core Operation
      • 15.2. Worker Operation
    • 16. Reorder Library
      • 16.1. Operation
      • 16.2. Implementation Details
      • 16.3. Use Case: Packet Distributor
    • 17. IP Fragmentation and Reassembly Library
      • 17.1. Packet fragmentation
      • 17.2. Packet reassembly
        • 17.2.1. IP Fragment Table
        • 17.2.2. Packet Reassembly
        • 17.2.3. Debug logging and Statistics Collection
    • 18. The librte_pdump Library
      • 18.1. Operation
      • 18.2. Implementation Details
      • 18.3. Use Case: Packet Capturing
    • 19. Multi-process Support
      • 19.1. Memory Sharing
      • 19.2. Deployment Models
        • 19.2.1. Symmetric/Peer Processes
        • 19.2.2. Asymmetric/Non-Peer Processes
        • 19.2.3. Running Multiple Independent DPDK Applications
        • 19.2.4. Running Multiple Independent Groups of DPDK Applications
      • 19.3. Multi-process Limitations
    • 20. Kernel NIC Interface
      • 20.1. The DPDK KNI Kernel Module
      • 20.2. KNI Creation and Deletion
      • 20.3. DPDK mbuf Flow
      • 20.4. Use Case: Ingress
      • 20.5. Use Case: Egress
      • 20.6. Ethtool
      • 20.7. Link state and MTU change
      • 20.8. KNI Working as a Kernel vHost Backend
        • 20.8.1. Overview
        • 20.8.2. Packet Flow
        • 20.8.3. Sample Usage
        • 20.8.4. Compatibility Configure Option
    • 21. Thread Safety of DPDK Functions
      • 21.1. Fast-Path APIs
      • 21.2. Performance Insensitive API
      • 21.3. Library Initialization
      • 21.4. Interrupt Thread
    • 22. Quality of Service (QoS) Framework
      • 22.1. Packet Pipeline with QoS Support
      • 22.2. Hierarchical Scheduler
        • 22.2.1. Overview
        • 22.2.2. Scheduling Hierarchy
        • 22.2.3. Application Programming Interface (API)
        • 22.2.4. Implementation
        • 22.2.5. Worst Case Scenarios for Performance
      • 22.3. Dropper
        • 22.3.1. Configuration
        • 22.3.2. Enqueue Operation
        • 22.3.3. Queue Empty Operation
        • 22.3.4. Source Files Location
        • 22.3.5. Integration with the DPDK QoS Scheduler
        • 22.3.6. Integration with the DPDK QoS Scheduler Sample Application
        • 22.3.7. Application Programming Interface (API)
      • 22.4. Traffic Metering
        • 22.4.1. Functional Overview
        • 22.4.2. Implementation Overview
    • 23. Power Management
      • 23.1. CPU Frequency Scaling
      • 23.2. Core-load Throttling through C-States
      • 23.3. API Overview of the Power Library
      • 23.4. Use Cases
      • 23.5. References
    • 24. Packet Classification and Access Control
      • 24.1. Overview
        • 24.1.1. Rule definition
        • 24.1.2. RT memory size limit
        • 24.1.3. Classification methods
      • 24.2. Application Programming Interface (API) Usage
        • 24.2.1. Classify with Multiple Categories
    • 25. Packet Framework
      • 25.1. Design Objectives
      • 25.2. Overview
      • 25.3. Port Library Design
        • 25.3.1. Port Types
        • 25.3.2. Port Interface
      • 25.4. Table Library Design
        • 25.4.1. Table Types
        • 25.4.2. Table Interface
        • 25.4.3. Hash Table Design
      • 25.5. Pipeline Library Design
        • 25.5.1. Connectivity of Ports and Tables
        • 25.5.2. Port Actions
        • 25.5.3. Table Actions
      • 25.6. Multicore Scaling
        • 25.6.1. Shared Data Structures
      • 25.7. Interfacing with Accelerators
    • 26. Vhost Library
      • 26.1. Vhost API Overview
      • 26.2. Vhost Implementations
        • 26.2.1. Vhost-cuse implementation
        • 26.2.2. Vhost-user implementation
      • 26.3. Vhost supported vSwitch reference
    • 27. Port Hotplug Framework
      • 27.1. Overview
      • 27.2. Port Hotplug API overview
      • 27.3. Reference
      • 27.4. Limitations
    • 28. Source Organization
      • 28.1. Makefiles and Config
      • 28.2. Libraries
      • 28.3. Drivers
      • 28.4. Applications
    • 29. Development Kit Build System
      • 29.1. Building the Development Kit Binary
        • 29.1.1. Build Directory Concept
      • 29.2. Building External Applications
      • 29.3. Makefile Description
        • 29.3.1. General Rules For DPDK Makefiles
        • 29.3.2. Makefile Types
        • 29.3.3. Internally Generated Build Tools
        • 29.3.4. Useful Variables Provided by the Build System
        • 29.3.5. Variables that Can be Set/Overridden in a Makefile Only
        • 29.3.6. Variables that can be Set/Overridden by the User on the Command Line Only
        • 29.3.7. Variables that Can be Set/Overridden by the User in a Makefile or Command Line
    • 30. Development Kit Root Makefile Help
      • 30.1. Configuration Targets
      • 30.2. Build Targets
      • 30.3. Install Targets
      • 30.4. Test Targets
      • 30.5. Documentation Targets
      • 30.6. Deps Targets
      • 30.7. Misc Targets
      • 30.8. Other Useful Command-line Variables
      • 30.9. Make in a Build Directory
      • 30.10. Compiling for Debug
    • 31. Extending the DPDK
      • 31.1. Example: Adding a New Library libfoo
        • 31.1.1. Example: Using libfoo in the Test Application
    • 32. Building Your Own Application
      • 32.1. Compiling a Sample Application in the Development Kit Directory
      • 32.2. Build Your Own Application Outside the Development Kit
      • 32.3. Customizing Makefiles
        • 32.3.1. Application Makefile
        • 32.3.2. Library Makefile
        • 32.3.3. Customize Makefile Actions
    • 33. External Application/Library Makefile help
      • 33.1. Prerequisites
      • 33.2. Build Targets
      • 33.3. Help Targets
      • 33.4. Other Useful Command-line Variables
      • 33.5. Make from Another Directory
    • 34. Performance Optimization Guidelines
      • 34.1. Introduction
    • 35. Writing Efficient Code
      • 35.1. Memory
        • 35.1.1. Memory Copy: Do not Use libc in the Data Plane
        • 35.1.2. Memory Allocation
        • 35.1.3. Concurrent Access to the Same Memory Area
        • 35.1.4. NUMA
        • 35.1.5. Distribution Across Memory Channels
      • 35.2. Communication Between lcores
      • 35.3. PMD Driver
        • 35.3.1. Lower Packet Latency
      • 35.4. Locks and Atomic Operations
      • 35.5. Coding Considerations
        • 35.5.1. Inline Functions
        • 35.5.2. Branch Prediction
      • 35.6. Setting the Target CPU Type
    • 36. Profile Your Application
    • 37. Glossary
  • Network Interface Controller Drivers
    • 1. Overview of Networking Drivers
    • 2. BNX2X Poll Mode Driver
      • 2.1. Supported Features
      • 2.2. Non-supported Features
      • 2.3. Co-existence considerations
      • 2.4. Supported QLogic NICs
      • 2.5. Prerequisites
      • 2.6. Pre-Installation Configuration
        • 2.6.1. Config File Options
        • 2.6.2. Driver Compilation
      • 2.7. Linux
        • 2.7.1. Linux Installation
        • 2.7.2. Sample Application Notes
        • 2.7.3. SR-IOV: Prerequisites and sample Application Notes
    • 3. bnxt poll mode driver library
      • 3.1. Limitations
    • 4. CXGBE Poll Mode Driver
      • 4.1. Features
      • 4.2. Limitations
      • 4.3. Supported Chelsio T5 NICs
      • 4.4. Prerequisites
      • 4.5. Pre-Installation Configuration
        • 4.5.1. Config File Options
        • 4.5.2. Driver Compilation
      • 4.6. Linux
        • 4.6.1. Linux Installation
        • 4.6.2. Running testpmd
      • 4.7. FreeBSD
        • 4.7.1. FreeBSD Installation
        • 4.7.2. Running testpmd
      • 4.8. Sample Application Notes
        • 4.8.1. Enable/Disable Flow Control
        • 4.8.2. Jumbo Mode
    • 5. Driver for VM Emulated Devices
      • 5.1. Validated Hypervisors
      • 5.2. Recommended Guest Operating System in Virtual Machine
      • 5.3. Setting Up a KVM Virtual Machine
      • 5.4. Known Limitations of Emulated Devices
    • 6. ENA Poll Mode Driver
      • 6.1. Overview
      • 6.2. Management Interface
      • 6.3. Data Path Interface
      • 6.4. Configuration information
      • 6.5. Building DPDK
      • 6.6. Supported ENA adapters
      • 6.7. Supported Operating Systems
      • 6.8. Supported features
      • 6.9. Unsupported features
      • 6.10. Prerequisites
      • 6.11. Usage example
    • 7. ENIC Poll Mode Driver
      • 7.1. How to obtain ENIC PMD integrated DPDK
      • 7.2. Configuration information
      • 7.3. Limitations
      • 7.4. How to build the suite?
      • 7.5. Supported Cisco VIC adapters
      • 7.6. Supported Operating Systems
      • 7.7. Supported features
      • 7.8. Known bugs and Unsupported features in this release
      • 7.9. Prerequisites
      • 7.10. Additional Reference
      • 7.11. Contact Information
    • 8. FM10K Poll Mode Driver
      • 8.1. FTAG Based Forwarding of FM10K
      • 8.2. Vector PMD for FM10K
        • 8.2.1. RX Constraints
        • 8.2.2. TX Constraint
      • 8.3. Limitations
        • 8.3.1. Switch manager
        • 8.3.2. CRC striping
        • 8.3.3. Maximum packet length
        • 8.3.4. Statistic Polling Frequency
        • 8.3.5. Interrupt mode
    • 9. I40E Poll Mode Driver
      • 9.1. Features
      • 9.2. Prerequisites
      • 9.3. Pre-Installation Configuration
        • 9.3.1. Config File Options
        • 9.3.2. Driver Compilation
      • 9.4. Linux
        • 9.4.1. Running testpmd
        • 9.4.2. SR-IOV: Prerequisites and sample Application Notes
      • 9.5. Sample Application Notes
        • 9.5.1. Vlan filter
        • 9.5.2. Flow Director
        • 9.5.3. Floating VEB
    • 10. IXGBE Driver
      • 10.1. Vector PMD for IXGBE
        • 10.1.1. RX Constraints
        • 10.1.2. TX Constraint
        • 10.1.3. Sample Application Notes
      • 10.2. Malicious Driver Detection not Supported
      • 10.3. Statistics
    • 11. I40E/IXGBE/IGB Virtual Function Driver
      • 11.1. SR-IOV Mode Utilization in a DPDK Environment
        • 11.1.1. Physical and Virtual Function Infrastructure
        • 11.1.2. Validated Hypervisors
        • 11.1.3. Expected Guest Operating System in Virtual Machine
      • 11.2. Setting Up a KVM Virtual Machine Monitor
      • 11.3. DPDK SR-IOV PMD PF/VF Driver Usage Model
        • 11.3.1. Fast Host-based Packet Processing
      • 11.4. SR-IOV (PF/VF) Approach for Inter-VM Communication
    • 12. MLX4 poll mode driver library
      • 12.1. Implementation details
      • 12.2. Features
      • 12.3. Limitations
      • 12.4. Configuration
        • 12.4.1. Compilation options
        • 12.4.2. Environment variables
        • 12.4.3. Run-time configuration
        • 12.4.4. Kernel module parameters
      • 12.5. Prerequisites
        • 12.5.1. Getting Mellanox OFED
      • 12.6. Usage example
    • 13. MLX5 poll mode driver
      • 13.1. Implementation details
      • 13.2. Features
      • 13.3. Limitations
      • 13.4. Configuration
        • 13.4.1. Compilation options
        • 13.4.2. Environment variables
        • 13.4.3. Run-time configuration
      • 13.5. Prerequisites
        • 13.5.1. Getting Mellanox OFED
      • 13.6. Notes for testpmd
      • 13.7. Usage example
    • 14. NFP poll mode driver library
      • 14.1. Dependencies
      • 14.2. Building the software
      • 14.3. System configuration
    • 15. QEDE Poll Mode Driver
      • 15.1. Supported Features
      • 15.2. Non-supported Features
      • 15.3. Supported QLogic Adapters
      • 15.4. Prerequisites
        • 15.4.1. Performance note
        • 15.4.2. Config File Options
        • 15.4.3. Driver Compilation
        • 15.4.4. Sample Application Notes
        • 15.4.5. SR-IOV: Prerequisites and Sample Application Notes
    • 16. SZEDATA2 poll mode driver library
      • 16.1. Prerequisites
      • 16.2. Configuration
      • 16.3. Using the SZEDATA2 PMD
      • 16.4. Example of usage
    • 17. ThunderX NICVF Poll Mode Driver
      • 17.1. Features
      • 17.2. Supported ThunderX SoCs
      • 17.3. Prerequisites
      • 17.4. Pre-Installation Configuration
        • 17.4.1. Config File Options
        • 17.4.2. Driver Compilation
      • 17.5. Linux
        • 17.5.1. Running testpmd
        • 17.5.2. SR-IOV: Prerequisites and sample Application Notes
      • 17.6. Limitations
        • 17.6.1. CRC striping
        • 17.6.2. Maximum packet length
        • 17.6.3. Maximum packet segments
        • 17.6.4. Limited VFs
    • 18. Poll Mode Driver for Emulated Virtio NIC
      • 18.1. Virtio Implementation in DPDK
      • 18.2. Features and Limitations of virtio PMD
      • 18.3. Prerequisites
      • 18.4. Virtio with kni vhost Back End
      • 18.5. Virtio with qemu virtio Back End
      • 18.6. Virtio PMD Rx/Tx Callbacks
    • 19. Poll Mode Driver that wraps vhost library
      • 19.1. Vhost Implementation in DPDK
      • 19.2. Features and Limitations of vhost PMD
      • 19.3. Vhost PMD arguments
      • 19.4. Vhost PMD event handling
      • 19.5. Vhost PMD with testpmd application
    • 20. Poll Mode Driver for Paravirtual VMXNET3 NIC
      • 20.1. VMXNET3 Implementation in the DPDK
      • 20.2. Features and Limitations of VMXNET3 PMD
      • 20.3. Prerequisites
      • 20.4. VMXNET3 with a Native NIC Connected to a vSwitch
      • 20.5. VMXNET3 Chaining VMs Connected to a vSwitch
    • 21. Libpcap and Ring Based Poll Mode Drivers
      • 21.1. Using the Drivers from the EAL Command Line
        • 21.1.1. Libpcap-based PMD
        • 21.1.2. Rings-based PMD
        • 21.1.3. Using the Poll Mode Driver from an Application
  • Crypto Device Drivers
    • 1. Crypto Device Supported Functionality Matrices
    • 2. AES-NI Multi Buffer Crypto Poll Mode Driver
      • 2.1. Features
      • 2.2. Limitations
      • 2.3. Installation
      • 2.4. Initialization
    • 3. AES-NI GCM Crypto Poll Mode Driver
      • 3.1. Features
      • 3.2. Initialization
      • 3.3. Limitations
    • 4. KASUMI Crypto Poll Mode Driver
      • 4.1. Features
      • 4.2. Limitations
      • 4.3. Installation
      • 4.4. Initialization
    • 5. Null Crypto Poll Mode Driver
      • 5.1. Features
      • 5.2. Limitations
      • 5.3. Installation
      • 5.4. Initialization
    • 6. SNOW 3G Crypto Poll Mode Driver
      • 6.1. Features
      • 6.2. Limitations
      • 6.3. Installation
      • 6.4. Initialization
    • 7. Quick Assist Crypto Poll Mode Driver
      • 7.1. Features
      • 7.2. Limitations
      • 7.3. Installation
      • 7.4. Installation using 01.org QAT driver
      • 7.5. Installation using kernel.org driver
      • 7.6. Binding the available VFs to the DPDK UIO driver
  • Sample Applications User Guide
    • 1. Introduction
      • 1.1. Documentation Roadmap
    • 2. Command Line Sample Application
      • 2.1. Overview
      • 2.2. Compiling the Application
      • 2.3. Running the Application
      • 2.4. Explanation
        • 2.4.1. EAL Initialization and cmdline Start
        • 2.4.2. Defining a cmdline Context
    • 3. Ethtool Sample Application
      • 3.1. Compiling the Application
      • 3.2. Running the Application
      • 3.3. Using the application
      • 3.4. Explanation
        • 3.4.1. Packet Reflector
        • 3.4.2. Ethtool Shell
      • 3.5. Ethtool interface
    • 4. Exception Path Sample Application
      • 4.1. Overview
      • 4.2. Compiling the Application
      • 4.3. Running the Application
        • 4.3.1. Getting Statistics
      • 4.4. Explanation
        • 4.4.1. Initialization
        • 4.4.2. Packet Forwarding
        • 4.4.3. Managing TAP Interfaces and Bridges
    • 5. Hello World Sample Application
      • 5.1. Compiling the Application
      • 5.2. Running the Application
      • 5.3. Explanation
        • 5.3.1. EAL Initialization
        • 5.3.2. Starting Application Unit Lcores
    • 6. Basic Forwarding Sample Application
      • 6.1. Compiling the Application
      • 6.2. Running the Application
      • 6.3. Explanation
        • 6.3.1. The Main Function
        • 6.3.2. The Port Initialization Function
        • 6.3.3. The Lcores Main
    • 7. RX/TX Callbacks Sample Application
      • 7.1. Compiling the Application
      • 7.2. Running the Application
      • 7.3. Explanation
        • 7.3.1. The Main Function
        • 7.3.2. The Port Initialization Function
        • 7.3.3. The add_timestamps() Callback
        • 7.3.4. The calc_latency() Callback
    • 8. IP Fragmentation Sample Application
      • 8.1. Overview
      • 8.2. Building the Application
      • 8.3. Running the Application
    • 9. IPv4 Multicast Sample Application
      • 9.1. Overview
      • 9.2. Building the Application
      • 9.3. Running the Application
      • 9.4. Explanation
        • 9.4.1. Memory Pool Initialization
        • 9.4.2. Hash Initialization
        • 9.4.3. Forwarding
        • 9.4.4. Buffer Cloning
    • 10. IP Reassembly Sample Application
      • 10.1. Overview
      • 10.2. Compiling the Application
      • 10.3. Running the Application
      • 10.4. Explanation
        • 10.4.1. IPv4 Fragment Table Initialization
        • 10.4.2. Mempools Initialization
        • 10.4.3. Packet Reassembly and Forwarding
        • 10.4.4. Debug logging and Statistics Collection
    • 11. Kernel NIC Interface Sample Application
      • 11.1. Overview
      • 11.2. Compiling the Application
      • 11.3. Loading the Kernel Module
      • 11.4. Running the Application
      • 11.5. KNI Operations
      • 11.6. Explanation
        • 11.6.1. Initialization
        • 11.6.2. Packet Forwarding
        • 11.6.3. Callbacks for Kernel Requests
    • 12. Keep Alive Sample Application
      • 12.1. Overview
      • 12.2. Compiling the Application
      • 12.3. Running the Application
      • 12.4. Explanation
    • 13. L2 Forwarding with Crypto Sample Application
      • 13.1. Overview
      • 13.2. Compiling the Application
      • 13.3. Running the Application
      • 13.4. Explanation
        • 13.4.1. Crypto operation specification
        • 13.4.2. Crypto device initialization
        • 13.4.3. Session creation
        • 13.4.4. Crypto operation creation
        • 13.4.5. Crypto operation enqueuing/dequeuing
    • 14. L2 Forwarding Sample Application (in Real and Virtualized Environments) with core load statistics
      • 14.1. Overview
        • 14.1.1. Virtual Function Setup Instructions
      • 14.2. Compiling the Application
      • 14.3. Running the Application
      • 14.4. Explanation
        • 14.4.1. Command Line Arguments
        • 14.4.2. Mbuf Pool Initialization
        • 14.4.3. Driver Initialization
        • 14.4.4. RX Queue Initialization
        • 14.4.5. TX Queue Initialization
        • 14.4.6. Jobs statistics initialization
        • 14.4.7. Main loop
        • 14.4.8. Receive, Process and Transmit Packets
    • 15. L2 Forwarding Sample Application (in Real and Virtualized Environments)
      • 15.1. Overview
        • 15.1.1. Virtual Function Setup Instructions
      • 15.2. Compiling the Application
      • 15.3. Running the Application
      • 15.4. Explanation
        • 15.4.1. Command Line Arguments
        • 15.4.2. Mbuf Pool Initialization
        • 15.4.3. Driver Initialization
        • 15.4.4. RX Queue Initialization
        • 15.4.5. TX Queue Initialization
        • 15.4.6. Receive, Process and Transmit Packets
    • 16. L2 Forwarding Sample Application with Cache Allocation Technology (CAT)
      • 16.1. Compiling the Application
      • 16.2. Running the Application
      • 16.3. Explanation
        • 16.3.1. The Main Function
    • 17. L3 Forwarding Sample Application
      • 17.1. Overview
      • 17.2. Compiling the Application
      • 17.3. Running the Application
      • 17.4. Explanation
        • 17.4.1. Hash Initialization
        • 17.4.2. LPM Initialization
        • 17.4.3. Packet Forwarding for Hash-based Lookups
        • 17.4.4. Packet Forwarding for LPM-based Lookups
    • 18. L3 Forwarding with Power Management Sample Application
      • 18.1. Introduction
      • 18.2. Overview
      • 18.3. Compiling the Application
      • 18.4. Running the Application
      • 18.5. Explanation
        • 18.5.1. Power Library Initialization
        • 18.5.2. Monitoring Loads of Rx Queues
        • 18.5.3. P-State Heuristic Algorithm
        • 18.5.4. C-State Heuristic Algorithm
    • 19. L3 Forwarding with Access Control Sample Application
      • 19.1. Overview
        • 19.1.1. Tuple Packet Syntax
        • 19.1.2. Access Rule Syntax
        • 19.1.3. ACL and Route Rules
        • 19.1.4. Rules File Example
        • 19.1.5. Application Phases
      • 19.2. Compiling the Application
      • 19.3. Running the Application
      • 19.4. Explanation
        • 19.4.1. Parse Rules from File
        • 19.4.2. Setting Up the ACL Context
    • 20. L3 Forwarding in a Virtualization Environment Sample Application
      • 20.1. Overview
      • 20.2. Compiling the Application
      • 20.3. Running the Application
      • 20.4. Explanation
    • 21. Link Status Interrupt Sample Application
      • 21.1. Overview
      • 21.2. Compiling the Application
      • 21.3. Running the Application
      • 21.4. Explanation
        • 21.4.1. Command Line Arguments
        • 21.4.2. Mbuf Pool Initialization
        • 21.4.3. Driver Initialization
        • 21.4.4. Interrupt Callback Registration
        • 21.4.5. RX Queue Initialization
        • 21.4.6. TX Queue Initialization
        • 21.4.7. Receive, Process and Transmit Packets
    • 22. Load Balancer Sample Application
      • 22.1. Overview
        • 22.1.1. I/O RX Logical Cores
        • 22.1.2. I/O TX Logical Cores
        • 22.1.3. Worker Logical Cores
      • 22.2. Compiling the Application
      • 22.3. Running the Application
      • 22.4. Explanation
        • 22.4.1. Application Configuration
        • 22.4.2. NUMA Support
    • 23. Multi-process Sample Application
      • 23.1. Example Applications
        • 23.1.1. Building the Sample Applications
        • 23.1.2. Basic Multi-process Example
        • 23.1.3. Symmetric Multi-process Example
        • 23.1.4. Client-Server Multi-process Example
        • 23.1.5. Master-slave Multi-process Example
    • 24. QoS Metering Sample Application
      • 24.1. Overview
      • 24.2. Compiling the Application
      • 24.3. Running the Application
      • 24.4. Explanation
    • 25. QoS Scheduler Sample Application
      • 25.1. Overview
      • 25.2. Compiling the Application
      • 25.3. Running the Application
        • 25.3.1. Interactive mode
        • 25.3.2. Example
      • 25.4. Explanation
    • 26. Intel® QuickAssist Technology Sample Application
      • 26.1. Overview
        • 26.1.1. Setup
      • 26.2. Building the Application
      • 26.3. Running the Application
        • 26.3.1. Intel® QuickAssist Technology Configuration Files
        • 26.3.2. Traffic Generator Setup and Application Startup
    • 27. Quota and Watermark Sample Application
      • 27.1. Overview
      • 27.2. Compiling the Application
      • 27.3. Running the Application
        • 27.3.1. Running the Core Application
        • 27.3.2. Running the Control Application
      • 27.4. Code Overview
        • 27.4.1. Core Application - qw
        • 27.4.2. Control Application - qwctl
    • 28. Timer Sample Application
      • 28.1. Compiling the Application
      • 28.2. Running the Application
      • 28.3. Explanation
        • 28.3.1. Initialization and Main Loop
        • 28.3.2. Managing Timers
    • 29. Packet Ordering Application
      • 29.1. Overview
      • 29.2. Compiling the Application
      • 29.3. Running the Application
        • 29.3.1. Application Command Line
    • 30. VMDQ and DCB Forwarding Sample Application
      • 30.1. Overview
      • 30.2. Compiling the Application
      • 30.3. Running the Application
      • 30.4. Explanation
        • 30.4.1. Initialization
        • 30.4.2. Statistics Display
    • 31. Vhost Sample Application
      • 31.1. Background
      • 31.2. Sample Code Overview
      • 31.3. Supported Distributions
      • 31.4. Prerequisites
        • 31.4.1. Installing Packages on the Host (vhost cuse required)
        • 31.4.2. QEMU simulator
        • 31.4.3. Setting up the Execution Environment
        • 31.4.4. Setting up the Guest Execution Environment
      • 31.5. Compiling the Sample Code
      • 31.6. Running the Sample Code
        • 31.6.1. Parameters
      • 31.7. Running the Virtual Machine (QEMU)
        • 31.7.1. Redirecting QEMU to vhost-net Sample Code (vhost cuse)
        • 31.7.2. Mapping the Virtual Machine’s Memory
        • 31.7.3. QEMU Wrapper Script
        • 31.7.4. Libvirt Integration
        • 31.7.5. Common Issues
      • 31.8. Running DPDK in the Virtual Machine
        • 31.8.1. Testpmd MAC Forwarding
        • 31.8.2. Running Testpmd
      • 31.9. Passing Traffic to the Virtual Machine Device
      • 31.10. Running virtio_user with vhost-switch
    • 32. Netmap Compatibility Sample Application
      • 32.1. Introduction
      • 32.2. Available APIs
      • 32.3. Caveats
      • 32.4. Porting Netmap Applications
      • 32.5. Compiling the “bridge” Sample Application
      • 32.6. Running the “bridge” Sample Application
    • 33. Internet Protocol (IP) Pipeline Application
      • 33.1. Application overview
      • 33.2. Design goals
        • 33.2.1. Rapid development
        • 33.2.2. Flexibility
        • 33.2.3. Performance
        • 33.2.4. Debug capabilities
      • 33.3. Running the application
      • 33.4. Application stages
        • 33.4.1. Configuration
        • 33.4.2. Configuration checking
        • 33.4.3. Initialization
        • 33.4.4. Run-time
      • 33.5. Configuration file syntax
        • 33.5.1. Syntax overview
        • 33.5.2. Application resources present in the configuration file
        • 33.5.3. Rules to parse the configuration file
        • 33.5.4. PIPELINE section
        • 33.5.5. MEMPOOL section
        • 33.5.6. LINK section
        • 33.5.7. RXQ section
        • 33.5.8. TXQ section
        • 33.5.9. SWQ section
        • 33.5.10. TM section
        • 33.5.11. KNI section
        • 33.5.12. SOURCE section
        • 33.5.13. SINK section
        • 33.5.14. MSGQ section
        • 33.5.15. EAL section
      • 33.6. Library of pipeline types
        • 33.6.1. Pipeline module
        • 33.6.2. List of pipeline types
      • 33.7. Command Line Interface (CLI)
        • 33.7.1. Global CLI commands
        • 33.7.2. CLI commands for link configuration
        • 33.7.3. CLI commands common for all pipeline types
        • 33.7.4. Pipeline type specific CLI commands
    • 34. Test Pipeline Application
      • 34.1. Overview
      • 34.2. Compiling the Application
      • 34.3. Running the Application
        • 34.3.1. Application Command Line
        • 34.3.2. Table Types and Behavior
        • 34.3.3. Input Traffic
    • 35. Distributor Sample Application
      • 35.1. Overview
      • 35.2. Compiling the Application
      • 35.3. Running the Application
      • 35.4. Explanation
      • 35.5. Debug Logging Support
      • 35.6. Statistics
      • 35.7. Application Initialization
    • 36. VM Power Management Application
      • 36.1. Introduction
      • 36.2. Overview
        • 36.2.1. Performance Considerations
      • 36.3. Configuration
        • 36.3.1. BIOS
        • 36.3.2. Host Operating System
        • 36.3.3. Hypervisor Channel Configuration
      • 36.4. Compiling and Running the Host Application
        • 36.4.1. Compiling
        • 36.4.2. Running
      • 36.5. Compiling and Running the Guest Applications
        • 36.5.1. Compiling
        • 36.5.2. Running
    • 37. TEP termination Sample Application
      • 37.1. Background
      • 37.2. Sample Code Overview
      • 37.3. Supported Distributions
      • 37.4. Prerequisites
      • 37.5. Compiling the Sample Code
      • 37.6. Running the Sample Code
        • 37.6.1. Parameters
      • 37.7. Running the Virtual Machine (QEMU)
      • 37.8. Running DPDK in the Virtual Machine
      • 37.9. Passing Traffic to the Virtual Machine Device
    • 38. PTP Client Sample Application
      • 38.1. Limitations
      • 38.2. How the Application Works
      • 38.3. Compiling the Application
      • 38.4. Running the Application
      • 38.5. Code Explanation
        • 38.5.1. The Main Function
        • 38.5.2. The Lcores Main
        • 38.5.3. PTP parsing
    • 39. Performance Thread Sample Application
      • 39.1. Overview
      • 39.2. Compiling the Application
      • 39.3. Running the Application
        • 39.3.1. Running with L-threads
        • 39.3.2. Running with EAL threads
        • 39.3.3. Examples
      • 39.4. Explanation
        • 39.4.1. Mode of operation with EAL threads
        • 39.4.2. Mode of operation with L-threads
        • 39.4.3. CPU load statistics
      • 39.5. The L-thread subsystem
        • 39.5.1. Comparison between L-threads and POSIX pthreads
        • 39.5.2. Constraints and performance implications when using L-threads
        • 39.5.3. Porting legacy code to run on L-threads
        • 39.5.4. Pthread shim
        • 39.5.5. L-thread Diagnostics
    • 40. IPsec Security Gateway Sample Application
      • 40.1. Overview
      • 40.2. Constraints
      • 40.3. Compiling the Application
      • 40.4. Running the Application
      • 40.5. Configurations
        • 40.5.1. Security Policy Initialization
        • 40.5.2. Security Association Initialization
        • 40.5.3. Routing Initialization
  • Tool User Guides
    • 1. dpdk-procinfo Application
      • 1.1. Running the Application
        • 1.1.1. Parameters
    • 2. dpdk-pdump Application
      • 2.1. Running the Application
        • 2.1.1. The --pdump parameters
      • 2.2. Example
    • 3. dpdk-pmdinfo Application
      • 3.1. Running the Application
    • 4. dpdk-devbind Application
      • 4.1. Running the Application
      • 4.2. OPTIONS
      • 4.3. Examples
  • Testpmd Application User Guide
    • 1. Introduction
    • 2. Compiling the Application
    • 3. Running the Application
      • 3.1. EAL Command-line Options
      • 3.2. Testpmd Command-line Options
    • 4. Testpmd Runtime Functions
      • 4.1. Help Functions
      • 4.2. Control Functions
        • 4.2.1. start
        • 4.2.2. start tx_first
        • 4.2.3. stop
        • 4.2.4. quit
      • 4.3. Display Functions
        • 4.3.1. show port
        • 4.3.2. show port rss reta
        • 4.3.3. show port rss-hash
        • 4.3.4. clear port
        • 4.3.5. show (rxq|txq)
        • 4.3.6. show config
        • 4.3.7. set fwd
        • 4.3.8. read rxd
        • 4.3.9. read txd
      • 4.4. Configuration Functions
        • 4.4.1. set default
        • 4.4.2. set verbose
        • 4.4.3. set nbport
        • 4.4.4. set nbcore
        • 4.4.5. set coremask
        • 4.4.6. set portmask
        • 4.4.7. set burst
        • 4.4.8. set txpkts
        • 4.4.9. set txsplit
        • 4.4.10. set corelist
        • 4.4.11. set portlist
        • 4.4.12. vlan set strip
        • 4.4.13. vlan set stripq
        • 4.4.14. vlan set filter
        • 4.4.15. vlan set qinq
        • 4.4.16. vlan set tpid
        • 4.4.17. rx_vlan add
        • 4.4.18. rx_vlan rm
        • 4.4.19. rx_vlan add (for VF)
        • 4.4.20. rx_vlan rm (for VF)
        • 4.4.21. tunnel_filter add
        • 4.4.22. tunnel_filter remove
        • 4.4.23. rx_vxlan_port add
        • 4.4.24. rx_vxlan_port remove
        • 4.4.25. tx_vlan set
        • 4.4.26. tx_vlan set pvid
        • 4.4.27. tx_vlan reset
        • 4.4.28. csum set
        • 4.4.29. csum parse-tunnel
        • 4.4.30. csum show
        • 4.4.31. tso set
        • 4.4.32. tso show
        • 4.4.33. mac_addr add
        • 4.4.34. mac_addr remove
        • 4.4.35. mac_addr add (for VF)
        • 4.4.36. set port-uta
        • 4.4.37. set promisc
        • 4.4.38. set allmulti
        • 4.4.39. set flow_ctrl rx
        • 4.4.40. set pfc_ctrl rx
        • 4.4.41. set stat_qmap
        • 4.4.42. set port - rx/tx (for VF)
        • 4.4.43. set port - mac address filter (for VF)
        • 4.4.44. set port - rx mode (for VF)
        • 4.4.45. set port - tx_rate (for Queue)
        • 4.4.46. set port - tx_rate (for VF)
        • 4.4.47. set port - mirror rule
        • 4.4.48. reset port - mirror rule
        • 4.4.49. set flush_rx
        • 4.4.50. set bypass mode
        • 4.4.51. set bypass event
        • 4.4.52. set bypass timeout
        • 4.4.53. show bypass config
        • 4.4.54. set link up
        • 4.4.55. set link down
        • 4.4.56. E-tag set
      • 4.5. Port Functions
        • 4.5.1. port attach
        • 4.5.2. port detach
        • 4.5.3. port start
        • 4.5.4. port stop
        • 4.5.5. port close
        • 4.5.6. port start/stop queue
        • 4.5.7. port config - speed
        • 4.5.8. port config - queues/descriptors
        • 4.5.9. port config - max-pkt-len
        • 4.5.10. port config - CRC Strip
        • 4.5.11. port config - scatter
        • 4.5.12. port config - TX queue flags
        • 4.5.13. port config - RX Checksum
        • 4.5.14. port config - VLAN
        • 4.5.15. port config - VLAN filter
        • 4.5.16. port config - VLAN strip
        • 4.5.17. port config - VLAN extend
        • 4.5.18. port config - Drop Packets
        • 4.5.19. port config - RSS
        • 4.5.20. port config - RSS Reta
        • 4.5.21. port config - DCB
        • 4.5.22. port config - Burst
        • 4.5.23. port config - Threshold
        • 4.5.24. port config - E-tag
      • 4.6. Link Bonding Functions
        • 4.6.1. create bonded device
        • 4.6.2. add bonding slave
        • 4.6.3. remove bonding slave
        • 4.6.4. set bonding mode
        • 4.6.5. set bonding primary
        • 4.6.6. set bonding mac
        • 4.6.7. set bonding xmit_balance_policy
        • 4.6.8. set bonding mon_period
        • 4.6.9. show bonding config
      • 4.7. Register Functions
        • 4.7.1. read reg
        • 4.7.2. read regfield
        • 4.7.3. read regbit
        • 4.7.4. write reg
        • 4.7.5. write regfield
        • 4.7.6. write regbit
      • 4.8. Filter Functions
        • 4.8.1. ethertype_filter
        • 4.8.2. 2tuple_filter
        • 4.8.3. 5tuple_filter
        • 4.8.4. syn_filter
        • 4.8.5. flex_filter
        • 4.8.6. flow_director_filter
        • 4.8.7. flush_flow_director
        • 4.8.8. flow_director_mask
        • 4.8.9. flow_director_flex_mask
        • 4.8.10. flow_director_flex_payload
        • 4.8.11. get_sym_hash_ena_per_port
        • 4.8.12. set_sym_hash_ena_per_port
        • 4.8.13. get_hash_global_config
        • 4.8.14. set_hash_global_config
        • 4.8.15. set_hash_input_set
        • 4.8.16. set_fdir_input_set
        • 4.8.17. global_config
  • FAQ
    • 1. What does “EAL: map_all_hugepages(): open failed: Permission denied Cannot init memory” mean?
    • 2. If I want to change the number of TLB Hugepages allocated, how do I remove the original pages allocated?
    • 3. If I execute “l2fwd -c f -m 64 -n 3 -- -p 3”, I get the following output, indicating that there are no socket 0 hugepages to allocate the mbuf and ring structures to?
    • 4. I am running a 32-bit DPDK application on a NUMA system, and sometimes the application initializes fine but cannot allocate memory. Why is that happening?
    • 5. On application startup, there is a lot of EAL information printed. Is there any way to reduce this?
    • 6. How can I tune my network application to achieve lower latency?
    • 7. Without NUMA enabled, my network throughput is low, why?
    • 8. I am getting errors about not being able to open files. Why?
    • 9. VF driver for IXGBE devices cannot be initialized
    • 10. Is it safe to add an entry to the hash table while running?
    • 11. What is the purpose of setting iommu=pt?
    • 12. When trying to send packets from an application to itself, meaning smac==dmac, using an Intel(R) 82599 VF, packets are lost.
    • 13. Can I split packet RX to use DPDK and have an application’s higher order functions continue using Linux pthread?
    • 14. Is it possible to exchange data between DPDK processes and regular userspace processes via some shared memory or IPC mechanism?
    • 15. Can the multiple queues in Intel(R) I350 be used with DPDK?
    • 16. How can hugepage-backed memory be shared among multiple processes?
  • How To User Guides
    • 1. Live Migration of VM with SR-IOV VF
      • 1.1. Overview
      • 1.2. Test Setup
      • 1.3. Live Migration steps
        • 1.3.1. On host_server_1: Terminal 1
        • 1.3.2. On host_server_1: Terminal 2
        • 1.3.3. On host_server_1: Terminal 1
        • 1.3.4. On host_server_1: Terminal 2
        • 1.3.5. On host_server_1: Terminal 1
        • 1.3.6. On host_server_2: Terminal 1
        • 1.3.7. On host_server_2: Terminal 2
        • 1.3.8. On host_server_1: Terminal 2
        • 1.3.9. On host_server_2: Terminal 1
        • 1.3.10. On host_server_2: Terminal 2
        • 1.3.11. On host_server_2: Terminal 1
      • 1.4. Sample host scripts
        • 1.4.1. setup_vf_on_212_46.sh
        • 1.4.2. vm_virtio_vf_one_212_46.sh
        • 1.4.3. setup_bridge_on_212_46.sh
        • 1.4.4. connect_to_qemu_mon_on_host.sh
        • 1.4.5. setup_vf_on_212_131.sh
        • 1.4.6. vm_virtio_one_migrate.sh
        • 1.4.7. setup_bridge_on_212_131.sh
      • 1.5. Sample VM scripts
        • 1.5.1. setup_dpdk_in_vm.sh
        • 1.5.2. run_testpmd_bonding_in_vm.sh
      • 1.6. Sample switch configuration
        • 1.6.1. On Switch: Terminal 1
        • 1.6.2. On Switch: Terminal 2
        • 1.6.3. On Switch: Terminal 1
        • 1.6.4. Sample switch configuration script
    • 2. Live Migration of VM with Virtio on host running vhost_user
      • 2.1. Overview
      • 2.2. Test Setup
      • 2.3. Live Migration steps
        • 2.3.1. On host_server_1: Terminal 1
        • 2.3.2. On host_server_1: Terminal 2
        • 2.3.3. On host_server_1: Terminal 3
        • 2.3.4. On host_server_1: Terminal 1
        • 2.3.5. On host_server_1: Terminal 4
        • 2.3.6. On host_server_1: Terminal 1
        • 2.3.7. On host_server_2: Terminal 1
        • 2.3.8. On host_server_2: Terminal 2
        • 2.3.9. On host_server_2: Terminal 3
        • 2.3.10. On host_server_2: Terminal 1
        • 2.3.11. On host_server_2: Terminal 4
        • 2.3.12. On host_server_1: Terminal 4
        • 2.3.13. On host_server_2: Terminal 1
        • 2.3.14. On host_server_2: Terminal 4
        • 2.3.15. On host_server_2: Terminal 1
      • 2.4. Sample host scripts
        • 2.4.1. reset_vf_on_212_46.sh
        • 2.4.2. vm_virtio_vhost_user.sh
        • 2.4.3. connect_to_qemu_mon_on_host.sh
        • 2.4.4. reset_vf_on_212_131.sh
        • 2.4.5. vm_virtio_vhost_user_migrate.sh
      • 2.5. Sample VM scripts
        • 2.5.1. setup_dpdk_virtio_in_vm.sh
        • 2.5.2. run_testpmd_in_vm.sh
    • 3. Flow Bifurcation How-to Guide
      • 3.1. Using Flow Bifurcation on IXGBE in Linux
      • 3.2. Using Flow Bifurcation on I40E in Linux
  • Release Notes
    • 1. Description of Release
    • 2. DPDK Release 16.07
      • 2.1. New Features
      • 2.2. Resolved Issues
        • 2.2.1. EAL
        • 2.2.2. Drivers
        • 2.2.3. Libraries
        • 2.2.4. Examples
        • 2.2.5. Other
      • 2.3. Known Issues
      • 2.4. API Changes
      • 2.5. ABI Changes
      • 2.6. Shared Library Versions
      • 2.7. Tested Platforms
      • 2.8. Tested NICs
      • 2.9. Tested OSes
    • 3. DPDK Release 16.04
      • 3.1. New Features
      • 3.2. Resolved Issues
        • 3.2.1. Drivers
        • 3.2.2. Libraries
        • 3.2.3. Examples
      • 3.3. API Changes
      • 3.4. ABI Changes
      • 3.5. Shared Library Versions
      • 3.6. Tested Platforms
      • 3.7. Tested NICs
    • 4. DPDK Release 2.2
      • 4.1. New Features
      • 4.2. Resolved Issues
        • 4.2.1. EAL
        • 4.2.2. Drivers
        • 4.2.3. Libraries
        • 4.2.4. Examples
        • 4.2.5. Other
      • 4.3. Known Issues
      • 4.4. API Changes
      • 4.5. ABI Changes
      • 4.6. Shared Library Versions
    • 5. DPDK Release 2.1
      • 5.1. New Features
      • 5.2. Resolved Issues
      • 5.3. Known Issues
      • 5.4. API Changes
      • 5.5. ABI Changes
    • 6. DPDK Release 2.0
      • 6.1. New Features
    • 7. DPDK Release 1.8
      • 7.1. New Features
    • 8. Supported Operating Systems
    • 9. Known Issues and Limitations in Legacy Releases
      • 9.1. Unit Test for Link Bonding may fail at test_tlb_tx_burst()
      • 9.2. Pause Frame Forwarding does not work properly on igb
      • 9.3. In packets provided by the PMD, some flags are missing
      • 9.4. The rte_malloc library is not fully implemented
      • 9.5. HPET reading is slow
      • 9.6. HPET timers do not work on the Osage customer reference platform
      • 9.7. Not all variants of supported NIC types have been used in testing
      • 9.8. Multi-process sample app requires exact memory mapping
      • 9.9. Packets are not sent by the 1 GbE/10 GbE SR-IOV driver when the source MAC is not the MAC assigned to the VF NIC
      • 9.10. SR-IOV drivers do not fully implement the rte_ethdev API
      • 9.11. PMD does not work with --no-huge EAL command line parameter
      • 9.12. Some hardware off-load functions are not supported by the VF Driver
      • 9.13. Kernel crash on IGB port unbinding
      • 9.14. Twinpond and Ironpond NICs do not report link status correctly
      • 9.15. Discrepancies between statistics reported by different NICs
      • 9.16. Error reported opening files on DPDK initialization
      • 9.17. Intel® QuickAssist Technology sample application does not work on a 32-bit OS on Shumway
      • 9.18. Differences in how different Intel NICs handle maximum packet length for jumbo frame
      • 9.19. Binding PCI devices to igb_uio fails on Linux kernel 3.9 when more than one device is used
      • 9.20. GCC might generate Intel® AVX instructions for processors without Intel® AVX support
      • 9.21. Ethertype filter could receive other packets (non-assigned) in Niantic
      • 9.22. Cannot set link speed on Intel® 40G Ethernet controller
      • 9.23. Devices bound to igb_uio with VT-d enabled do not work on Linux kernel 3.15-3.17
      • 9.24. VM power manager may not work on systems with more than 64 cores
      • 9.25. DPDK may not build on some Intel CPUs using clang < 3.7.0
      • 9.26. The last EAL argument is replaced by the program name in argv[]
      • 9.27. I40e VF may not receive packets in the promiscuous mode
    • 10. ABI and API Deprecation
      • 10.1. Deprecation Notices
  • Contributor’s Guidelines
    • 1. DPDK Coding Style
      • 1.1. Description
      • 1.2. General Guidelines
      • 1.3. C Comment Style
        • 1.3.1. Usual Comments
        • 1.3.2. License Header
      • 1.4. C Preprocessor Directives
        • 1.4.1. Header Includes
        • 1.4.2. Header File Guards
        • 1.4.3. Macros
        • 1.4.4. Conditional Compilation
      • 1.5. C Types
        • 1.5.1. Integers
        • 1.5.2. Enumerations
        • 1.5.3. Bitfields
        • 1.5.4. Variable Declarations
        • 1.5.5. Structure Declarations
        • 1.5.6. Queues
        • 1.5.7. Typedefs
      • 1.6. C Indentation
        • 1.6.1. General
        • 1.6.2. Control Statements and Loops
        • 1.6.3. Function Calls
        • 1.6.4. Operators
        • 1.6.5. Exit
        • 1.6.6. Local Variables
        • 1.6.7. Casts and sizeof
      • 1.7. C Function Definition, Declaration and Use
        • 1.7.1. Prototypes
        • 1.7.2. Definitions
      • 1.8. C Statement Style and Conventions
        • 1.8.1. NULL Pointers
        • 1.8.2. Return Value
        • 1.8.3. Logging and Errors
        • 1.8.4. Branch Prediction
        • 1.8.5. Static Variables and Functions
        • 1.8.6. Const Attribute
        • 1.8.7. Inline ASM in C code
        • 1.8.8. Control Statements
      • 1.9. Python Code
    • 2. Design
      • 2.1. Environment or Architecture-specific Sources
        • 2.1.1. Per Architecture Sources
        • 2.1.2. Per Execution Environment Sources
      • 2.2. Library Statistics
        • 2.2.1. Description
        • 2.2.2. Mechanism to allow the application to turn library statistics on and off
        • 2.2.3. Prevention of ABI changes due to library statistics support
        • 2.2.4. Motivation to allow the application to turn library statistics on and off
    • 3. Managing ABI updates
      • 3.1. Description
      • 3.2. General Guidelines
      • 3.3. What is an ABI
      • 3.4. The DPDK ABI policy
      • 3.5. Examples of Deprecation Notices
      • 3.6. Versioning Macros
      • 3.7. Examples of ABI Macro use
        • 3.7.1. Updating a public API
        • 3.7.2. Deprecating part of a public API
        • 3.7.3. Deprecating an entire ABI version
      • 3.8. Running the ABI Validator
    • 4. DPDK Documentation Guidelines
      • 4.1. Structure of the Documentation
      • 4.2. Role of the Documentation
      • 4.3. Building the Documentation
        • 4.3.1. Dependencies
        • 4.3.2. Build commands
      • 4.4. Document Guidelines
      • 4.5. RST Guidelines
        • 4.5.1. Line Length
        • 4.5.2. Whitespace
        • 4.5.3. Section Headers
        • 4.5.4. Lists
        • 4.5.5. Code and Literal block sections
        • 4.5.6. Images
        • 4.5.7. Tables
        • 4.5.8. Hyperlinks
      • 4.6. Doxygen Guidelines
    • 5. Contributing Code to DPDK
      • 5.1. The DPDK Development Process
      • 5.2. Getting the Source Code
      • 5.3. Make your Changes
      • 5.4. Commit Messages: Subject Line
      • 5.5. Commit Messages: Body
      • 5.6. Creating Patches
      • 5.7. Checking the Patches
      • 5.8. Checking Compilation
      • 5.9. Sending Patches
      • 5.10. The Review Process
    • 6. Patch Cheatsheet
 
10. Link Bonding Poll Mode Driver Library

In addition to Poll Mode Drivers (PMDs) for physical and virtual hardware, DPDK also includes a pure-software library that allows physical PMDs to be bonded together to create a single logical PMD.

Fig. 10.1 Bonded PMDs

The Link Bonding PMD library (librte_pmd_bond) supports bonding groups of rte_eth_dev ports of the same speed and duplex, providing capabilities similar to those found in the Linux bonding driver: multiple (slave) NICs are aggregated into a single logical interface between a server and a switch. The bonded PMD then processes these interfaces according to the specified mode of operation to provide support for features such as redundant links, fault tolerance and/or load balancing.

The librte_pmd_bond library exports a C API for creating bonded devices and for configuring and managing a bonded device and its slave devices.
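
As a hedged illustration of that API (the device name “eth_bond0”, the slave port ids and the error handling below are placeholders rather than prescribed values), a bonded device could be created and populated as follows:

    #include <rte_ethdev.h>
    #include <rte_eth_bond.h>

    static int
    create_bonded_port(uint8_t slave0, uint8_t slave1, uint8_t socket_id)
    {
        /* Create a bonded device in active backup mode (mode 1). */
        int bonded_port = rte_eth_bond_create("eth_bond0",
                                              BONDING_MODE_ACTIVE_BACKUP,
                                              socket_id);
        if (bonded_port < 0)
            return -1;

        /* Attach the slave ports to the bonded device. */
        if (rte_eth_bond_slave_add(bonded_port, slave0) != 0 ||
            rte_eth_bond_slave_add(bonded_port, slave1) != 0)
            return -1;

        /* Select the slave that carries traffic while the bond is healthy. */
        if (rte_eth_bond_primary_set(bonded_port, slave0) != 0)
            return -1;

        return bonded_port;
    }

The returned bonded port behaves like any other rte_eth_dev port: it must still be configured and started with the usual rte_eth_dev_configure / rte_eth_rx_queue_setup / rte_eth_tx_queue_setup / rte_eth_dev_start sequence before use.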

Note

The Link Bonding PMD Library is enabled by default in the build configuration files; the library can be disabled by setting CONFIG_RTE_LIBRTE_PMD_BOND=n and recompiling the DPDK.
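
With the make-based build system of this release, that is a one-line change in the target's build configuration (a sketch; apply it to the configuration file appropriate for your target):

    CONFIG_RTE_LIBRTE_PMD_BOND=n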

10.1. Link Bonding Modes Overview

Currently the Link Bonding PMD library supports the following modes of operation:

  • Round-Robin (Mode 0):

Fig. 10.2 Round-Robin (Mode 0)

This mode provides load balancing and fault tolerance by transmitting packets in sequential order from the first available slave device through the last. Packets are bulk dequeued from devices and then serviced in a round-robin manner. This mode does not guarantee in-order reception of packets, and downstream consumers should be able to handle out-of-order packets.
  • Active Backup (Mode 1):

Fig. 10.3 Active Backup (Mode 1)

In this mode only one slave in the bond is active at any time; a different slave becomes active if, and only if, the primary active slave fails, thereby providing fault tolerance against slave failure. The single logical bonded interface’s MAC address is externally visible on only one NIC (port) to avoid confusing the network switch.
  • Balance XOR (Mode 2):

Fig. 10.4 Balance XOR (Mode 2)

This mode provides transmit load balancing (based on the selected transmission policy) and fault tolerance. The default policy (layer2) uses a simple calculation based on the packet flow’s source and destination MAC addresses, as well as the number of active slaves available to the bonded device, to classify the packet to a specific slave to transmit on. The alternate transmission policies supported are layer 2+3, which adds the IP source and destination addresses to the calculation of the transmit slave port, and layer 3+4, which uses the IP source and destination addresses as well as the TCP/UDP source and destination ports.

Note

The coloring differences of the packets in the figure are used to identify the different flow classifications calculated by the selected transmit policy.

  • Broadcast (Mode 3):

Fig. 10.5 Broadcast (Mode 3)

This mode provides fault tolerance by transmission of packets on all slave ports.
  • Link Aggregation 802.3AD (Mode 4):

Fig. 10.6 Link Aggregation 802.3AD (Mode 4)

This mode provides dynamic link aggregation according to the 802.3ad specification. It negotiates and monitors aggregation groups that share the same speed and duplex settings using the selected balance transmit policy for balancing outgoing traffic.

The DPDK implementation of this mode imposes some additional requirements on the application:

  1. It needs to call rte_eth_tx_burst and rte_eth_rx_burst at intervals of less than 100ms.
  2. Calls to rte_eth_tx_burst must have a buffer size of at least 2xN, where N is the number of slaves. This is the space required for LACP frames. Additionally, LACP packets are included in the statistics, but they are not returned to the application. A minimal service loop satisfying both requirements is sketched after this list of modes.
  • Transmit Load Balancing (Mode 5):

Fig. 10.7 Transmit Load Balancing (Mode 5)

This mode provides adaptive transmit load balancing. It dynamically changes the transmitting slave according to the computed load. Statistics are collected at 100ms intervals and scheduling happens every 10ms.
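
For illustration, the following is a minimal sketch of a service loop meeting the mode 4 requirements listed above. It assumes a hypothetical bonded port ID of 0 with two slaves and a single queue pair; the exact integer types of the port ID parameters vary between DPDK releases:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BOND_PORT  0   /* hypothetical bonded port ID */
#define NB_SLAVES  2   /* slaves in the bond */
#define BURST_SIZE 32  /* >= 2 x NB_SLAVES, leaving room for LACP frames */

/* Must run at least every 100ms so the bond can send and receive LACP frames. */
static void
lacp_service_loop(void)
{
    struct rte_mbuf *pkts[BURST_SIZE];
    uint16_t nb_rx, nb_tx;

    for (;;) {
        /* LACP frames received here are consumed by the PMD and counted
         * in the statistics; they are never handed to the application. */
        nb_rx = rte_eth_rx_burst(BOND_PORT, 0, pkts, BURST_SIZE);

        /* Echo the received packets back out for this sketch. */
        nb_tx = rte_eth_tx_burst(BOND_PORT, 0, pkts, nb_rx);

        /* Free any packets the TX burst did not consume. */
        while (nb_tx < nb_rx)
            rte_pktmbuf_free(pkts[nb_tx++]);
    }
}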

10.2. Implementation Details

The librte_pmd_bond bonded devices are compatible with the Ethernet device API exported by the Ethernet PMDs described in the DPDK API Reference.

The Link Bonding Library supports the creation of bonded devices at application startup time during EAL initialization using the --vdev option as well as programmatically via the C API rte_eth_bond_create function.

Bonded devices support the dynamic addition and removal of slave devices using the rte_eth_bond_slave_add / rte_eth_bond_slave_remove APIs, as in the short sketch below.
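
For example, a hypothetical helper that replaces one slave with another on a running bond might look as follows (the port IDs are illustrative):

#include <rte_eth_bond.h>

/* Swap one slave for another on a running bonded device. */
static int
swap_slave(uint8_t bond_port, uint8_t old_slave, uint8_t new_slave)
{
    if (rte_eth_bond_slave_add(bond_port, new_slave) != 0)
        return -1;
    return rte_eth_bond_slave_remove(bond_port, old_slave);
}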

After a slave device is added to a bonded device, the slave is stopped using rte_eth_dev_stop and then reconfigured using rte_eth_dev_configure; the RX and TX queues are also reconfigured using rte_eth_tx_queue_setup / rte_eth_rx_queue_setup with the parameters used to configure the bonded device. If RSS is enabled on the bonded device, this mode is also enabled and configured on the new slave.

Setting the bonded device’s multi-queue mode to RSS makes it fully RSS-capable, so all slaves are synchronized with its configuration. This mode is intended to make the RSS configuration of the slaves transparent to the client application.

The bonded device stores its own version of the RSS settings, i.e. the RETA, RSS hash function and RSS key, and uses them to set up its slaves. This allows the RSS configuration of the bonded device to be defined as the desired configuration of the whole bond (as one unit), without referring to any individual slave. This is required to ensure consistency and makes the configuration more error-proof.

The RSS hash function set for the bonded device is the maximal set of RSS hash functions supported by all of the bonded slaves. The RETA size is the GCD of all the slaves’ RETA sizes, so it can be easily used as a pattern providing the expected behavior even if the slave RETA sizes differ. If no RSS key is set for the bonded device, the slaves’ keys are not changed and each device’s default key is used.

All settings are managed through the bonding port API and are always propagated in one direction (from bonding to slaves).
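
As a sketch, enabling RSS on the bonded port (and therefore on all of its slaves) could look like the following; the queue counts and hash types are assumptions for illustration:

#include <rte_ethdev.h>

/* Configure RSS on the bonded port; the bonding PMD propagates the
 * settings to every slave. */
static int
configure_bond_rss(uint8_t bond_port)
{
    struct rte_eth_conf conf = {
        .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
        .rx_adv_conf.rss_conf = {
            .rss_key = NULL,       /* leave the slaves' default keys in place */
            .rss_hf  = ETH_RSS_IP, /* must be supported by all slaves */
        },
    };

    /* 4 RX / 4 TX queues are arbitrary values for this example. */
    return rte_eth_dev_configure(bond_port, 4, 4, &conf);
}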

10.2.1. Link Status Change Interrupts / Polling

Link bonding devices support the registration of a link status change callback, using the rte_eth_dev_callback_register API; this callback will be called when the status of the bonding device changes. For example, in the case of a bonding device which has 3 slaves, the link status will change to up when one slave becomes active, or change to down when all slaves become inactive. There is no callback notification when a single slave changes state and the previous conditions are not met. If a user wishes to monitor individual slaves then they must register callbacks with that slave directly.

The link bonding library also supports devices which do not implement link status change interrupts; this is achieved by polling the device’s link status at a defined period, which is set using the rte_eth_bond_link_monitoring_set API (the default polling interval is 10ms). When a device is added as a slave to a bonding device, the RTE_PCI_DRV_INTR_LSC flag is used to determine whether the device supports interrupts or whether the link status should be monitored by polling it.
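
A minimal sketch of registering such a callback and adjusting the polling interval is shown below; the callback signature shown matches older DPDK releases and may differ in newer ones:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_eth_bond.h>

/* Invoked when the bonded device's own link state changes (up when the
 * first slave becomes active, down when the last active slave fails). */
static void
bond_lsc_cb(uint8_t port_id, enum rte_eth_event_type event, void *cb_arg)
{
    struct rte_eth_link link;

    (void)event;
    (void)cb_arg;
    rte_eth_link_get_nowait(port_id, &link);
    printf("bond port %u link is %s\n", port_id,
           link.link_status ? "up" : "down");
}

static void
setup_monitoring(uint8_t bond_port)
{
    rte_eth_dev_callback_register(bond_port, RTE_ETH_EVENT_INTR_LSC,
                                  bond_lsc_cb, NULL);
    /* Poll slaves without LSC interrupt support every 100ms instead of
     * the 10ms default. */
    rte_eth_bond_link_monitoring_set(bond_port, 100);
}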

10.2.2. Requirements / Limitations

The current implementation only supports adding devices of the same speed and duplex as slaves to the same bonded device. The bonded device inherits these attributes from the first active slave added to it, and all further slaves added to the bonded device must then support these parameters.

A bonding device must have a minimum of one slave before the bonding device itself can be started.

To use the bonded device’s dynamic RSS configuration feature effectively, all slaves must be RSS-capable and support at least one common hash function. Changing the RSS key is only possible when all slave devices support the same key size.

To prevent inconsistencies in how slaves process packets, once a device is added to a bonding device, RSS configuration should be managed through the bonding device API and not directly on the slave.

As with all other PMDs, all functions exported by the bonding PMD are lock-free functions that are assumed not to be invoked in parallel on different logical cores to work on the same target object.

It should also be noted that the PMD receive function should not be invoked directly on slave devices after they have been added to a bonded device, since packets read directly from a slave device will no longer be available to the bonded device to read.

10.2.3. Configuration

Link bonding devices are created using the rte_eth_bond_create API, which requires a unique device name, the bonding mode, and the socket ID on which to allocate the bonded device’s resources. The other configurable parameters for a bonded device are its slave devices, its primary slave, a user-defined MAC address, and the transmission policy to use if the device is in balance XOR mode.

10.2.3.1. Slave Devices

Bonding devices support up to a maximum of RTE_MAX_ETHPORTS slave devices of the same speed and duplex. Ethernet devices can be added as a slave to a maximum of one bonded device. Slave devices are reconfigured with the configuration of the bonded device on being added to a bonded device.

The bonded device also guarantees that the MAC address of a slave device is returned to its original value on removal of the slave from the bond.

10.2.3.2. Primary Slave

The primary slave is used to define the default port to use when a bonded device is in active backup mode. A different port will be used if, and only if, the current primary port goes down. If the user does not specify a primary port, it defaults to the first port added to the bonded device.

10.2.3.3. MAC Address

The bonded device can be configured with a user-specified MAC address; this address will be inherited by some or all of the slave devices, depending on the operating mode. If the device is in active backup mode then only the primary device will have the user-specified MAC; all other slaves will retain their original MAC address. In modes 0, 2, 3 and 4, all slave devices are configured with the bonded device’s MAC address.

If a user-defined MAC address is not specified then the bonded device defaults to using the primary slave’s MAC address.
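
As a sketch, overriding and later restoring the bonded device’s MAC address could look like the following (the address shown is arbitrary; the exact struct and function names match older DPDK releases):

#include <rte_ether.h>
#include <rte_eth_bond.h>

/* Override the bond's MAC address, then revert to the default
 * (the primary slave's address). */
static void
set_bond_mac(uint8_t bond_port)
{
    struct ether_addr mac = {
        .addr_bytes = { 0x00, 0x1e, 0x67, 0x1d, 0xfd, 0x1d }
    };

    rte_eth_bond_mac_address_set(bond_port, &mac);
    /* ... later, restore the primary slave's MAC: */
    rte_eth_bond_mac_address_reset(bond_port);
}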

10.2.3.4. Balance XOR Transmit Policies

There are 3 supported transmission policies for a bonded device running in Balance XOR mode: Layer 2, Layer 2+3 and Layer 3+4.

  • Layer 2: Ethernet MAC address based balancing is the default transmission policy for Balance XOR bonding mode. It uses a simple XOR calculation on the source MAC address and destination MAC address of the packet and then takes the modulus of this value by the number of slaves to select the slave device to transmit the packet on.
  • Layer 2 + 3: Ethernet MAC address & IP Address based balancing uses a combination of source/destination MAC addresses and the source/destination IP addresses of the data packet to decide which slave port the packet will be transmitted on.
  • Layer 3 + 4: IP Address & UDP Port based balancing uses a combination of the source/destination IP addresses and the source/destination UDP ports of the data packet to decide which slave port the packet will be transmitted on.

All these policies support 802.1Q VLAN Ethernet packets, as well as IPv4, IPv6 and UDP protocols for load balancing.
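
As an illustration of the idea (not the library’s actual implementation), the layer 2 policy amounts to something like the following:

#include <stdint.h>

/* Illustrative only: XOR the source and destination MAC addresses byte
 * by byte, then take the modulus by the number of active slaves to pick
 * the transmit port. */
static uint8_t
l2_xor_slave(const uint8_t src_mac[6], const uint8_t dst_mac[6],
             uint8_t nb_active_slaves)
{
    uint8_t hash = 0;
    int i;

    for (i = 0; i < 6; i++)
        hash ^= src_mac[i] ^ dst_mac[i];

    return hash % nb_active_slaves;
}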

10.3. Using Link Bonding Devices

The librte_pmd_bond library supports two modes of device creation: using the library’s full C API, or using the EAL command line to statically configure link bonding devices at application startup. Using the EAL option it is possible to use link bonding functionality transparently without specific knowledge of the library’s API; this can be used, for example, to add bonding functionality, such as active backup, to an existing application which has no knowledge of the link bonding C API.

10.3.1. Using the Poll Mode Driver from an Application

Using the librte_pmd_bond library’s API it is possible to dynamically create and manage link bonding devices from within any application. Link bonding devices are created using the rte_eth_bond_create API, which requires a unique device name, the link bonding mode to initialize the device in, and finally the socket ID on which to allocate the device’s resources. After successful creation of a bonding device it must be configured using the generic Ethernet device configure API rte_eth_dev_configure, and then the RX and TX queues which will be used must be set up using rte_eth_tx_queue_setup / rte_eth_rx_queue_setup.

Slave devices can be dynamically added and removed from a link bonding device using the rte_eth_bond_slave_add / rte_eth_bond_slave_remove APIs, but at least one slave device must be added to the link bonding device before it can be started using rte_eth_dev_start. The whole bring-up sequence is sketched below.
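
A minimal bring-up sketch, assuming an existing mbuf pool and slave port and omitting error handling for the configure and setup calls, might look as follows; the queue and descriptor counts are arbitrary:

#include <rte_ethdev.h>
#include <rte_mempool.h>
#include <rte_eth_bond.h>

/* Create, configure, and start a round-robin bond with one slave. */
static int
bond_start(struct rte_mempool *mbuf_pool, uint8_t slave_port)
{
    struct rte_eth_conf conf = { 0 };
    int bond_port;

    bond_port = rte_eth_bond_create("eth_bond0", BONDING_MODE_ROUND_ROBIN,
                                    0 /* socket ID */);
    if (bond_port < 0)
        return bond_port;

    rte_eth_dev_configure(bond_port, 1, 1, &conf);
    rte_eth_rx_queue_setup(bond_port, 0, 128, 0, NULL, mbuf_pool);
    rte_eth_tx_queue_setup(bond_port, 0, 512, 0, NULL);

    /* At least one slave must be added before the device can start. */
    rte_eth_bond_slave_add(bond_port, slave_port);

    return rte_eth_dev_start(bond_port);
}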

The link status of a bonded device is dictated by that of its slaves: if all slave devices’ links are down, or if all slaves are removed from the link bonding device, then the link status of the bonding device will go down.

It is also possible to configure / query the control parameters of a bonded device using the provided APIs rte_eth_bond_mode_set/get, rte_eth_bond_primary_set/get, rte_eth_bond_mac_set/reset and rte_eth_bond_xmit_policy_set/get.
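
For example, a sketch of querying the mode and adjusting the control parameters accordingly (the constants used are from rte_eth_bond.h):

#include <rte_eth_bond.h>

/* Adjust the bond's control parameters depending on its current mode. */
static void
tune_bond(uint8_t bond_port, uint8_t primary_slave)
{
    int mode = rte_eth_bond_mode_get(bond_port);

    if (mode == BONDING_MODE_ACTIVE_BACKUP)
        rte_eth_bond_primary_set(bond_port, primary_slave);
    else if (mode == BONDING_MODE_BALANCE)
        rte_eth_bond_xmit_policy_set(bond_port, BALANCE_XMIT_POLICY_LAYER23);
}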

10.3.2. Using Link Bonding Devices from the EAL Command Line

Link bonding devices can be created at application startup time using the --vdev EAL command line option. The device name must start with the eth_bond prefix followed by numbers or letters. The name must be unique for each device. Each device can have multiple options arranged in a comma separated list. Multiple device definitions can be provided by passing the --vdev option multiple times.

Device names and bonding options must be separated by commas as shown below:

$RTE_TARGET/app/testpmd -c f -n 4 --vdev 'eth_bond0,bond_opt0=..,bond_opt1=..' --vdev 'eth_bond1,bond_opt0=..,bond_opt1=..'

10.3.2.1. Link Bonding EAL Options

Option definitions can be combined in multiple ways, as long as the following rules are respected:

  • A unique device name, in the format of eth_bondX is provided, where X can be any combination of numbers and/or letters, and the name is no greater than 32 characters long.
  • At least one slave device is provided for each bonded device definition.
  • The operation mode of the bonded device being created is provided.

The different options are:

  • mode: Integer value defining the bonding mode of the device. Currently supports modes 0,1,2,3,4,5 (round-robin, active backup, balance, broadcast, link aggregation, transmit load balancing).
mode=2
  • slave: Defines the PMD device which will be added as slave to the bonded device. This option can be selected multiple times, for each device to be added as a slave. Physical devices should be specified using their PCI address, in the format domain:bus:devid.function
slave=0000:0a:00.0,slave=0000:0a:00.1
  • primary: Optional parameter which defines the primary slave port; it is used in active backup mode to select the primary slave for data TX/RX if it is available. The primary port is also used to select the MAC address to use when it is not defined by the user. This defaults to the first slave added to the device if it is not specified. The primary device must be a slave of the bonded device.
primary=0000:0a:00.0
  • socket_id: Optional parameter used to select which socket on a NUMA device the bonded devices resources will be allocated on.
socket_id=0
  • mac: Optional parameter to select a MAC address for the link bonding device; this overrides the value of the primary slave device.
mac=00:1e:67:1d:fd:1d
  • xmit_policy: Optional parameter which defines the transmission policy when the bonded device is in balance mode. If not specified by the user, this defaults to l2 (layer 2) forwarding; the other transmission policies available are l23 (layer 2+3) and l34 (layer 3+4).
xmit_policy=l23
  • lsc_poll_period_ms: Optional parameter which defines the polling interval, in milli-seconds, at which devices which do not support LSC interrupts are checked for a change in the device’s link status.
lsc_poll_period_ms=100
  • up_delay: Optional parameter which adds a delay, in milli-seconds, to the propagation of a device’s link status changing to up; by default this parameter is zero.
up_delay=10
  • down_delay: Optional parameter which adds a delay, in milli-seconds, to the propagation of a device’s link status changing to down; by default this parameter is zero.
down_delay=50

10.3.2.2. Examples of Usage

Create a bonded device in round robin mode with two slaves specified by their PCI address:

$RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'eth_bond0,mode=0,slave=0000:00a:00.01,slave=0000:004:00.00' -- --port-topology=chained

Create a bonded device in round robin mode with two slaves specified by their PCI address and an overriding MAC address:

$RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'eth_bond0,mode=0,slave=0000:00a:00.01,slave=0000:004:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained

Create a bonded device in active backup mode with two slaves specified, and a primary slave specified by their PCI addresses:

$RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'eth_bond0,mode=1,slave=0000:00a:00.01,slave=0000:004:00.00,primary=0000:00a:00.01' -- --port-topology=chained

Create a bonded device in balance mode with two slaves specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:

$RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'eth_bond0,mode=2,slave=0000:00a:00.01,slave=0000:004:00.00,xmit_policy=l34' -- --port-topology=chained