Data Plane Development Kit
  • Getting Started Guide for Linux
    • 1. Introduction
      • 1.1. Documentation Roadmap
    • 2. System Requirements
      • 2.1. BIOS Setting Prerequisite on x86
      • 2.2. Compilation of the DPDK
      • 2.3. Running DPDK Applications
        • 2.3.1. System Software
        • 2.3.2. Use of Hugepages in the Linux Environment
        • 2.3.3. Xen Domain0 Support in the Linux Environment
    • 3. Compiling the DPDK Target from Source
      • 3.1. Install the DPDK and Browse Sources
      • 3.2. Installation of DPDK Target Environments
      • 3.3. Browsing the Installed DPDK Environment Target
      • 3.4. Loading Modules to Enable Userspace IO for DPDK
      • 3.5. Loading VFIO Module
      • 3.6. Binding and Unbinding Network Ports to/from the Kernel Modules
    • 4. Compiling and Running Sample Applications
      • 4.1. Compiling a Sample Application
      • 4.2. Running a Sample Application
        • 4.2.1. Logical Core Use by Applications
        • 4.2.2. Hugepage Memory Use by Applications
      • 4.3. Additional Sample Applications
      • 4.4. Additional Test Applications
    • 5. Enabling Additional Functionality
      • 5.1. High Precision Event Timer (HPET) Functionality
        • 5.1.1. BIOS Support
        • 5.1.2. Linux Kernel Support
        • 5.1.3. Enabling HPET in the DPDK
      • 5.2. Running DPDK Applications Without Root Privileges
      • 5.3. Power Management and Power Saving Functionality
      • 5.4. Using Linux Core Isolation to Reduce Context Switches
      • 5.5. Loading the DPDK KNI Kernel Module
      • 5.6. Using Linux IOMMU Pass-Through to Run DPDK with Intel® VT-d
      • 5.7. High Performance of Small Packets on 40G NIC
        • 5.7.1. Use 16 Bytes RX Descriptor Size
        • 5.7.2. High Performance and per Packet Latency Tradeoff
    • 6. Quick Start Setup Script
      • 6.1. Script Organization
      • 6.2. Use Cases
      • 6.3. Applications
    • 7. How to get best performance with NICs on Intel platforms
      • 7.1. Hardware and Memory Requirements
        • 7.1.1. Network Interface Card Requirements
        • 7.1.2. BIOS Settings
        • 7.1.3. Linux boot command line
      • 7.2. Configurations before running DPDK
      • 7.3. Example of getting best performance for an Intel NIC
  • Getting Started Guide for FreeBSD
    • 1. Introduction
      • 1.1. Documentation Roadmap
    • 2. Installing DPDK from the Ports Collection
      • 2.1. Installing the DPDK FreeBSD Port
      • 2.2. Compiling and Running the Example Applications
    • 3. Compiling the DPDK Target from Source
      • 3.1. System Requirements
      • 3.2. Install the DPDK and Browse Sources
      • 3.3. Installation of the DPDK Target Environments
      • 3.4. Browsing the Installed DPDK Environment Target
      • 3.5. Loading the DPDK contigmem Module
      • 3.6. Loading the DPDK nic_uio Module
        • 3.6.1. Binding Network Ports to the nic_uio Module
        • 3.6.2. Binding Network Ports Back to their Original Kernel Driver
    • 4. Compiling and Running Sample Applications
      • 4.1. Compiling a Sample Application
      • 4.2. Running a Sample Application
      • 4.3. Running DPDK Applications Without Root Privileges
  • Xen Guide
    • 1. DPDK Xen Based Packet-Switching Solution
      • 1.1. Introduction
      • 1.2. Device Creation
        • 1.2.1. Poll Mode Driver Front End
        • 1.2.2. Switching Back End
        • 1.2.3. Packet Reception
        • 1.2.4. Packet Transmission
      • 1.3. Running the Application
        • 1.3.1. Validated Environment
        • 1.3.2. Xen Host Prerequisites
        • 1.3.3. Building and Running the Switching Backend
        • 1.3.4. Xen PMD Frontend Prerequisites
        • 1.3.5. Building and Running the Front End
        • 1.3.6. Usage Examples: Injecting a Packet Stream Using a Packet Generator
  • Programmer’s Guide
    • 1. Introduction
      • 1.1. Documentation Roadmap
      • 1.2. Related Publications
    • 2. Overview
      • 2.1. Development Environment
      • 2.2. Environment Abstraction Layer
      • 2.3. Core Components
        • 2.3.1. Ring Manager (librte_ring)
        • 2.3.2. Memory Pool Manager (librte_mempool)
        • 2.3.3. Network Packet Buffer Management (librte_mbuf)
        • 2.3.4. Timer Manager (librte_timer)
      • 2.4. Ethernet* Poll Mode Driver Architecture
      • 2.5. Packet Forwarding Algorithm Support
      • 2.6. librte_net
    • 3. Environment Abstraction Layer
      • 3.1. EAL in a Linux-userland Execution Environment
        • 3.1.1. Initialization and Core Launching
        • 3.1.2. Multi-process Support
        • 3.1.3. Memory Mapping Discovery and Memory Reservation
        • 3.1.4. Xen Dom0 support without hugetlbfs
        • 3.1.5. PCI Access
        • 3.1.6. Per-lcore and Shared Variables
        • 3.1.7. Logs
        • 3.1.8. CPU Feature Identification
        • 3.1.9. User Space Interrupt Event
        • 3.1.10. Blacklisting
        • 3.1.11. Misc Functions
      • 3.2. Memory Segments and Memory Zones (memzone)
      • 3.3. Multiple pthread
        • 3.3.1. EAL pthread and lcore Affinity
        • 3.3.2. non-EAL pthread support
        • 3.3.3. Public Thread API
        • 3.3.4. Known Issues
        • 3.3.5. cgroup control
      • 3.4. Malloc
        • 3.4.1. Cookies
        • 3.4.2. Alignment and NUMA Constraints
        • 3.4.3. Use Cases
        • 3.4.4. Internal Implementation
    • 4. Ring Library
      • 4.1. References for Ring Implementation in FreeBSD*
      • 4.2. Lockless Ring Buffer in Linux*
      • 4.3. Additional Features
        • 4.3.1. Name
        • 4.3.2. Water Marking
        • 4.3.3. Debug
      • 4.4. Use Cases
      • 4.5. Anatomy of a Ring Buffer
        • 4.5.1. Single Producer Enqueue
        • 4.5.2. Single Consumer Dequeue
        • 4.5.3. Multiple Producers Enqueue
        • 4.5.4. Modulo 32-bit Indexes
      • 4.6. References
    • 5. Mempool Library
      • 5.1. Cookies
      • 5.2. Stats
      • 5.3. Memory Alignment Constraints
      • 5.4. Local Cache
      • 5.5. Mempool Handlers
      • 5.6. Use Cases
    • 6. Mbuf Library
      • 6.1. Design of Packet Buffers
      • 6.2. Buffers Stored in Memory Pools
      • 6.3. Constructors
      • 6.4. Allocating and Freeing mbufs
      • 6.5. Manipulating mbufs
      • 6.6. Meta Information
      • 6.7. Direct and Indirect Buffers
      • 6.8. Debug
      • 6.9. Use Cases
    • 7. Poll Mode Driver
      • 7.1. Requirements and Assumptions
      • 7.2. Design Principles
      • 7.3. Logical Cores, Memory and NIC Queues Relationships
      • 7.4. Device Identification and Configuration
        • 7.4.1. Device Identification
        • 7.4.2. Device Configuration
        • 7.4.3. On-the-Fly Configuration
        • 7.4.4. Configuration of Transmit and Receive Queues
        • 7.4.5. Hardware Offload
      • 7.5. Poll Mode Driver API
        • 7.5.1. Generalities
        • 7.5.2. Generic Packet Representation
        • 7.5.3. Ethernet Device API
        • 7.5.4. Extended Statistics API
    • 8. Cryptography Device Library
      • 8.1. Design Principles
      • 8.2. Device Management
        • 8.2.1. Device Creation
        • 8.2.2. Device Identification
        • 8.2.3. Device Configuration
        • 8.2.4. Configuration of Queue Pairs
        • 8.2.5. Logical Cores, Memory and Queues Pair Relationships
      • 8.3. Device Features and Capabilities
        • 8.3.1. Device Features
        • 8.3.2. Device Operation Capabilities
        • 8.3.3. Capabilities Discovery
      • 8.4. Operation Processing
        • 8.4.1. Enqueue / Dequeue Burst APIs
        • 8.4.2. Operation Representation
        • 8.4.3. Operation Management and Allocation
      • 8.5. Symmetric Cryptography Support
        • 8.5.1. Session and Session Management
        • 8.5.2. Transforms and Transform Chaining
        • 8.5.3. Symmetric Operations
      • 8.6. Asymmetric Cryptography
        • 8.6.1. Crypto Device API
    • 9. IVSHMEM Library
      • 9.1. IVSHMEM Library API Overview
      • 9.2. IVSHMEM Environment Configuration
      • 9.3. Best Practices for Writing IVSHMEM Applications
      • 9.4. Best Practices for Running IVSHMEM Applications
    • 10. Link Bonding Poll Mode Driver Library
      • 10.1. Link Bonding Modes Overview
      • 10.2. Implementation Details
        • 10.2.1. Link Status Change Interrupts / Polling
        • 10.2.2. Requirements / Limitations
        • 10.2.3. Configuration
      • 10.3. Using Link Bonding Devices
        • 10.3.1. Using the Poll Mode Driver from an Application
        • 10.3.2. Using Link Bonding Devices from the EAL Command Line
    • 11. Timer Library
      • 11.1. Implementation Details
      • 11.2. Use Cases
      • 11.3. References
    • 12. Hash Library
      • 12.1. Hash API Overview
      • 12.2. Multi-process support
      • 12.3. Implementation Details
      • 12.4. Entry distribution in hash table
      • 12.5. Use Case: Flow Classification
      • 12.6. References
    • 13. LPM Library
      • 13.1. LPM API Overview
      • 13.2. Implementation Details
        • 13.2.1. Addition
        • 13.2.2. Lookup
        • 13.2.3. Limitations in the Number of Rules
        • 13.2.4. Use Case: IPv4 Forwarding
        • 13.2.5. References
    • 14. LPM6 Library
      • 14.1. LPM6 API Overview
        • 14.1.1. Implementation Details
        • 14.1.2. Addition
        • 14.1.3. Lookup
        • 14.1.4. Limitations in the Number of Rules
      • 14.2. Use Case: IPv6 Forwarding
    • 15. Packet Distributor Library
      • 15.1. Distributor Core Operation
      • 15.2. Worker Operation
    • 16. Reorder Library
      • 16.1. Operation
      • 16.2. Implementation Details
      • 16.3. Use Case: Packet Distributor
    • 17. IP Fragmentation and Reassembly Library
      • 17.1. Packet fragmentation
      • 17.2. Packet reassembly
        • 17.2.1. IP Fragment Table
        • 17.2.2. Packet Reassembly
        • 17.2.3. Debug logging and Statistics Collection
    • 18. The librte_pdump Library
      • 18.1. Operation
      • 18.2. Implementation Details
      • 18.3. Use Case: Packet Capturing
    • 19. Multi-process Support
      • 19.1. Memory Sharing
      • 19.2. Deployment Models
        • 19.2.1. Symmetric/Peer Processes
        • 19.2.2. Asymmetric/Non-Peer Processes
        • 19.2.3. Running Multiple Independent DPDK Applications
        • 19.2.4. Running Multiple Independent Groups of DPDK Applications
      • 19.3. Multi-process Limitations
    • 20. Kernel NIC Interface
      • 20.1. The DPDK KNI Kernel Module
      • 20.2. KNI Creation and Deletion
      • 20.3. DPDK mbuf Flow
      • 20.4. Use Case: Ingress
      • 20.5. Use Case: Egress
      • 20.6. Ethtool
      • 20.7. Link state and MTU change
      • 20.8. KNI Working as a Kernel vHost Backend
        • 20.8.1. Overview
        • 20.8.2. Packet Flow
        • 20.8.3. Sample Usage
        • 20.8.4. Compatibility Configure Option
    • 21. Thread Safety of DPDK Functions
      • 21.1. Fast-Path APIs
      • 21.2. Performance Insensitive API
      • 21.3. Library Initialization
      • 21.4. Interrupt Thread
    • 22. Quality of Service (QoS) Framework
      • 22.1. Packet Pipeline with QoS Support
      • 22.2. Hierarchical Scheduler
        • 22.2.1. Overview
        • 22.2.2. Scheduling Hierarchy
        • 22.2.3. Application Programming Interface (API)
        • 22.2.4. Implementation
        • 22.2.5. Worst Case Scenarios for Performance
      • 22.3. Dropper
        • 22.3.1. Configuration
        • 22.3.2. Enqueue Operation
        • 22.3.3. Queue Empty Operation
        • 22.3.4. Source Files Location
        • 22.3.5. Integration with the DPDK QoS Scheduler
        • 22.3.6. Integration with the DPDK QoS Scheduler Sample Application
        • 22.3.7. Application Programming Interface (API)
      • 22.4. Traffic Metering
        • 22.4.1. Functional Overview
        • 22.4.2. Implementation Overview
    • 23. Power Management
      • 23.1. CPU Frequency Scaling
      • 23.2. Core-load Throttling through C-States
      • 23.3. API Overview of the Power Library
      • 23.4. Use Cases
      • 23.5. References
    • 24. Packet Classification and Access Control
      • 24.1. Overview
        • 24.1.1. Rule definition
        • 24.1.2. RT memory size limit
        • 24.1.3. Classification methods
      • 24.2. Application Programming Interface (API) Usage
        • 24.2.1. Classify with Multiple Categories
    • 25. Packet Framework
      • 25.1. Design Objectives
      • 25.2. Overview
      • 25.3. Port Library Design
        • 25.3.1. Port Types
        • 25.3.2. Port Interface
      • 25.4. Table Library Design
        • 25.4.1. Table Types
        • 25.4.2. Table Interface
        • 25.4.3. Hash Table Design
      • 25.5. Pipeline Library Design
        • 25.5.1. Connectivity of Ports and Tables
        • 25.5.2. Port Actions
        • 25.5.3. Table Actions
      • 25.6. Multicore Scaling
        • 25.6.1. Shared Data Structures
      • 25.7. Interfacing with Accelerators
    • 26. Vhost Library
      • 26.1. Vhost API Overview
      • 26.2. Vhost Implementations
        • 26.2.1. Vhost-cuse implementation
        • 26.2.2. Vhost-user implementation
      • 26.3. Vhost supported vSwitch reference
    • 27. Port Hotplug Framework
      • 27.1. Overview
      • 27.2. Port Hotplug API overview
      • 27.3. Reference
      • 27.4. Limitations
    • 28. Source Organization
      • 28.1. Makefiles and Config
      • 28.2. Libraries
      • 28.3. Drivers
      • 28.4. Applications
    • 29. Development Kit Build System
      • 29.1. Building the Development Kit Binary
        • 29.1.1. Build Directory Concept
      • 29.2. Building External Applications
      • 29.3. Makefile Description
        • 29.3.1. General Rules For DPDK Makefiles
        • 29.3.2. Makefile Types
        • 29.3.3. Internally Generated Build Tools
        • 29.3.4. Useful Variables Provided by the Build System
        • 29.3.5. Variables that Can be Set/Overridden in a Makefile Only
        • 29.3.6. Variables that can be Set/Overridden by the User on the Command Line Only
        • 29.3.7. Variables that Can be Set/Overridden by the User in a Makefile or Command Line
    • 30. Development Kit Root Makefile Help
      • 30.1. Configuration Targets
      • 30.2. Build Targets
      • 30.3. Install Targets
      • 30.4. Test Targets
      • 30.5. Documentation Targets
      • 30.6. Deps Targets
      • 30.7. Misc Targets
      • 30.8. Other Useful Command-line Variables
      • 30.9. Make in a Build Directory
      • 30.10. Compiling for Debug
    • 31. Extending the DPDK
      • 31.1. Example: Adding a New Library libfoo
        • 31.1.1. Example: Using libfoo in the Test Application
    • 32. Building Your Own Application
      • 32.1. Compiling a Sample Application in the Development Kit Directory
      • 32.2. Build Your Own Application Outside the Development Kit
      • 32.3. Customizing Makefiles
        • 32.3.1. Application Makefile
        • 32.3.2. Library Makefile
        • 32.3.3. Customize Makefile Actions
    • 33. External Application/Library Makefile help
      • 33.1. Prerequisites
      • 33.2. Build Targets
      • 33.3. Help Targets
      • 33.4. Other Useful Command-line Variables
      • 33.5. Make from Another Directory
    • 34. Performance Optimization Guidelines
      • 34.1. Introduction
    • 35. Writing Efficient Code
      • 35.1. Memory
        • 35.1.1. Memory Copy: Do not Use libc in the Data Plane
        • 35.1.2. Memory Allocation
        • 35.1.3. Concurrent Access to the Same Memory Area
        • 35.1.4. NUMA
        • 35.1.5. Distribution Across Memory Channels
      • 35.2. Communication Between lcores
      • 35.3. PMD Driver
        • 35.3.1. Lower Packet Latency
      • 35.4. Locks and Atomic Operations
      • 35.5. Coding Considerations
        • 35.5.1. Inline Functions
        • 35.5.2. Branch Prediction
      • 35.6. Setting the Target CPU Type
    • 36. Profile Your Application
    • 37. Glossary
  • Network Interface Controller Drivers
    • 1. Overview of Networking Drivers
    • 2. BNX2X Poll Mode Driver
      • 2.1. Supported Features
      • 2.2. Non-supported Features
      • 2.3. Co-existence considerations
      • 2.4. Supported QLogic NICs
      • 2.5. Prerequisites
      • 2.6. Pre-Installation Configuration
        • 2.6.1. Config File Options
        • 2.6.2. Driver Compilation
      • 2.7. Linux
        • 2.7.1. Linux Installation
        • 2.7.2. Sample Application Notes
        • 2.7.3. SR-IOV: Prerequisites and Sample Application Notes
    • 3. bnxt poll mode driver library
      • 3.1. Limitations
    • 4. CXGBE Poll Mode Driver
      • 4.1. Features
      • 4.2. Limitations
      • 4.3. Supported Chelsio T5 NICs
      • 4.4. Prerequisites
      • 4.5. Pre-Installation Configuration
        • 4.5.1. Config File Options
        • 4.5.2. Driver Compilation
      • 4.6. Linux
        • 4.6.1. Linux Installation
        • 4.6.2. Running testpmd
      • 4.7. FreeBSD
        • 4.7.1. FreeBSD Installation
        • 4.7.2. Running testpmd
      • 4.8. Sample Application Notes
        • 4.8.1. Enable/Disable Flow Control
        • 4.8.2. Jumbo Mode
    • 5. Driver for VM Emulated Devices
      • 5.1. Validated Hypervisors
      • 5.2. Recommended Guest Operating System in Virtual Machine
      • 5.3. Setting Up a KVM Virtual Machine
      • 5.4. Known Limitations of Emulated Devices
    • 6. ENA Poll Mode Driver
      • 6.1. Overview
      • 6.2. Management Interface
      • 6.3. Data Path Interface
      • 6.4. Configuration information
      • 6.5. Building DPDK
      • 6.6. Supported ENA adapters
      • 6.7. Supported Operating Systems
      • 6.8. Supported features
      • 6.9. Unsupported features
      • 6.10. Prerequisites
      • 6.11. Usage example
    • 7. ENIC Poll Mode Driver
      • 7.1. How to obtain ENIC PMD integrated DPDK
      • 7.2. Configuration information
      • 7.3. Limitations
      • 7.4. How to build the suite?
      • 7.5. Supported Cisco VIC adapters
      • 7.6. Supported Operating Systems
      • 7.7. Supported features
      • 7.8. Known bugs and Unsupported features in this release
      • 7.9. Prerequisites
      • 7.10. Additional Reference
      • 7.11. Contact Information
    • 8. FM10K Poll Mode Driver
      • 8.1. FTAG Based Forwarding of FM10K
      • 8.2. Vector PMD for FM10K
        • 8.2.1. RX Constraints
        • 8.2.2. TX Constraint
      • 8.3. Limitations
        • 8.3.1. Switch manager
        • 8.3.2. CRC striping
        • 8.3.3. Maximum packet length
        • 8.3.4. Statistic Polling Frequency
        • 8.3.5. Interrupt mode
    • 9. I40E Poll Mode Driver
      • 9.1. Features
      • 9.2. Prerequisites
      • 9.3. Pre-Installation Configuration
        • 9.3.1. Config File Options
        • 9.3.2. Driver Compilation
      • 9.4. Linux
        • 9.4.1. Running testpmd
        • 9.4.2. SR-IOV: Prerequisites and Sample Application Notes
      • 9.5. Sample Application Notes
        • 9.5.1. Vlan filter
        • 9.5.2. Flow Director
        • 9.5.3. Floating VEB
    • 10. IXGBE Driver
      • 10.1. Vector PMD for IXGBE
        • 10.1.1. RX Constraints
        • 10.1.2. TX Constraint
        • 10.1.3. Sample Application Notes
      • 10.2. Malicious Driver Detection not Supported
      • 10.3. Statistics
    • 11. I40E/IXGBE/IGB Virtual Function Driver
      • 11.1. SR-IOV Mode Utilization in a DPDK Environment
        • 11.1.1. Physical and Virtual Function Infrastructure
        • 11.1.2. Validated Hypervisors
        • 11.1.3. Expected Guest Operating System in Virtual Machine
      • 11.2. Setting Up a KVM Virtual Machine Monitor
      • 11.3. DPDK SR-IOV PMD PF/VF Driver Usage Model
        • 11.3.1. Fast Host-based Packet Processing
      • 11.4. SR-IOV (PF/VF) Approach for Inter-VM Communication
    • 12. MLX4 poll mode driver library
      • 12.1. Implementation details
      • 12.2. Features
      • 12.3. Limitations
      • 12.4. Configuration
        • 12.4.1. Compilation options
        • 12.4.2. Environment variables
        • 12.4.3. Run-time configuration
        • 12.4.4. Kernel module parameters
      • 12.5. Prerequisites
        • 12.5.1. Getting Mellanox OFED
      • 12.6. Usage example
    • 13. MLX5 poll mode driver
      • 13.1. Implementation details
      • 13.2. Features
      • 13.3. Limitations
      • 13.4. Configuration
        • 13.4.1. Compilation options
        • 13.4.2. Environment variables
        • 13.4.3. Run-time configuration
      • 13.5. Prerequisites
        • 13.5.1. Getting Mellanox OFED
      • 13.6. Notes for testpmd
      • 13.7. Usage example
    • 14. NFP poll mode driver library
      • 14.1. Dependencies
      • 14.2. Building the software
      • 14.3. System configuration
    • 15. QEDE Poll Mode Driver
      • 15.1. Supported Features
      • 15.2. Non-supported Features
      • 15.3. Supported QLogic Adapters
      • 15.4. Prerequisites
        • 15.4.1. Performance note
        • 15.4.2. Config File Options
        • 15.4.3. Driver Compilation
        • 15.4.4. Sample Application Notes
        • 15.4.5. SR-IOV: Prerequisites and Sample Application Notes
    • 16. SZEDATA2 poll mode driver library
      • 16.1. Prerequisites
      • 16.2. Configuration
      • 16.3. Using the SZEDATA2 PMD
      • 16.4. Example of usage
    • 17. ThunderX NICVF Poll Mode Driver
      • 17.1. Features
      • 17.2. Supported ThunderX SoCs
      • 17.3. Prerequisites
      • 17.4. Pre-Installation Configuration
        • 17.4.1. Config File Options
        • 17.4.2. Driver Compilation
      • 17.5. Linux
        • 17.5.1. Running testpmd
        • 17.5.2. SR-IOV: Prerequisites and Sample Application Notes
      • 17.6. Limitations
        • 17.6.1. CRC striping
        • 17.6.2. Maximum packet length
        • 17.6.3. Maximum packet segments
        • 17.6.4. Limited VFs
    • 18. Poll Mode Driver for Emulated Virtio NIC
      • 18.1. Virtio Implementation in DPDK
      • 18.2. Features and Limitations of virtio PMD
      • 18.3. Prerequisites
      • 18.4. Virtio with kni vhost Back End
      • 18.5. Virtio with qemu virtio Back End
      • 18.6. Virtio PMD Rx/Tx Callbacks
    • 19. Poll Mode Driver that wraps vhost library
      • 19.1. Vhost Implementation in DPDK
      • 19.2. Features and Limitations of vhost PMD
      • 19.3. Vhost PMD arguments
      • 19.4. Vhost PMD event handling
      • 19.5. Vhost PMD with testpmd application
    • 20. Poll Mode Driver for Paravirtual VMXNET3 NIC
      • 20.1. VMXNET3 Implementation in the DPDK
      • 20.2. Features and Limitations of VMXNET3 PMD
      • 20.3. Prerequisites
      • 20.4. VMXNET3 with a Native NIC Connected to a vSwitch
      • 20.5. VMXNET3 Chaining VMs Connected to a vSwitch
    • 21. Libpcap and Ring Based Poll Mode Drivers
      • 21.1. Using the Drivers from the EAL Command Line
        • 21.1.1. Libpcap-based PMD
        • 21.1.2. Rings-based PMD
        • 21.1.3. Using the Poll Mode Driver from an Application
  • Crypto Device Drivers
    • 1. Crypto Device Supported Functionality Matrices
    • 2. AES-NI Multi Buffer Crypto Poll Mode Driver
      • 2.1. Features
      • 2.2. Limitations
      • 2.3. Installation
      • 2.4. Initialization
    • 3. AES-NI GCM Crypto Poll Mode Driver
      • 3.1. Features
      • 3.2. Initialization
      • 3.3. Limitations
    • 4. KASUMI Crypto Poll Mode Driver
      • 4.1. Features
      • 4.2. Limitations
      • 4.3. Installation
      • 4.4. Initialization
    • 5. Null Crypto Poll Mode Driver
      • 5.1. Features
      • 5.2. Limitations
      • 5.3. Installation
      • 5.4. Initialization
    • 6. SNOW 3G Crypto Poll Mode Driver
      • 6.1. Features
      • 6.2. Limitations
      • 6.3. Installation
      • 6.4. Initialization
    • 7. Quick Assist Crypto Poll Mode Driver
      • 7.1. Features
      • 7.2. Limitations
      • 7.3. Installation
      • 7.4. Installation using 01.org QAT driver
      • 7.5. Installation using kernel.org driver
      • 7.6. Binding the available VFs to the DPDK UIO driver
  • Sample Applications User Guide
    • 1. Introduction
      • 1.1. Documentation Roadmap
    • 2. Command Line Sample Application
      • 2.1. Overview
      • 2.2. Compiling the Application
      • 2.3. Running the Application
      • 2.4. Explanation
        • 2.4.1. EAL Initialization and cmdline Start
        • 2.4.2. Defining a cmdline Context
    • 3. Ethtool Sample Application
      • 3.1. Compiling the Application
      • 3.2. Running the Application
      • 3.3. Using the application
      • 3.4. Explanation
        • 3.4.1. Packet Reflector
        • 3.4.2. Ethtool Shell
      • 3.5. Ethtool interface
    • 4. Exception Path Sample Application
      • 4.1. Overview
      • 4.2. Compiling the Application
      • 4.3. Running the Application
        • 4.3.1. Getting Statistics
      • 4.4. Explanation
        • 4.4.1. Initialization
        • 4.4.2. Packet Forwarding
        • 4.4.3. Managing TAP Interfaces and Bridges
    • 5. Hello World Sample Application
      • 5.1. Compiling the Application
      • 5.2. Running the Application
      • 5.3. Explanation
        • 5.3.1. EAL Initialization
        • 5.3.2. Starting Application Unit Lcores
    • 6. Basic Forwarding Sample Application
      • 6.1. Compiling the Application
      • 6.2. Running the Application
      • 6.3. Explanation
        • 6.3.1. The Main Function
        • 6.3.2. The Port Initialization Function
        • 6.3.3. The Lcores Main
    • 7. RX/TX Callbacks Sample Application
      • 7.1. Compiling the Application
      • 7.2. Running the Application
      • 7.3. Explanation
        • 7.3.1. The Main Function
        • 7.3.2. The Port Initialization Function
        • 7.3.3. The add_timestamps() Callback
        • 7.3.4. The calc_latency() Callback
    • 8. IP Fragmentation Sample Application
      • 8.1. Overview
      • 8.2. Building the Application
      • 8.3. Running the Application
    • 9. IPv4 Multicast Sample Application
      • 9.1. Overview
      • 9.2. Building the Application
      • 9.3. Running the Application
      • 9.4. Explanation
        • 9.4.1. Memory Pool Initialization
        • 9.4.2. Hash Initialization
        • 9.4.3. Forwarding
        • 9.4.4. Buffer Cloning
    • 10. IP Reassembly Sample Application
      • 10.1. Overview
      • 10.2. Compiling the Application
      • 10.3. Running the Application
      • 10.4. Explanation
        • 10.4.1. IPv4 Fragment Table Initialization
        • 10.4.2. Mempools Initialization
        • 10.4.3. Packet Reassembly and Forwarding
        • 10.4.4. Debug logging and Statistics Collection
    • 11. Kernel NIC Interface Sample Application
      • 11.1. Overview
      • 11.2. Compiling the Application
      • 11.3. Loading the Kernel Module
      • 11.4. Running the Application
      • 11.5. KNI Operations
      • 11.6. Explanation
        • 11.6.1. Initialization
        • 11.6.2. Packet Forwarding
        • 11.6.3. Callbacks for Kernel Requests
    • 12. Keep Alive Sample Application
      • 12.1. Overview
      • 12.2. Compiling the Application
      • 12.3. Running the Application
      • 12.4. Explanation
    • 13. L2 Forwarding with Crypto Sample Application
      • 13.1. Overview
      • 13.2. Compiling the Application
      • 13.3. Running the Application
      • 13.4. Explanation
        • 13.4.1. Crypto operation specification
        • 13.4.2. Crypto device initialization
        • 13.4.3. Session creation
        • 13.4.4. Crypto operation creation
        • 13.4.5. Crypto operation enqueuing/dequeuing
    • 14. L2 Forwarding Sample Application (in Real and Virtualized Environments) with core load statistics
      • 14.1. Overview
        • 14.1.1. Virtual Function Setup Instructions
      • 14.2. Compiling the Application
      • 14.3. Running the Application
      • 14.4. Explanation
        • 14.4.1. Command Line Arguments
        • 14.4.2. Mbuf Pool Initialization
        • 14.4.3. Driver Initialization
        • 14.4.4. RX Queue Initialization
        • 14.4.5. TX Queue Initialization
        • 14.4.6. Jobs statistics initialization
        • 14.4.7. Main loop
        • 14.4.8. Receive, Process and Transmit Packets
    • 15. L2 Forwarding Sample Application (in Real and Virtualized Environments)
      • 15.1. Overview
        • 15.1.1. Virtual Function Setup Instructions
      • 15.2. Compiling the Application
      • 15.3. Running the Application
      • 15.4. Explanation
        • 15.4.1. Command Line Arguments
        • 15.4.2. Mbuf Pool Initialization
        • 15.4.3. Driver Initialization
        • 15.4.4. RX Queue Initialization
        • 15.4.5. TX Queue Initialization
        • 15.4.6. Receive, Process and Transmit Packets
    • 16. L2 Forwarding Sample Application with Cache Allocation Technology (CAT)
      • 16.1. Compiling the Application
      • 16.2. Running the Application
      • 16.3. Explanation
        • 16.3.1. The Main Function
    • 17. L3 Forwarding Sample Application
      • 17.1. Overview
      • 17.2. Compiling the Application
      • 17.3. Running the Application
      • 17.4. Explanation
        • 17.4.1. Hash Initialization
        • 17.4.2. LPM Initialization
        • 17.4.3. Packet Forwarding for Hash-based Lookups
        • 17.4.4. Packet Forwarding for LPM-based Lookups
    • 18. L3 Forwarding with Power Management Sample Application
      • 18.1. Introduction
      • 18.2. Overview
      • 18.3. Compiling the Application
      • 18.4. Running the Application
      • 18.5. Explanation
        • 18.5.1. Power Library Initialization
        • 18.5.2. Monitoring Loads of Rx Queues
        • 18.5.3. P-State Heuristic Algorithm
        • 18.5.4. C-State Heuristic Algorithm
    • 19. L3 Forwarding with Access Control Sample Application
      • 19.1. Overview
        • 19.1.1. Tuple Packet Syntax
        • 19.1.2. Access Rule Syntax
        • 19.1.3. ACL and Route Rules
        • 19.1.4. Rules File Example
        • 19.1.5. Application Phases
      • 19.2. Compiling the Application
      • 19.3. Running the Application
      • 19.4. Explanation
        • 19.4.1. Parse Rules from File
        • 19.4.2. Setting Up the ACL Context
    • 20. L3 Forwarding in a Virtualization Environment Sample Application
      • 20.1. Overview
      • 20.2. Compiling the Application
      • 20.3. Running the Application
      • 20.4. Explanation
    • 21. Link Status Interrupt Sample Application
      • 21.1. Overview
      • 21.2. Compiling the Application
      • 21.3. Running the Application
      • 21.4. Explanation
        • 21.4.1. Command Line Arguments
        • 21.4.2. Mbuf Pool Initialization
        • 21.4.3. Driver Initialization
        • 21.4.4. Interrupt Callback Registration
        • 21.4.5. RX Queue Initialization
        • 21.4.6. TX Queue Initialization
        • 21.4.7. Receive, Process and Transmit Packets
    • 22. Load Balancer Sample Application
      • 22.1. Overview
        • 22.1.1. I/O RX Logical Cores
        • 22.1.2. I/O TX Logical Cores
        • 22.1.3. Worker Logical Cores
      • 22.2. Compiling the Application
      • 22.3. Running the Application
      • 22.4. Explanation
        • 22.4.1. Application Configuration
        • 22.4.2. NUMA Support
    • 23. Multi-process Sample Application
      • 23.1. Example Applications
        • 23.1.1. Building the Sample Applications
        • 23.1.2. Basic Multi-process Example
        • 23.1.3. Symmetric Multi-process Example
        • 23.1.4. Client-Server Multi-process Example
        • 23.1.5. Master-slave Multi-process Example
    • 24. QoS Metering Sample Application
      • 24.1. Overview
      • 24.2. Compiling the Application
      • 24.3. Running the Application
      • 24.4. Explanation
    • 25. QoS Scheduler Sample Application
      • 25.1. Overview
      • 25.2. Compiling the Application
      • 25.3. Running the Application
        • 25.3.1. Interactive mode
        • 25.3.2. Example
      • 25.4. Explanation
    • 26. Intel® QuickAssist Technology Sample Application
      • 26.1. Overview
        • 26.1.1. Setup
      • 26.2. Building the Application
      • 26.3. Running the Application
        • 26.3.1. Intel® QuickAssist Technology Configuration Files
        • 26.3.2. Traffic Generator Setup and Application Startup
    • 27. Quota and Watermark Sample Application
      • 27.1. Overview
      • 27.2. Compiling the Application
      • 27.3. Running the Application
        • 27.3.1. Running the Core Application
        • 27.3.2. Running the Control Application
      • 27.4. Code Overview
        • 27.4.1. Core Application - qw
        • 27.4.2. Control Application - qwctl
    • 28. Timer Sample Application
      • 28.1. Compiling the Application
      • 28.2. Running the Application
      • 28.3. Explanation
        • 28.3.1. Initialization and Main Loop
        • 28.3.2. Managing Timers
    • 29. Packet Ordering Application
      • 29.1. Overview
      • 29.2. Compiling the Application
      • 29.3. Running the Application
        • 29.3.1. Application Command Line
    • 30. VMDQ and DCB Forwarding Sample Application
      • 30.1. Overview
      • 30.2. Compiling the Application
      • 30.3. Running the Application
      • 30.4. Explanation
        • 30.4.1. Initialization
        • 30.4.2. Statistics Display
    • 31. Vhost Sample Application
      • 31.1. Background
      • 31.2. Sample Code Overview
      • 31.3. Supported Distributions
      • 31.4. Prerequisites
        • 31.4.1. Installing Packages on the Host (vhost cuse required)
        • 31.4.2. QEMU simulator
        • 31.4.3. Setting up the Execution Environment
        • 31.4.4. Setting up the Guest Execution Environment
      • 31.5. Compiling the Sample Code
      • 31.6. Running the Sample Code
        • 31.6.1. Parameters
      • 31.7. Running the Virtual Machine (QEMU)
        • 31.7.1. Redirecting QEMU to vhost-net Sample Code (vhost cuse)
        • 31.7.2. Mapping the Virtual Machine’s Memory
        • 31.7.3. QEMU Wrapper Script
        • 31.7.4. Libvirt Integration
        • 31.7.5. Common Issues
      • 31.8. Running DPDK in the Virtual Machine
        • 31.8.1. Testpmd MAC Forwarding
        • 31.8.2. Running Testpmd
      • 31.9. Passing Traffic to the Virtual Machine Device
      • 31.10. Running virtio_user with vhost-switch
    • 32. Netmap Compatibility Sample Application
      • 32.1. Introduction
      • 32.2. Available APIs
      • 32.3. Caveats
      • 32.4. Porting Netmap Applications
      • 32.5. Compiling the “bridge” Sample Application
      • 32.6. Running the “bridge” Sample Application
    • 33. Internet Protocol (IP) Pipeline Application
      • 33.1. Application overview
      • 33.2. Design goals
        • 33.2.1. Rapid development
        • 33.2.2. Flexibility
        • 33.2.3. Performance
        • 33.2.4. Debug capabilities
      • 33.3. Running the application
      • 33.4. Application stages
        • 33.4.1. Configuration
        • 33.4.2. Configuration checking
        • 33.4.3. Initialization
        • 33.4.4. Run-time
      • 33.5. Configuration file syntax
        • 33.5.1. Syntax overview
        • 33.5.2. Application resources present in the configuration file
        • 33.5.3. Rules to parse the configuration file
        • 33.5.4. PIPELINE section
        • 33.5.5. MEMPOOL section
        • 33.5.6. LINK section
        • 33.5.7. RXQ section
        • 33.5.8. TXQ section
        • 33.5.9. SWQ section
        • 33.5.10. TM section
        • 33.5.11. KNI section
        • 33.5.12. SOURCE section
        • 33.5.13. SINK section
        • 33.5.14. MSGQ section
        • 33.5.15. EAL section
      • 33.6. Library of pipeline types
        • 33.6.1. Pipeline module
        • 33.6.2. List of pipeline types
      • 33.7. Command Line Interface (CLI)
        • 33.7.1. Global CLI commands
        • 33.7.2. CLI commands for link configuration
        • 33.7.3. CLI commands common for all pipeline types
        • 33.7.4. Pipeline type specific CLI commands
    • 34. Test Pipeline Application
      • 34.1. Overview
      • 34.2. Compiling the Application
      • 34.3. Running the Application
        • 34.3.1. Application Command Line
        • 34.3.2. Table Types and Behavior
        • 34.3.3. Input Traffic
    • 35. Distributor Sample Application
      • 35.1. Overview
      • 35.2. Compiling the Application
      • 35.3. Running the Application
      • 35.4. Explanation
      • 35.5. Debug Logging Support
      • 35.6. Statistics
      • 35.7. Application Initialization
    • 36. VM Power Management Application
      • 36.1. Introduction
      • 36.2. Overview
        • 36.2.1. Performance Considerations
      • 36.3. Configuration
        • 36.3.1. BIOS
        • 36.3.2. Host Operating System
        • 36.3.3. Hypervisor Channel Configuration
      • 36.4. Compiling and Running the Host Application
        • 36.4.1. Compiling
        • 36.4.2. Running
      • 36.5. Compiling and Running the Guest Applications
        • 36.5.1. Compiling
        • 36.5.2. Running
    • 37. TEP termination Sample Application
      • 37.1. Background
      • 37.2. Sample Code Overview
      • 37.3. Supported Distributions
      • 37.4. Prerequisites
      • 37.5. Compiling the Sample Code
      • 37.6. Running the Sample Code
        • 37.6.1. Parameters
      • 37.7. Running the Virtual Machine (QEMU)
      • 37.8. Running DPDK in the Virtual Machine
      • 37.9. Passing Traffic to the Virtual Machine Device
    • 38. PTP Client Sample Application
      • 38.1. Limitations
      • 38.2. How the Application Works
      • 38.3. Compiling the Application
      • 38.4. Running the Application
      • 38.5. Code Explanation
        • 38.5.1. The Main Function
        • 38.5.2. The Lcores Main
        • 38.5.3. PTP parsing
    • 39. Performance Thread Sample Application
      • 39.1. Overview
      • 39.2. Compiling the Application
      • 39.3. Running the Application
        • 39.3.1. Running with L-threads
        • 39.3.2. Running with EAL threads
        • 39.3.3. Examples
      • 39.4. Explanation
        • 39.4.1. Mode of operation with EAL threads
        • 39.4.2. Mode of operation with L-threads
        • 39.4.3. CPU load statistics
      • 39.5. The L-thread subsystem
        • 39.5.1. Comparison between L-threads and POSIX pthreads
        • 39.5.2. Constraints and performance implications when using L-threads
        • 39.5.3. Porting legacy code to run on L-threads
        • 39.5.4. Pthread shim
        • 39.5.5. L-thread Diagnostics
    • 40. IPsec Security Gateway Sample Application
      • 40.1. Overview
      • 40.2. Constraints
      • 40.3. Compiling the Application
      • 40.4. Running the Application
      • 40.5. Configurations
        • 40.5.1. Security Policy Initialization
        • 40.5.2. Security Association Initialization
        • 40.5.3. Routing Initialization
  • Tool User Guides
    • 1. dpdk-procinfo Application
      • 1.1. Running the Application
        • 1.1.1. Parameters
    • 2. dpdk-pdump Application
      • 2.1. Running the Application
        • 2.1.1. The --pdump parameters
      • 2.2. Example
    • 3. dpdk-pmdinfo Application
      • 3.1. Running the Application
    • 4. dpdk-devbind Application
      • 4.1. Running the Application
      • 4.2. OPTIONS
      • 4.3. Examples
  • Testpmd Application User Guide
    • 1. Introduction
    • 2. Compiling the Application
    • 3. Running the Application
      • 3.1. EAL Command-line Options
      • 3.2. Testpmd Command-line Options
    • 4. Testpmd Runtime Functions
      • 4.1. Help Functions
      • 4.2. Control Functions
        • 4.2.1. start
        • 4.2.2. start tx_first
        • 4.2.3. stop
        • 4.2.4. quit
      • 4.3. Display Functions
        • 4.3.1. show port
        • 4.3.2. show port rss reta
        • 4.3.3. show port rss-hash
        • 4.3.4. clear port
        • 4.3.5. show (rxq|txq)
        • 4.3.6. show config
        • 4.3.7. set fwd
        • 4.3.8. read rxd
        • 4.3.9. read txd
      • 4.4. Configuration Functions
        • 4.4.1. set default
        • 4.4.2. set verbose
        • 4.4.3. set nbport
        • 4.4.4. set nbcore
        • 4.4.5. set coremask
        • 4.4.6. set portmask
        • 4.4.7. set burst
        • 4.4.8. set txpkts
        • 4.4.9. set txsplit
        • 4.4.10. set corelist
        • 4.4.11. set portlist
        • 4.4.12. vlan set strip
        • 4.4.13. vlan set stripq
        • 4.4.14. vlan set filter
        • 4.4.15. vlan set qinq
        • 4.4.16. vlan set tpid
        • 4.4.17. rx_vlan add
        • 4.4.18. rx_vlan rm
        • 4.4.19. rx_vlan add (for VF)
        • 4.4.20. rx_vlan rm (for VF)
        • 4.4.21. tunnel_filter add
        • 4.4.22. tunnel_filter remove
        • 4.4.23. rx_vxlan_port add
        • 4.4.24. rx_vxlan_port remove
        • 4.4.25. tx_vlan set
        • 4.4.26. tx_vlan set pvid
        • 4.4.27. tx_vlan reset
        • 4.4.28. csum set
        • 4.4.29. csum parse-tunnel
        • 4.4.30. csum show
        • 4.4.31. tso set
        • 4.4.32. tso show
        • 4.4.33. mac_addr add
        • 4.4.34. mac_addr remove
        • 4.4.35. mac_addr add (for VF)
        • 4.4.36. set port-uta
        • 4.4.37. set promisc
        • 4.4.38. set allmulti
        • 4.4.39. set flow_ctrl rx
        • 4.4.40. set pfc_ctrl rx
        • 4.4.41. set stat_qmap
        • 4.4.42. set port - rx/tx (for VF)
        • 4.4.43. set port - mac address filter (for VF)
        • 4.4.44. set port - rx mode (for VF)
        • 4.4.45. set port - tx_rate (for Queue)
        • 4.4.46. set port - tx_rate (for VF)
        • 4.4.47. set port - mirror rule
        • 4.4.48. reset port - mirror rule
        • 4.4.49. set flush_rx
        • 4.4.50. set bypass mode
        • 4.4.51. set bypass event
        • 4.4.52. set bypass timeout
        • 4.4.53. show bypass config
        • 4.4.54. set link up
        • 4.4.55. set link down
        • 4.4.56. E-tag set
      • 4.5. Port Functions
        • 4.5.1. port attach
        • 4.5.2. port detach
        • 4.5.3. port start
        • 4.5.4. port stop
        • 4.5.5. port close
        • 4.5.6. port start/stop queue
        • 4.5.7. port config - speed
        • 4.5.8. port config - queues/descriptors
        • 4.5.9. port config - max-pkt-len
        • 4.5.10. port config - CRC Strip
        • 4.5.11. port config - scatter
        • 4.5.12. port config - TX queue flags
        • 4.5.13. port config - RX Checksum
        • 4.5.14. port config - VLAN
        • 4.5.15. port config - VLAN filter
        • 4.5.16. port config - VLAN strip
        • 4.5.17. port config - VLAN extend
        • 4.5.18. port config - Drop Packets
        • 4.5.19. port config - RSS
        • 4.5.20. port config - RSS Reta
        • 4.5.21. port config - DCB
        • 4.5.22. port config - Burst
        • 4.5.23. port config - Threshold
        • 4.5.24. port config - E-tag
      • 4.6. Link Bonding Functions
        • 4.6.1. create bonded device
        • 4.6.2. add bonding slave
        • 4.6.3. remove bonding slave
        • 4.6.4. set bonding mode
        • 4.6.5. set bonding primary
        • 4.6.6. set bonding mac
        • 4.6.7. set bonding xmit_balance_policy
        • 4.6.8. set bonding mon_period
        • 4.6.9. show bonding config
      • 4.7. Register Functions
        • 4.7.1. read reg
        • 4.7.2. read regfield
        • 4.7.3. read regbit
        • 4.7.4. write reg
        • 4.7.5. write regfield
        • 4.7.6. write regbit
      • 4.8. Filter Functions
        • 4.8.1. ethertype_filter
        • 4.8.2. 2tuple_filter
        • 4.8.3. 5tuple_filter
        • 4.8.4. syn_filter
        • 4.8.5. flex_filter
        • 4.8.6. flow_director_filter
        • 4.8.7. flush_flow_director
        • 4.8.8. flow_director_mask
        • 4.8.9. flow_director_flex_mask
        • 4.8.10. flow_director_flex_payload
        • 4.8.11. get_sym_hash_ena_per_port
        • 4.8.12. set_sym_hash_ena_per_port
        • 4.8.13. get_hash_global_config
        • 4.8.14. set_hash_global_config
        • 4.8.15. set_hash_input_set
        • 4.8.16. set_fdir_input_set
        • 4.8.17. global_config
  • FAQ
    • 1. What does “EAL: map_all_hugepages(): open failed: Permission denied Cannot init memory” mean?
    • 2. If I want to change the number of TLB Hugepages allocated, how do I remove the original pages allocated?
    • 3. If I execute “l2fwd -c f -m 64 -n 3 -- -p 3”, I get the following output, indicating that there are no socket 0 hugepages to allocate the mbuf and ring structures to?
    • 4. I am running a 32-bit DPDK application on a NUMA system, and sometimes the application initializes fine but cannot allocate memory. Why is that happening?
    • 5. On application startup, there is a lot of EAL information printed. Is there any way to reduce this?
    • 6. How can I tune my network application to achieve lower latency?
    • 7. Without NUMA enabled, my network throughput is low, why?
    • 8. I am getting errors about not being able to open files. Why?
    • 9. VF driver for IXGBE devices cannot be initialized
    • 10. Is it safe to add an entry to the hash table while running?
    • 11. What is the purpose of setting iommu=pt?
    • 12. When trying to send packets from an application to itself, meaning smac==dmac, using Intel(R) 82599 VF, packets are lost.
    • 13. Can I split packet RX to use DPDK and have an application’s higher order functions continue using Linux pthread?
    • 14. Is it possible to exchange data between DPDK processes and regular userspace processes via some shared memory or IPC mechanism?
    • 15. Can the multiple queues in Intel(R) I350 be used with DPDK?
    • 16. How can hugepage-backed memory be shared among multiple processes?
  • How To User Guides
    • 1. Live Migration of VM with SR-IOV VF
      • 1.1. Overview
      • 1.2. Test Setup
      • 1.3. Live Migration steps
        • 1.3.1. On host_server_1: Terminal 1
        • 1.3.2. On host_server_1: Terminal 2
        • 1.3.3. On host_server_1: Terminal 1
        • 1.3.4. On host_server_1: Terminal 2
        • 1.3.5. On host_server_1: Terminal 1
        • 1.3.6. On host_server_2: Terminal 1
        • 1.3.7. On host_server_2: Terminal 2
        • 1.3.8. On host_server_1: Terminal 2
        • 1.3.9. On host_server_2: Terminal 1
        • 1.3.10. On host_server_2: Terminal 2
        • 1.3.11. On host_server_2: Terminal 1
      • 1.4. Sample host scripts
        • 1.4.1. setup_vf_on_212_46.sh
        • 1.4.2. vm_virtio_vf_one_212_46.sh
        • 1.4.3. setup_bridge_on_212_46.sh
        • 1.4.4. connect_to_qemu_mon_on_host.sh
        • 1.4.5. setup_vf_on_212_131.sh
        • 1.4.6. vm_virtio_one_migrate.sh
        • 1.4.7. setup_bridge_on_212_131.sh
      • 1.5. Sample VM scripts
        • 1.5.1. setup_dpdk_in_vm.sh
        • 1.5.2. run_testpmd_bonding_in_vm.sh
      • 1.6. Sample switch configuration
        • 1.6.1. On Switch: Terminal 1
        • 1.6.2. On Switch: Terminal 2
        • 1.6.3. On Switch: Terminal 1
        • 1.6.4. Sample switch configuration script
    • 2. Live Migration of VM with Virtio on host running vhost_user
      • 2.1. Overview
      • 2.2. Test Setup
      • 2.3. Live Migration steps
        • 2.3.1. On host_server_1: Terminal 1
        • 2.3.2. On host_server_1: Terminal 2
        • 2.3.3. On host_server_1: Terminal 3
        • 2.3.4. On host_server_1: Terminal 1
        • 2.3.5. On host_server_1: Terminal 4
        • 2.3.6. On host_server_1: Terminal 1
        • 2.3.7. On host_server_2: Terminal 1
        • 2.3.8. On host_server_2: Terminal 2
        • 2.3.9. On host_server_2: Terminal 3
        • 2.3.10. On host_server_2: Terminal 1
        • 2.3.11. On host_server_2: Terminal 4
        • 2.3.12. On host_server_1: Terminal 4
        • 2.3.13. On host_server_2: Terminal 1
        • 2.3.14. On host_server_2: Terminal 4
        • 2.3.15. On host_server_2: Terminal 1
      • 2.4. Sample host scripts
        • 2.4.1. reset_vf_on_212_46.sh
        • 2.4.2. vm_virtio_vhost_user.sh
        • 2.4.3. connect_to_qemu_mon_on_host.sh
        • 2.4.4. reset_vf_on_212_131.sh
        • 2.4.5. vm_virtio_vhost_user_migrate.sh
      • 2.5. Sample VM scripts
        • 2.5.1. setup_dpdk_virtio_in_vm.sh
        • 2.5.2. run_testpmd_in_vm.sh
    • 3. Flow Bifurcation How-to Guide
      • 3.1. Using Flow Bifurcation on IXGBE in Linux
      • 3.2. Using Flow Bifurcation on I40E in Linux
  • Release Notes
    • 1. Description of Release
    • 2. DPDK Release 16.07
      • 2.1. New Features
      • 2.2. Resolved Issues
        • 2.2.1. EAL
        • 2.2.2. Drivers
        • 2.2.3. Libraries
        • 2.2.4. Examples
        • 2.2.5. Other
      • 2.3. Known Issues
      • 2.4. API Changes
      • 2.5. ABI Changes
      • 2.6. Shared Library Versions
      • 2.7. Tested Platforms
      • 2.8. Tested NICs
      • 2.9. Tested OSes
    • 3. DPDK Release 16.04
      • 3.1. New Features
      • 3.2. Resolved Issues
        • 3.2.1. Drivers
        • 3.2.2. Libraries
        • 3.2.3. Examples
      • 3.3. API Changes
      • 3.4. ABI Changes
      • 3.5. Shared Library Versions
      • 3.6. Tested Platforms
      • 3.7. Tested NICs
    • 4. DPDK Release 2.2
      • 4.1. New Features
      • 4.2. Resolved Issues
        • 4.2.1. EAL
        • 4.2.2. Drivers
        • 4.2.3. Libraries
        • 4.2.4. Examples
        • 4.2.5. Other
      • 4.3. Known Issues
      • 4.4. API Changes
      • 4.5. ABI Changes
      • 4.6. Shared Library Versions
    • 5. DPDK Release 2.1
      • 5.1. New Features
      • 5.2. Resolved Issues
      • 5.3. Known Issues
      • 5.4. API Changes
      • 5.5. ABI Changes
    • 6. DPDK Release 2.0
      • 6.1. New Features
    • 7. DPDK Release 1.8
      • 7.1. New Features
    • 8. Supported Operating Systems
    • 9. Known Issues and Limitations in Legacy Releases
      • 9.1. Unit Test for Link Bonding may fail at test_tlb_tx_burst()
      • 9.2. Pause Frame Forwarding does not work properly on igb
      • 9.3. In packets provided by the PMD, some flags are missing
      • 9.4. The rte_malloc library is not fully implemented
      • 9.5. HPET reading is slow
      • 9.6. HPET timers do not work on the Osage customer reference platform
      • 9.7. Not all variants of supported NIC types have been used in testing
      • 9.8. Multi-process sample app requires exact memory mapping
      • 9.9. Packets are not sent by the 1 GbE/10 GbE SR-IOV driver when the source MAC is not the MAC assigned to the VF NIC
      • 9.10. SR-IOV drivers do not fully implement the rte_ethdev API
      • 9.11. PMD does not work with --no-huge EAL command line parameter
      • 9.12. Some hardware off-load functions are not supported by the VF Driver
      • 9.13. Kernel crash on IGB port unbinding
      • 9.14. Twinpond and Ironpond NICs do not report link status correctly
      • 9.15. Discrepancies between statistics reported by different NICs
      • 9.16. Error reported opening files on DPDK initialization
      • 9.17. Intel® QuickAssist Technology sample application does not work on a 32-bit OS on Shumway
      • 9.18. Differences in how different Intel NICs handle maximum packet length for jumbo frame
      • 9.19. Binding PCI devices to igb_uio fails on Linux kernel 3.9 when more than one device is used
      • 9.20. GCC might generate Intel® AVX instructions for processors without Intel® AVX support
      • 9.21. Ethertype filter could receive other packets (non-assigned) in Niantic
      • 9.22. Cannot set link speed on Intel® 40G Ethernet controller
      • 9.23. Devices bound to igb_uio with VT-d enabled do not work on Linux kernel 3.15-3.17
      • 9.24. VM power manager may not work on systems with more than 64 cores
      • 9.25. DPDK may not build on some Intel CPUs using clang < 3.7.0
      • 9.26. The last EAL argument is replaced by the program name in argv[]
      • 9.27. I40e VF may not receive packets in the promiscuous mode
    • 10. ABI and API Deprecation
      • 10.1. Deprecation Notices
  • Contributor’s Guidelines
    • 1. DPDK Coding Style
      • 1.1. Description
      • 1.2. General Guidelines
      • 1.3. C Comment Style
        • 1.3.1. Usual Comments
        • 1.3.2. License Header
      • 1.4. C Preprocessor Directives
        • 1.4.1. Header Includes
        • 1.4.2. Header File Guards
        • 1.4.3. Macros
        • 1.4.4. Conditional Compilation
      • 1.5. C Types
        • 1.5.1. Integers
        • 1.5.2. Enumerations
        • 1.5.3. Bitfields
        • 1.5.4. Variable Declarations
        • 1.5.5. Structure Declarations
        • 1.5.6. Queues
        • 1.5.7. Typedefs
      • 1.6. C Indentation
        • 1.6.1. General
        • 1.6.2. Control Statements and Loops
        • 1.6.3. Function Calls
        • 1.6.4. Operators
        • 1.6.5. Exit
        • 1.6.6. Local Variables
        • 1.6.7. Casts and sizeof
      • 1.7. C Function Definition, Declaration and Use
        • 1.7.1. Prototypes
        • 1.7.2. Definitions
      • 1.8. C Statement Style and Conventions
        • 1.8.1. NULL Pointers
        • 1.8.2. Return Value
        • 1.8.3. Logging and Errors
        • 1.8.4. Branch Prediction
        • 1.8.5. Static Variables and Functions
        • 1.8.6. Const Attribute
        • 1.8.7. Inline ASM in C code
        • 1.8.8. Control Statements
      • 1.9. Python Code
    • 2. Design
      • 2.1. Environment or Architecture-specific Sources
        • 2.1.1. Per Architecture Sources
        • 2.1.2. Per Execution Environment Sources
      • 2.2. Library Statistics
        • 2.2.1. Description
        • 2.2.2. Mechanism to allow the application to turn library statistics on and off
        • 2.2.3. Prevention of ABI changes due to library statistics support
        • 2.2.4. Motivation to allow the application to turn library statistics on and off
    • 3. Managing ABI updates
      • 3.1. Description
      • 3.2. General Guidelines
      • 3.3. What is an ABI
      • 3.4. The DPDK ABI policy
      • 3.5. Examples of Deprecation Notices
      • 3.6. Versioning Macros
      • 3.7. Examples of ABI Macro use
        • 3.7.1. Updating a public API
        • 3.7.2. Deprecating part of a public API
        • 3.7.3. Deprecating an entire ABI version
      • 3.8. Running the ABI Validator
    • 4. DPDK Documentation Guidelines
      • 4.1. Structure of the Documentation
      • 4.2. Role of the Documentation
      • 4.3. Building the Documentation
        • 4.3.1. Dependencies
        • 4.3.2. Build commands
      • 4.4. Document Guidelines
      • 4.5. RST Guidelines
        • 4.5.1. Line Length
        • 4.5.2. Whitespace
        • 4.5.3. Section Headers
        • 4.5.4. Lists
        • 4.5.5. Code and Literal block sections
        • 4.5.6. Images
        • 4.5.7. Tables
        • 4.5.8. Hyperlinks
      • 4.6. Doxygen Guidelines
    • 5. Contributing Code to DPDK
      • 5.1. The DPDK Development Process
      • 5.2. Getting the Source Code
      • 5.3. Make your Changes
      • 5.4. Commit Messages: Subject Line
      • 5.5. Commit Messages: Body
      • 5.6. Creating Patches
      • 5.7. Checking the Patches
      • 5.8. Checking Compilation
      • 5.9. Sending Patches
      • 5.10. The Review Process
    • 6. Patch Cheatsheet
 

21. Link Status Interrupt Sample Application

The Link Status Interrupt sample application is a simple example of packet processing using the Data Plane Development Kit (DPDK) that demonstrates how network link status changes for a network port can be captured and used by a DPDK application.

21.1. Overview

The Link Status Interrupt sample application registers a user space callback for the link status interrupt of each port and performs L2 forwarding for each packet that is received on an RX_PORT. The following operations are performed:

  • RX_PORT and TX_PORT are paired with available ports one-by-one according to the core mask
  • The source MAC address is replaced by the TX_PORT MAC address
  • The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID

This application can be used to demonstrate the use of the link status interrupt, its user space callbacks, and the behavior of L2 forwarding each time the link status changes.
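
The last two operations above are the standard L2 forwarding MAC rewrite. The sketch below illustrates them; it is modeled on the example sources, and the lsi_dst_ports and lsi_ports_eth_addr arrays and the lsi_send_packet() helper are assumed from those sources rather than defined here:

static void
lsi_simple_forward(struct rte_mbuf *m, unsigned portid)
{
    struct ether_hdr *eth;
    unsigned dst_port = lsi_dst_ports[portid]; /* the paired TX_PORT */

    eth = rte_pktmbuf_mtod(m, struct ether_hdr *);

    /* Destination MAC becomes 02:00:00:00:00:TX_PORT_ID. The 64-bit store
     * spills into the source address field, which the copy below rewrites
     * anyway. */
    *((uint64_t *)&eth->d_addr.addr_bytes[0]) =
        0x000000000002 + ((uint64_t)dst_port << 40);

    /* Source MAC becomes the MAC address of TX_PORT. */
    ether_addr_copy(&lsi_ports_eth_addr[dst_port], &eth->s_addr);

    lsi_send_packet(m, (uint8_t)dst_port);
}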

21.2. Compiling the Application

  1. Go to the example directory:

    export RTE_SDK=/path/to/rte_sdk
    cd ${RTE_SDK}/examples/link_status_interrupt
    
  2. Set the target (a default target is used if not specified). For example:

    export RTE_TARGET=x86_64-native-linuxapp-gcc
    

    See the DPDK Getting Started Guide for possible RTE_TARGET values.

  3. Build the application:

    make
    

Note

The compiled application is written to the build subdirectory. To have the application written to a different location, the O=/path/to/build/directory option may be specified on the make command line.

21.3. Running the Application

The application requires a number of command line options:

./build/link_status_interrupt [EAL options] -- -p PORTMASK [-q NQ][-T PERIOD]

where,

  • -p PORTMASK: A hexadecimal bitmask of the ports to configure
  • -q NQ: The number of queues (= ports) per lcore (default is 1)
  • -T PERIOD: Statistics will be refreshed every PERIOD seconds (0 to disable; default is 10)

To run the application in a linuxapp environment with 4 lcores, 4 memory channels, 16 ports and 8 RX queues per lcore, issue the command:

$ ./build/link_status_interrupt -c f -n 4 -- -q 8 -p ffff

Refer to the DPDK Getting Started Guide for general information on running applications and the Environment Abstraction Layer (EAL) options.

21.4. Explanation

The following sections provide some explanation of the code.

21.4.1. Command Line Arguments

The Link Status Interrupt sample application takes specific parameters, in addition to Environment Abstraction Layer (EAL) arguments (see Section Running the Application).

Command line parsing is done in the same way as it is done in the L2 Forwarding Sample Application. See Command Line Arguments for more information.
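For reference, a minimal sketch of how a hexadecimal port mask of this kind can be parsed; the helper name and exact checks here are illustrative, not copied from the sample's source:

#include <stdlib.h>

/* Illustrative parser for a hexadecimal port mask such as "ffff";
 * returns the mask, or -1 if the string is empty, malformed or zero. */
static int
parse_portmask(const char *portmask)
{
    char *end = NULL;
    unsigned long pm;

    pm = strtoul(portmask, &end, 16); /* parse as base 16 */
    if ((portmask[0] == '\0') || (end == NULL) || (*end != '\0'))
        return -1;

    if (pm == 0)
        return -1;

    return (int)pm;
}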

21.4.2. Mbuf Pool Initialization

Mbuf pool initialization is done in the same way as it is done in the L2 Forwarding Sample Application. See Mbuf Pool Initialization for more information.

21.4.3. Driver Initialization

The main part of the code in the main() function relates to the initialization of the driver. To fully understand this code, it is recommended to study the chapters related to the Poll Mode Driver in the DPDK Programmer’s Guide and the DPDK API Reference.

if (rte_eal_pci_probe() < 0)
    rte_exit(EXIT_FAILURE, "Cannot probe PCI\n");

nb_ports = rte_eth_dev_count();
if (nb_ports == 0)
    rte_exit(EXIT_FAILURE, "No Ethernet ports - bye\n");

/*
 * Each logical core is assigned a dedicated TX queue on each port.
 */

for (portid = 0; portid < nb_ports; portid++) {
    /* skip ports that are not enabled */

    if ((lsi_enabled_port_mask & (1 << portid)) == 0)
        continue;

    /* save the destination port id */

    if (nb_ports_in_mask % 2) {
        lsi_dst_ports[portid] = portid_last;
        lsi_dst_ports[portid_last] = portid;
    }
    else
        portid_last = portid;

    nb_ports_in_mask++;

    rte_eth_dev_info_get((uint8_t) portid, &dev_info);
}

Observe that:

  • rte_eal_pci_probe() parses the devices on the PCI bus and initializes recognized devices.

The next step is to configure the RX and TX queues. For each port, there is only one RX queue (only one lcore is able to poll a given port). The number of TX queues depends on the number of available lcores. The rte_eth_dev_configure() function is used to configure the number of queues for a port:

ret = rte_eth_dev_configure((uint8_t) portid, 1, 1, &port_conf);
if (ret < 0)
    rte_exit(EXIT_FAILURE, "Cannot configure device: err=%d, port=%u\n", ret, portid);

The global configuration is stored in a static structure:

static const struct rte_eth_conf port_conf = {
    .rxmode = {
        .split_hdr_size = 0,
        .header_split = 0,   /**< Header Split disabled */
        .hw_ip_checksum = 0, /**< IP checksum offload disabled */
        .hw_vlan_filter = 0, /**< VLAN filtering disabled */
        .hw_strip_crc = 0,   /**< CRC stripping by hardware disabled */
    },
    .txmode = {},
    .intr_conf = {
        .lsc = 1, /**< link status interrupt feature enabled */
    },
};

Configuring lsc to 0 (the default) disables the generation of link status change interrupts in kernel space, so no user space interrupt event is received; in that case, the public interface rte_eth_link_get() accesses the NIC registers directly to update the link status. Configuring lsc to a non-zero value enables the generation of link status change interrupts in kernel space whenever the link status changes, and the user space callbacks registered by the application are called; in that case, rte_eth_link_get() simply reads the link status from a global structure that is updated only in the interrupt host thread.
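Both query styles are available to the application. A brief hedged sketch of their use (standard ethdev calls, using the uint8_t port numbering found throughout this chapter):

struct rte_eth_link link;

/* Blocking variant: may wait for the link to come up before returning. */
rte_eth_link_get(portid, &link);

/* Non-blocking variant: returns the last known status immediately;
 * this is the call used inside the sample's interrupt callback. */
rte_eth_link_get_nowait(portid, &link);

if (link.link_status)
    printf("Port %u Link Up - speed %u Mbps\n",
           portid, (unsigned)link.link_speed);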

21.4.4. Interrupt Callback Registration

The application can register one or more callbacks for a specific port and interrupt event. An example callback function is shown below.

static void
lsi_event_callback(uint8_t port_id, enum rte_eth_event_type type, void *param)
{
    struct rte_eth_link link;

    RTE_SET_USED(param);

    printf("\n\nIn registered callback...\n");

    printf("Event type: %s\n", type == RTE_ETH_EVENT_INTR_LSC ? "LSC interrupt" : "unknown event");

    rte_eth_link_get_nowait(port_id, &link);

    if (link.link_status) {
        printf("Port %d Link Up - speed %u Mbps - %s\n\n", port_id, (unsigned)link.link_speed,
              (link.link_duplex == ETH_LINK_FULL_DUPLEX) ? ("full-duplex") : ("half-duplex"));
    } else
        printf("Port %d Link Down\n\n", port_id);
}

This function is called when a link status interrupt occurs on the corresponding port. The port_id parameter indicates which port the interrupt applies to. The type parameter identifies the interrupt event type; currently it can only be RTE_ETH_EVENT_INTR_LSC, but other types may be added in the future. The param parameter is the address of the parameter supplied when the callback was registered. This function should be implemented with care, since it is called in the interrupt host thread, which is different from the main thread of its caller.
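Because of that threading constraint, heavy processing is best deferred out of the callback. A hypothetical pattern (not part of the sample) is to record the event and let the main loop act on it:

/* Hypothetical flag array, one entry per port; the interrupt host
 * thread sets a flag, the main loop polls and clears it. */
static volatile uint8_t link_changed[RTE_MAX_ETHPORTS];

static void
deferred_lsc_callback(uint8_t port_id, enum rte_eth_event_type type,
                      void *param)
{
    RTE_SET_USED(param);
    if (type == RTE_ETH_EVENT_INTR_LSC)
        link_changed[port_id] = 1;
}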

The application registers the lsi_event_callback and a NULL parameter to the link status interrupt event on each port:

rte_eth_dev_callback_register((uint8_t)portid, RTE_ETH_EVENT_INTR_LSC, lsi_event_callback, NULL);

This registration can be done only after calling the rte_eth_dev_configure() function and before calling any other function. If lsc is initialized to 0, the callback is never called, since no interrupt event is ever generated.
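A registered callback can later be removed with the matching unregister call; the event type, function pointer and argument must match the ones used at registration (a sketch, reusing the names above):

rte_eth_dev_callback_unregister((uint8_t)portid,
        RTE_ETH_EVENT_INTR_LSC, lsi_event_callback, NULL);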

21.4.5. RX Queue Initialization

The application uses one lcore to poll one or several ports, depending on the -q option, which specifies the number of queues per lcore.

For example, if the user specifies -q 4, the application is able to poll four ports with one lcore. If there are 16 ports on the target (and if the portmask argument is -p ffff), the application will need four lcores to poll all the ports.

ret = rte_eth_rx_queue_setup((uint8_t) portid, 0, nb_rxd, SOCKET0, &rx_conf, lsi_pktmbuf_pool);
if (ret < 0)
    rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup: err=%d, port=%u\n", ret, portid);

The list of queues that must be polled for a given lcore is stored in a private structure called struct lcore_queue_conf.

struct lcore_queue_conf {
    unsigned n_rx_port;
    unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
    unsigned tx_queue_id;
    struct mbuf_table tx_mbufs[LSI_MAX_PORTS];
} __rte_cache_aligned;

struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
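The mbuf_table type is not shown in this guide. Given how it is used later (a len counter and an m_table array filled with up to MAX_PKT_BURST entries), it is presumably defined along these lines:

struct mbuf_table {
    unsigned len;                            /* number of queued mbufs */
    struct rte_mbuf *m_table[MAX_PKT_BURST]; /* packets pending TX */
};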

The n_rx_port and rx_port_list[] fields are used in the main packet processing loop (see Receive, Process and Transmit Packets).

The global configuration for the RX queues is stored in a static structure:

static const struct rte_eth_rxconf rx_conf = {
    .rx_thresh = {
        .pthresh = RX_PTHRESH,
        .hthresh = RX_HTHRESH,
        .wthresh = RX_WTHRESH,
    },
};
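RX_PTHRESH, RX_HTHRESH and RX_WTHRESH are the prefetch, host and write-back threshold constants defined near the top of the sample's source. Illustrative definitions only; the exact values may differ between DPDK releases:

#define RX_PTHRESH 8 /**< Default prefetch threshold of RX rings. */
#define RX_HTHRESH 8 /**< Default host threshold of RX rings. */
#define RX_WTHRESH 4 /**< Default write-back threshold of RX rings. */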

21.4.6. TX Queue Initialization

Each lcore should be able to transmit on any port. For every port, a single TX queue is initialized.

/* init one TX queue on each port */

fflush(stdout);

ret = rte_eth_tx_queue_setup(portid, 0, nb_txd, rte_eth_dev_socket_id(portid), &tx_conf);
if (ret < 0)
    rte_exit(EXIT_FAILURE, "rte_eth_tx_queue_setup: err=%d,port=%u\n", ret, (unsigned) portid);

The global configuration for TX queues is stored in a static structure:

static const struct rte_eth_txconf tx_conf = {
    .tx_thresh = {
        .pthresh = TX_PTHRESH,
        .hthresh = TX_HTHRESH,
        .wthresh = TX_WTHRESH,
    },
    .tx_free_thresh = RTE_TEST_TX_DESC_DEFAULT + 1, /* disable feature */
};
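TX_PTHRESH, TX_HTHRESH and TX_WTHRESH are the corresponding TX ring constants. Setting tx_free_thresh to one more than the ring size (RTE_TEST_TX_DESC_DEFAULT) effectively disables the early descriptor free threshold, as the comment notes. Illustrative definitions (again, values may differ between releases):

#define TX_PTHRESH 36 /**< Default prefetch threshold of TX rings. */
#define TX_HTHRESH 0  /**< Default host threshold of TX rings. */
#define TX_WTHRESH 0  /**< Default write-back threshold of TX rings. */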

21.4.7. Receive, Process and Transmit Packets

In the lsi_main_loop() function, the main task is to read ingress packets from the RX queues. This is done using the following code:

/*
 *   Read packet from RX queues
 */

for (i = 0; i < qconf->n_rx_port; i++) {
    portid = qconf->rx_port_list[i];
    nb_rx = rte_eth_rx_burst((uint8_t) portid, 0, pkts_burst, MAX_PKT_BURST);
    port_statistics[portid].rx += nb_rx;

    for (j = 0; j < nb_rx; j++) {
        m = pkts_burst[j];
        rte_prefetch0(rte_pktmbuf_mtod(m, void *));
        lsi_simple_forward(m, portid);
    }
}

Packets are read in a burst of size MAX_PKT_BURST. The rte_eth_rx_burst() function writes the mbuf pointers in a local table and returns the number of available mbufs in the table.

Then, each mbuf in the table is processed by the lsi_simple_forward() function. The processing is very simple: the TX port is derived from the RX port, and the source and destination MAC addresses are replaced.

Note

In the following code, the two lines for calculating the output port require some explanation. If portid is even, the first line does nothing (since portid & 1 is 0), and the second line adds 1. If portid is odd, the first line subtracts one and the second line does nothing. Therefore, 0 goes to 1 and 1 to 0, 2 goes to 3 and 3 to 2, and so on: the pairing is equivalent to toggling the lowest bit, dst_port = portid ^ 1.

static void
lsi_simple_forward(struct rte_mbuf *m, unsigned portid)
{
    struct ether_hdr *eth;
    void *tmp;
    unsigned dst_port = lsi_dst_ports[portid];

    eth = rte_pktmbuf_mtod(m, struct ether_hdr *);

    /* 02:00:00:00:00:xx */

    tmp = &eth->d_addr.addr_bytes[0];

    *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);

    /* src addr */
    ether_addr_copy(&lsi_ports_eth_addr[dst_port], &eth->s_addr);

    lsi_send_packet(m, dst_port);
}

Then, the packet is sent using the lsi_send_packet(m, dst_port) function. For this test application, the processing is exactly the same for all packets arriving on the same RX port. Therefore, it would have been possible to call the lsi_send_burst() function directly from the main loop to send all the received packets on the same TX port using the burst-oriented send function, which is more efficient.

However, in real-life applications (such as L3 routing), packet N is not necessarily forwarded on the same port as packet N-1. The application is implemented to illustrate that, so the same approach can be reused in a more complex application.

The lsi_send_packet() function stores the packet in a per-lcore and per-txport table. If the table is full, the whole packets table is transmitted using the lsi_send_burst() function:

/* Send the packet on an output interface */

static int
lsi_send_packet(struct rte_mbuf *m, uint8_t port)
{
    unsigned lcore_id, len;
    struct lcore_queue_conf *qconf;

    lcore_id = rte_lcore_id();
    qconf = &lcore_queue_conf[lcore_id];
    len = qconf->tx_mbufs[port].len;
    qconf->tx_mbufs[port].m_table[len] = m;
    len++;

    /* enough pkts to be sent */

    if (unlikely(len == MAX_PKT_BURST)) {
        lsi_send_burst(qconf, MAX_PKT_BURST, port);
        len = 0;
    }
    qconf->tx_mbufs[port].len = len;

    return 0;
}
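The lsi_send_burst() function itself is not reproduced in this guide. A hedged sketch of what it does, modeled on the equivalent helper in the L2 Forwarding sample (port_statistics being the sample's per-port counters):

/* Transmit the queued packets for a port; free any that the
 * driver could not accept. */
static int
lsi_send_burst(struct lcore_queue_conf *qconf, unsigned n, uint8_t port)
{
    struct rte_mbuf **m_table;
    unsigned ret;
    unsigned queueid;

    queueid = qconf->tx_queue_id;
    m_table = (struct rte_mbuf **)qconf->tx_mbufs[port].m_table;

    ret = rte_eth_tx_burst(port, (uint16_t)queueid, m_table, (uint16_t)n);
    port_statistics[port].tx += ret;
    if (unlikely(ret < n)) {
        port_statistics[port].dropped += (n - ret);
        do {
            rte_pktmbuf_free(m_table[ret]); /* free unsent mbufs */
        } while (++ret < n);
    }

    return 0;
}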

To ensure that no packets remain in the tables, each lcore drains its TX queues in its main loop. This technique introduces some latency when there are not many packets to send; however, it improves performance:

cur_tsc = rte_rdtsc();

/*
 * TX burst queue drain
 */

diff_tsc = cur_tsc - prev_tsc;

if (unlikely(diff_tsc > drain_tsc)) {
    /* this could be optimized (use queueid instead of
     * portid), but it is not called so often */

    for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
        if (qconf->tx_mbufs[portid].len == 0)
            continue;

        lsi_send_burst(&lcore_queue_conf[lcore_id],
                qconf->tx_mbufs[portid].len, (uint8_t) portid);
        qconf->tx_mbufs[portid].len = 0;
    }

    /* if timer is enabled */

    if (timer_period > 0) {
        /* advance the timer */

        timer_tsc += diff_tsc;

        /* if timer has reached its timeout */

        if (unlikely(timer_tsc >= (uint64_t) timer_period)) {
            /* do this only on master core */

            if (lcore_id == rte_get_master_lcore()) {
                print_stats();

                /* reset the timer */
                timer_tsc = 0;
            }
        }
    }
    prev_tsc = cur_tsc;
}
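drain_tsc is the drain interval expressed in TSC cycles. A hedged sketch of how it is typically derived in the sample applications (BURST_TX_DRAIN_US is a constant in the sample's source; US_PER_S comes from rte_cycles.h):

/* convert a drain interval in microseconds into TSC cycles */
const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1)
        / US_PER_S * BURST_TX_DRAIN_US;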