【Installation】A Step-by-Step Guide to Installing the TRex Traffic Generator -- Based on a VMware Virtual Machine (Ubuntu 18.04 @ Intel 82545EM)

2022-11-10 21:00:10

Preface

Since you already know about TRex and are searching for an installation tutorial, you presumably have some background knowledge (at least you know what you need), so this article keeps the introduction to TRex brief.

This post is mainly a record of my TRex installation (version v3.0.0). I have collected the problems I ran into along with the way I solved them, and I hope it helps.

Introduction

TRex is a high-performance traffic generator open-sourced by Cisco that uses DPDK to send packets.

Its main working principle can be summarized as follows:

  1. Use scapy to build packet templates, or read packet templates from a pcap file;
  2. Use DPDK to send the packets (rewriting only the fields specified to vary); a minimal profile sketch follows below.

It thus combines the convenience of building flows in Python with the high performance of DPDK packet transmission.
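To make this concrete, here is a minimal sketch of a TRex stateless traffic profile: scapy builds the packet template, and TRex/DPDK replays it at the requested rate. It assumes the trex_stl_lib.api Python module that ships inside the TRex release (the exact import path can vary between versions); the addresses, payload and rate are purely illustrative.

from trex_stl_lib.api import *

class STLUdpSimple(object):
    """A single continuous UDP stream built from a scapy packet template."""

    def get_streams(self, direction=0, **kwargs):
        # scapy describes WHAT to send (the packet template)
        base_pkt = Ether() / IP(src="16.0.0.1", dst="48.0.0.1") / UDP(sport=1025, dport=12)
        pad = max(0, 60 - len(base_pkt)) * 'x'        # pad to a minimum-size frame
        # the stream mode describes HOW to send it; DPDK does the actual transmission
        return [STLStream(packet=STLPktBuilder(pkt=base_pkt / pad),
                          mode=STLTXCont(pps=1000))]

# every profile module must expose register()
def register():
    return STLUdpSimple()

A profile like this is what you later load from the TRex console with start -f <file>.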

Installing TRex

TRex is Linux software, so there are really only two installation environments: a physical machine or a virtual machine.

This article covers installing TRex in a virtual machine.

A virtual machine, in turn, can be set up in two ways:

  • Local virtual machine

    A VM built with VMware on your local machine

  • Remote server virtual machine

    A remote VM built on VMware ESXi

The biggest difference between the two is how NICs are added: on VMware ESXi you need to use ifconfig so the system recognizes the NIC; see [7] for details.

The rest of the process is largely the same; this article focuses on installing TRex on Ubuntu 18.04 in local VMware.

If you run into problems installing on VMware ESXi, feel free to leave a comment.

Installation on a local virtual machine

For how to set up a local VMware VM (including switching apt sources, etc.), please Google it or see [1].

I used VMware Workstation Pro 16, with the Ubuntu 18.04 desktop image.

Virtual machine configuration

Here is the VM configuration I used (if the image is broken, see the table):

Device                    Summary
Memory                    8 GB
Processors                4
Hard disk (SCSI)          35 GB
CD/DVD 2 (SATA)           Auto detect
CD/DVD (SATA)             Auto detect
Floppy                    Auto detect
Network adapter           NAT
Network adapter 2         NAT
Network adapter 3         NAT
USB controller            Present
Sound card                Auto detect
Printer                   Present
Display                   Auto detect

Note: the tests after installation need multiple NICs, so I added 3 NICs here: one serves as the main NIC for the Linux kernel, and the other two will be bound to the uio driver for testing.

NICs can be added to or removed from the VM at any time, as long as they stay in NAT mode.

Once the VM is set up, I recommend connecting to it with VS Code, since the configuration file edits later on are much more convenient that way. See: https://www.cnblogs.com/DAYceng/p/16867325.html

Of course, you can also work directly in the VMware console.

Getting TRex

Create a folder in a directory of your choice to hold the installation files

mkdir trex
cd trex

Download the latest TRex release and extract it

root@ubuntu:/root/trex# wget --no-check-certificate https://trex-tgn.cisco.com/trex/release/latest

root@ubuntu:/root/trex# tar -zxvf latest

Note: downloading the latest file from inside the VM is very slow, even through a proxy. You can instead download https://trex-tgn.cisco.com/trex/release/latest on your local machine and then copy the file to the VM.
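For example, copying the downloaded file from your host into the VM could look like this (a sketch; the user name, VM IP address and target directory are placeholders for your own setup):

# run on the host machine, not inside the VM
scp ./latest root@192.168.153.128:/root/trex/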

This yields the following directory contents

root@ubuntu:/root/trex# ls
latest  v3.00

Enter the extracted folder and use the script to list the currently available NICs

root@ubuntu:/root/trex# cd v3.00
root@ubuntu:/root/trex/v3.00#
root@ubuntu:/root/trex/v3.00# sudo ./dpdk_setup_ports.py -s
Network devices using DPDK-compatible driver
============================================
<none>

Network devices using kernel driver
===================================
0000:02:01.0 '82545EM Gigabit Ethernet Controller (Copper)' if=ens33 drv=e1000 unused=igb_uio,vfio-pci,uio_pci_generic *Active*
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper)' drv=e1000 unused=igb_uio,vfio-pci,uio_pci_generic
0000:02:07.0 '82545EM Gigabit Ethernet Controller (Copper)' drv=e1000 unused=igb_uio,vfio-pci,uio_pci_generic

Other network devices
=====================
<none>

root@ubuntu:/root/trex/v3.00# 

If you added the extra NICs earlier, you will see 3 NICs here (if you haven't, you can still add them now). *Active* marks the NIC the Linux kernel is currently using; the other two are inactive (leave them inactive for now). See [7] if you need to activate one.

Installing DPDK

This part is essentially the same as a standalone DPDK installation.

As mentioned in the introduction, TRex can roughly be split into two parts: one builds the packets (based on scapy), the other sends them (based on DPDK).

Installing the gcc toolchain

Let's set up the DPDK part first, starting with the gcc toolchain

root@ubuntu:/root/trex/v3.00# sudo apt install build-essential
root@ubuntu:/root/trex/v3.00# sudo apt install make
root@ubuntu:/root/trex/v3.00# sudo apt-get install libnuma-dev

Downloading DPDK

root@ubuntu:/root/trex/v3.00# wget http://fast.dpdk.org/rel/dpdk-18.11.9.tar.xz
root@ubuntu:/root/trex/v3.00# tar xvJf dpdk-18.11.9.tar.xz
root@ubuntu:/root/trex/v3.00# cd dpdk-stable-18.11.9/
root@ubuntu:/root/trex/v3.00/dpdk-stable-18.11.9#

Setting environment variables

Set the environment variables inside the DPDK directory, otherwise the script-based DPDK build later will fail.

root@ubuntu:/root/trex/v3.00/dpdk-stable-18.11.9# export RTE_SDK=`pwd`
root@ubuntu:/root/trex/v3.00/dpdk-stable-18.11.9# export DESTDIR=`pwd`
root@ubuntu:/root/trex/v3.00/dpdk-stable-18.11.9# export RTE_TARGET=x86_64-default-linuxapp-gcc

Loading the uio driver

# load the uio driver
root@ubuntu:/root/trex/v3.00/dpdk-stable-18.11.9# modprobe uio
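You can quickly confirm that the module is present before continuing:

lsmod | grep uio    # should print a line for the uio module if it loaded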

Installing DPDK with the setup script

Use the dpdk-setup.sh script to perform the installation

root@ubuntu:/root/trex/v3.00/dpdk-stable-18.11.9# ./usertools/dpdk-setup.sh
------------------------------------------------------------------------------
 RTE_SDK exported as /root/dpdk
------------------------------------------------------------------------------
----------------------------------------------------------
 Step 1: Select the DPDK environment to build
----------------------------------------------------------
[1] arm64-armv8a-linuxapp-clang
[2] arm64-armv8a-linuxapp-gcc
[3] arm64-dpaa2-linuxapp-gcc
[4] arm64-dpaa-linuxapp-gcc
[5] arm64-stingray-linuxapp-gcc
[6] arm64-thunderx-linuxapp-gcc
[7] arm64-xgene1-linuxapp-gcc
[8] arm-armv7a-linuxapp-gcc
[9] i686-native-linuxapp-gcc
[10] i686-native-linuxapp-icc
[11] ppc_64-power8-linuxapp-gcc
[12] x86_64-native-bsdapp-clang
[13] x86_64-native-bsdapp-gcc
[14] x86_64-native-linuxapp-clang
[15] x86_64-native-linuxapp-gcc
[16] x86_64-native-linuxapp-icc
[17] x86_x32-native-linuxapp-gcc

----------------------------------------------------------
 Step 2: Setup linuxapp environment
----------------------------------------------------------
[18] Insert IGB UIO module
[19] Insert VFIO module
[20] Insert KNI module
[21] Setup hugepage mappings for non-NUMA systems
[22] Setup hugepage mappings for NUMA systems
[23] Display current Ethernet/Crypto device settings
[24] Bind Ethernet/Crypto device to IGB UIO module
[25] Bind Ethernet/Crypto device to VFIO module
[26] Setup VFIO permissions

----------------------------------------------------------
 Step 3: Run test application for linuxapp environment
----------------------------------------------------------
[27] Run test application ($RTE_TARGET/app/test)
[28] Run testpmd application in interactive mode ($RTE_TARGET/app/testpmd)

----------------------------------------------------------
 Step 4: Other tools
----------------------------------------------------------
[29] List hugepage info from /proc/meminfo

----------------------------------------------------------
 Step 5: Uninstall and system cleanup
----------------------------------------------------------
[30] Unbind devices from IGB UIO or VFIO driver
[31] Remove IGB UIO module
[32] Remove VFIO module
[33] Remove KNI module
[34] Remove hugepage mappings

[35] Exit Script

Option:

Step one

Choose the build that matches your VM environment. For example, my VM is a 64-bit Intel environment, so I choose [15] x86_64-native-linuxapp-gcc.

...
Installation in /root/dpdk/ complete
------------------------------------------------------------------------------
 RTE_TARGET exported as x86_64-native-linuxapp-gcc
------------------------------------------------------------------------------

Press enter to continue ...

Note: if the output at this point looks like the following instead (error "Installation cannot run with T defined and DESTDIR undefined"):

  INSTALL-APP dpdk-test-eventdev
  INSTALL-MAP dpdk-test-eventdev.map
Build complete [x86_64-native-linuxapp-gcc]
Installation cannot run with T defined and DESTDIR undefined
------------------------------------------------------------------------------
 RTE_TARGET exported as x86_64-native-linuxapp-gcc
--------------------------------------------

Press enter to continue ...

set the environment variables first (as shown above) and then run [15] again.

Step two

  • Choose [18] to load the igb_uio module
Unloading any existing DPDK UIO module

Loading DPDK UIO module

Press enter to continue ...
  • Choose [19] to load the VFIO module
Unloading any existing VFIO module

Loading VFIO module

chmod /dev/vfio

OK

Press enter to continue ...
  • Choose [20] to load the KNI module
Unloading any existing DPDK KNI module

Loading DPDK KNI module

Press enter to continue ...
  • Choose [21] to create hugepages
Option: 21

Removing currently reserved hugepages
Unmounting /mnt/huge and removing directory

  Input the number of 2048kB hugepages
  Example: to have 128MB of hugepages available in a 2MB huge page system,
  enter '64' to reserve 64 * 2MB pages
Number of pages: 1024
Reserving hugepages
Creating /mnt/huge and mounting as hugetlbfs

Press enter to continue ...
  • Choose [24] to bind the PCI NICs
Option: 24


Network devices using kernel driver
===================================
0000:02:01.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' if=ens33 drv=e1000 unused=igb_uio,vfio-pci *Active*
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' if=ens38 drv=e1000 unused=igb_uio,vfio-pci 
0000:02:07.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' if=ens39 drv=e1000 unused=igb_uio,vfio-pci 

No 'Crypto' devices detected
============================

No 'Eventdev' devices detected
==============================

No 'Mempool' devices detected
=============================

No 'Compress' devices detected
==============================

Enter PCI address of device to bind to IGB UIO driver: 02:06.0
OK

Press enter to continue ...

======================================================================
======================================================================
Option: 24


Network devices using DPDK-compatible driver
============================================
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' drv=igb_uio unused=e1000,vfio-pci

Network devices using kernel driver
===================================
0000:02:01.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' if=ens33 drv=e1000 unused=igb_uio,vfio-pci *Active*
0000:02:07.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' if=ens39 drv=e1000 unused=igb_uio,vfio-pci 

No 'Crypto' devices detected
============================

No 'Eventdev' devices detected
==============================

No 'Mempool' devices detected
=============================

No 'Compress' devices detected
==============================

Enter PCI address of device to bind to IGB UIO driver: 02:07.0
OK

Press enter to continue ...

A PCI NIC showing drv=igb_uio means its binding is complete (an equivalent command-line method is sketched below).
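If you prefer the command line over the interactive menu, the same status check and binding can be done with the dpdk-devbind.py helper shipped in the same usertools directory of DPDK 18.11 (the PCI addresses below are the ones from my VM; substitute your own):

./usertools/dpdk-devbind.py --status
./usertools/dpdk-devbind.py --bind=igb_uio 02:06.0 02:07.0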

Step three

Test by choosing [27] and [28]

Option: 27


  Enter hex bitmask of cores to execute test app on
  Example: to execute app on cores 0 to 7, enter 0xff
bitmask: 0x3
Launching app
sudo: x86_64-default-linuxapp-gcc/app/test: command not found

Press enter to continue ...
Option: 28


  Enter hex bitmask of cores to execute testpmd app on
  Example: to execute app on cores 0 to 7, enter 0xff
bitmask: 0x3
Launching app
sudo: x86_64-default-linuxapp-gcc/app/testpmd: command not found

Both tests may report a "command not found" error.

For [27] you can ignore it. If [28] reports it as well, exit the script (choose [35]), go to /[your DPDK directory]/x86_64-native-linuxapp-gcc/app, find testpmd there and run it directly. (A likely cause: RTE_TARGET was exported above as x86_64-default-linuxapp-gcc while the build in [15] targeted x86_64-native-linuxapp-gcc, so the script looks for the binaries in the wrong directory.) If nothing unexpected happens you will get the following output

root@ubuntu:/root/trex/v3.00/dpdk-stable-18.11.9/x86_64-native-linuxapp-gcc/app# ./testpmd
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:02:01.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:06.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:07.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 00:0C:29:AE:BF:4D
Configuring Port 1 (socket 0)
Port 1: 00:0C:29:AE:BF:43
Checking link statuses...
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=256 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=256 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
Press enter to exit
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 44             RX-dropped: 0             RX-total: 44
  TX-packets: 34             TX-dropped: 0             TX-total: 34
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 59             RX-dropped: 0             RX-total: 59
  TX-packets: 24             TX-dropped: 0             TX-total: 24
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 103            RX-dropped: 0             RX-total: 103
  TX-packets: 58             TX-dropped: 0             TX-total: 58
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Stopping port 0...
Stopping ports...
Done

Stopping port 1...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
Done

Shutting down port 1...
Closing ports...
Done

Bye...

Problems you may encounter when running testpmd

1. HugePage capacity problem

When running the tests or testpmd, you may hit the following problem

root@ubuntu:/root/trex/v3.00/dpdk-stable-18.11.9/x86_64-native-linuxapp-gcc/app# ./testpmd
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-2048kB
EAL: No free hugepages reported in hugepages-2048kB
EAL: FATAL: Cannot get hugepage information.
EAL: Cannot get hugepage information.
PANIC in main():
Cannot init EAL
5: [./testpmd(_start+0x29) [0x498829]]
4: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x7f8a0fee0830]]
3: [./testpmd(main+0xc48) [0x48f528]]
2: [./testpmd(__rte_panic+0xbb) [0x47eb09]]
1: [./testpmd(rte_dump_stack+0x2b) [0x5c8a1b]]
Aborted (core dumped)

This means there are not enough hugepages. First check the system memory status

root@ubuntu:/root/trex/v3.00/dpdk-stable-18.11.9/x86_64-native-linuxapp-gcc/app# cat /proc/meminfo | grep Huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:    1024
HugePages_Free:      775
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:         2097152 kB

If there are not enough, adjust the count as needed

root@ubuntu:/root/trex/v3.00/dpdk-stable-18.11.9/x86_64-native-linuxapp-gcc/build/kernel/linux# echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
root@ubuntu:/root/trex/v3.00/dpdk-stable-18.11.9/x86_64-native-linuxapp-gcc/build/kernel/linux# cat /proc/meminfo | grep Huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:    1448
HugePages_Free:     1199
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:         2965504 kB

Now run test [28] in the script again and you will get the following output

Option: 28


  Enter hex bitmask of cores to execute testpmd app on
  Example: to execute app on cores 0 to 7, enter 0xff
bitmask: 0x3
Launching app
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:02:01.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:06.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:07.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 00:0C:29:AE:BF:4D
Configuring Port 1 (socket 0)
Port 1: 00:0C:29:AE:BF:43
Checking link statuses...
Done
testpmd>

This is normal, including the "EAL: No free hugepages reported in hugepages-1048576kB" message.

After that, run testpmd under /[your DPDK directory]/x86_64-native-linuxapp-gcc/app again and you will get the same result shown earlier in this article.
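Note that a hugepage count set with echo is lost after a reboot. If you want it to persist, here is a minimal sketch using sysctl (it assumes 2 MB pages on a single NUMA node; the value 2048 is only an example):

echo "vm.nr_hugepages = 2048" >> /etc/sysctl.conf
sysctl -p    # apply immediately; the value is re-applied on every boot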

2. The 82545EM virtual NIC problem

When the VM created with VMware presents its NIC as an 82545EM, DPDK support for that NIC is problematic,

and the error is "EAL: Error reading from file descriptor 33: Input/output error".

See [5] for the detailed analysis.

To skip a compatibility check that DPDK performs on the 82545EM, we need to modify the DPDK source file igb_uio.c

The file is located at /[your DPDK directory]/kernel/linux/igb_uio/igb_uio.c

Open the file with VS Code (vim works too)

Locate the line

if (pci_intx_mask_supported(udev->pdev)) {   (around line 260)

and change it to if (pci_intx_mask_supported(udev->pdev) || 1) {, as shown below:

#endif
	/* falls through - to INTX (modified) */
	case RTE_INTR_MODE_LEGACY:
		if (pci_intx_mask_supported(udev->pdev)|| 1) {
			dev_dbg(&udev->pdev->dev, "using INTX");
			udev->info.irq_flags = IRQF_SHARED | IRQF_NO_THREAD;
			udev->info.irq = udev->pdev->irq;
			udev->mode = RTE_INTR_MODE_LEGACY;
			break;
		}

Then rebuild:

modprobe uio

and run item [15] in the script again to rebuild the environment.

That resolves the problem.
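For reference, the rebuild can also be driven from the command line instead of the menu; this is only a sketch using the paths from this article (option [15] in dpdk-setup.sh performs an equivalent build, so use whichever you prefer):

cd /root/trex/v3.00/dpdk-stable-18.11.9
make install T=x86_64-native-linuxapp-gcc DESTDIR=`pwd` -j4
rmmod igb_uio 2>/dev/null                            # unload the old module if loaded
modprobe uio
insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko    # load the patched module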

Starting TRex

Edit the configuration file

sudo cp cfg/simple_cfg.yaml  /etc/trex_cfg.yaml
sudo vim /etc/trex_cfg.yaml
- port_limit      : 2
  version         : 2
#List of interfaces. Change to suit your setup. Use ./dpdk_setup_ports.py -s to see available options
  interfaces    : ["02:06.0","02:07.0"] # add the NICs that were bound to the uio driver earlier
  port_info       :  # Port IPs. Change to suit your needs. In case of loopback, you can leave as is.
          - ip         : 1.1.1.1
            default_gw : 2.2.2.2
          - ip         : 2.2.2.2
            default_gw : 1.1.1.1
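Alternatively, TRex can generate /etc/trex_cfg.yaml for you: the dpdk_setup_ports.py script used earlier also has an interactive configuration mode (documented in the TRex manual; check the options available in your version):

root@ubuntu:/root/trex/v3.00# sudo ./dpdk_setup_ports.py -i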

Open two terminals and run the following commands to start the stateless server and the console

root@ubuntu:/root/trex/v3.00# sudo ./t-rex-64 -i
root@ubuntu:/root/trex/v3.00# sudo ./trex-console

The result looks like this (if the images are broken, see the code blocks below)

## server startup
-Per port stats table 
      ports |               0 |               1 
 -----------------------------------------------------------------------------------------
   opackets |               0 |               0 
     obytes |               0 |               0 
   ipackets |               0 |               0 
     ibytes |               0 |               0 
    ierrors |               0 |               0 
    oerrors |               0 |               0 
      Tx Bw |       0.00  bps |       0.00  bps 

-Global stats enabled 
 Cpu Utilization : 0.0  %
 Platform_factor : 1.0  
 Total-Tx        :       0.00  bps  
 Total-Rx        :       0.00  bps  
 Total-PPS       :       0.00  pps  
 Total-CPS       :       0.00  cps  

 Expected-PPS    :       0.00  pps  
 Expected-CPS    :       0.00  cps  
 Expected-BPS    :       0.00  bps  

 Active-flows    :        0  Clients :        0   Socket-util : 0.0000 %    
 Open-flows      :        0  Servers :        0   Socket :        0 Socket/Clients :  -nan 
 drop-rate       :       0.00  bps   
 current time    : 63.5 sec  
 test duration   : 0.0 sec  
 *** TRex is shutting down - cause: 'CTRL + C detected'
 All cores stopped !! 
Killing Scapy server... Scapy server is killed
root@ubuntu:/root/trex/v3.00# 
## console startup
Using 'python3' as Python interpeter


Connecting to RPC server on localhost:4501                   [SUCCESS]


Connecting to publisher server on localhost:4500             [SUCCESS]


Acquiring ports [0, 1]:                                      [SUCCESS]


Server Info:

Server version:   v3.00 @ STL
Server mode:      Stateless
Server CPU:       1 x Intel(R) Xeon(R) CPU E5-2666 v3 @ 2.90GHz
Ports count:      2 x 1Gbps @ 82545EM Gigabit Ethernet Controller (Copper)

-=TRex Console v3.0=-

Type 'help' or '?' for supported actions

trex>quit
Shutting down RPC client

root@ubuntu:/home/ag/trex/v3.00#
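Once the console is connected (before quitting), a quick smoke test could look like the following. These are standard TRex console commands, and stl/udp_1pkt_simple.py is one of the example profiles shipped with the release; the rate and port number are illustrative:

trex>start -f stl/udp_1pkt_simple.py -m 1kpps -p 0
trex>tui       # live statistics view, press q to leave it
trex>stop -a   # stop traffic on all ports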

That completes the TRex installation.

But this is only the "hello world"; for more usage, see [0].

Thanks to all the communities for sharing; the references are listed at the end of this article.

References and notes

[0] TRex official documentation: https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_hardware_recommendations

[1]https://www.cnblogs.com/hanyanling/p/13364204.html

[2]http://www.isimble.com/2018/11/15/dpdk-setup/

[3]https://blog.51cto.com/feishujun/5573292

[4]https://dev.to/dannypsnl/dpdk-eal-input-output-error-1kn4

[5]https://blog.csdn.net/Longyu_wlz/article/details/121443906

[6]https://blog.csdn.net/yb890102/article/details/127587910

[7]

Note: inactive NICs do not show up with ifconfig in the terminal. Use ip addr to find the NIC first, then activate it; afterwards it will show as *Active*.

# activate the NIC
ifconfig [NIC name] up
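For example (ens38 is only illustrative; use whatever name ip addr shows on your system):

ip addr              # lists every NIC, including the inactive ones ifconfig omits
ifconfig ens38 up    # bring the NIC up; it should then show as *Active*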