[netsa-tools-discuss] yafzcbalance

Emily Sarneso ecoff at cert.org
Tue Feb 28 10:41:22 EST 2017


Hello Dino,

You are seeing yafzcbalance at 100% CPU load because of the time pulse thread that is constantly checking the system time in order to timestamp the incoming packets.  You could try using the -t option to bind that thread to a particular core, but it is normal to see 100% load on that core.  The packet hashing and distribution should only take 20-30%.
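As an illustration (the core number here is arbitrary; pick a core that is otherwise idle on your system), the time-pulse thread can be pinned with -t while the rest of the invocation stays as in Dino's setup:

```shell
# Pin the time-pulse thread to core 3 (illustrative choice).
# That core will still show ~100% load from the busy time check,
# but it will no longer contend with the yaf worker processes.
yafzcbalance --in zc:enp1s0f0@0,zc:enp1s0f0@1,zc:enp1s0f0@2,zc:enp1s0f0@3 \
    -c 100 -n 4 -t 3 -d
```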

I honestly don’t know whether it is better to run one yafzcbalance or four, as I haven’t tested collecting on 4 separate interfaces.  The only issue I can foresee with 4 separate yafzcbalances is that if packets from the same flow arrive on different interfaces, they won’t be assembled into a single flow record, which can make analysis a bit harder.

I did find this comment:

https://github.com/ntop/PF_RING/issues/78

which may be of interest to you.  yafzcbalance does not have an option to switch from passive wait to active wait, but you could test whether it makes a difference by changing the active_wait argument passed to pfring_zc_run_balancer from 0 to 1 and rebuilding.  If you see a performance change, you may be able to make similar changes to your system as was done in the above link.
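For reference, here is a sketch of what that one-argument change might look like at the pfring_zc_run_balancer call site.  The argument names and order below follow PF_RING 6.x's pfring_zc.h as I recall it, and the surrounding variable names are placeholders, not yafzcbalance's actual identifiers — check your installed header and the yafzcbalance source before editing:

```c
/* Sketch only -- variable names are illustrative placeholders.
 * The second-to-last argument is active_wait:
 *   0 = passive wait (sleep when idle, lower CPU),
 *   1 = active wait (busy-poll, higher CPU, lower latency). */
pfring_zc_run_balancer(in_queues, out_queues,
                       num_in_queues, num_out_queues,
                       working_pool,
                       recv_policy,
                       NULL,              /* idle callback */
                       distribution_func, /* packet hashing function */
                       user_data,
                       1,                 /* active_wait: was 0 */
                       worker_core_id);
```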

Hope that helps,

Emily


--------------------
Emily Sarneso
CMU/SEI/CERT
ecoff at cert.org




> On Feb 25, 2017, at 7:26 PM, Dino Rezes <dire at fa.uni-tuebingen.de> wrote:
> 
> Hi Chris,
> 
> I'm using an Intel 82599ES 10G card with the ixgbe_zc driver on Ubuntu Xenial (4.4.0-63-generic).
> The PF_RING Version is 6.5.0.
> 
> For the stats I used the signal SIGUSR1 or --stats 10.
> Here is an example of the stats output:
> 
> [2017-02-25 02:10:23] =========================
> Absolute Stats: Recv 1'298'559 pkts (3'568 drops) - Forwarded 877'900 pkts (166'743 drops)
> [2017-02-25 02:10:23] =========================
> => 420659
> 
> [2017-02-25 02:10:53] =========================
> Absolute Stats: Recv 3'789'236 pkts (3'568 drops) - Forwarded 3'085'470 pkts (166'743 drops)
> [2017-02-25 02:10:53] =========================
> => 703766
> 
> [2017-02-25 02:11:33] =========================
> Absolute Stats: Recv 6'602'472 pkts (3'568 drops) - Forwarded 5'467'957 pkts (166'743 drops)
> [2017-02-25 02:11:33] =========================
> => 1134515
> 
> And here is the output of ps:
> root     19901  108  0.0 639680  6420 ?        Ssl  03:09   1:46 yafzcbalance --in zc:enp1s0f0 at 0,zc:enp1s0f0 at 1,zc:enp1s0f0 at 2,zc:enp1
> root     19924  0.0  0.0 253252  5160 ?        Ssl  03:09   0:00 /usr/local/sbin/rwflowpack --sensor-configuration=/data/sensor.conf
> root     19939  2.6  0.1 578952 22964 ?        Ss   03:10   0:02 /usr/local/bin/yaf -d --live zc --in 100:0 --ipfix tcp --out localh
> root     19941  2.8  0.0 569268 13344 ?        Ss   03:10   0:02 /usr/local/bin/yaf -d --live zc --in 100:1 --ipfix tcp --out localh
> root     19943  2.8  0.0 569236 13320 ?        Ss   03:10   0:02 /usr/local/bin/yaf -d --live zc --in 100:2 --ipfix tcp --out localh
> root     19945  2.6  0.0 569268 13168 ?        Ss   03:10   0:02 /usr/local/bin/yaf -d --live zc --in 100:3 --ipfix tcp --out localh
> 
> Thank you for your help!
> Dino
> 
> 
> Am 24.02.2017 um 21:54 schrieb Chris Inacio:
>> Hi Dino,
>> 
>> What hardware are you using with the PF_RING?
>> 
>> Have you looked at the stats output of the yafzcbalance tool?  (I’m not sure how the hash is performing in your case to know why the one CPU is so loaded.)
>> 
>> Thanks,
>> -- 
>> Chris Inacio
>> inacio at cert.org
>> 
>> 
>> From: Dino Rezes <dire at fa.uni-tuebingen.de>
>> Date: February 22, 2017 at 6:24:25 PM
>> To: netsa-tools-discuss at cert.org <netsa-tools-discuss at cert.org>
>> Subject:  [netsa-tools-discuss] yafzcbalance 
>> 
>>> Hello, 
>>> 
>>> we are testing a setup with ZC-Drivers and are interested in using yaf. 
>>> When I start yaf with pf_ring everything works fine and I have a quite 
>>> low cpu load (under 7%). 
>>> 
>>> But using yafzcbalance and 4 yaf instances one CPU-core is 100% busy. 
>>> Is there something wrong in the start of yafzcbalance? 
>>> Shouldn't the yaf instances do the work, with yafzcbalance just 
>>> distributing the flows to the yaf workers? 
>>> 
>>> I start yafzcbalance with: 
>>> yafzcbalance --in 
>>> zc:enp1s0f0 at 0,zc:enp1s0f0 at 1,zc:enp1s0f0 at 2,zc:enp1s0f0 at 3 -c 100 -n 4 -d 
>>> 
>>> and yaf with: 
>>> yaf -d --live zc --in 100:${i} --ipfix tcp --out localhost --ipfix-port 
>>> 18000 --log /var/log/yaf/log/yaf-${i}.log --verbose --silk --pidfile 
>>> /var/log/yaf/run/yaf-${i}.pid 
>>> 
>>> I also noticed a difference between the packets received and the 
>>> packets forwarded by yafzcbalance, and this gap grows over time. 
>>> Then I started 4 instances of yafzcbalance (one for each NIC-queue) with 
>>> one yaf instance each. 
>>> This seems to work and doesn't lose packets, but I have 4 completely 
>>> busy CPU cores. 
>>> 
>>> Best regards, 
>>> Dino 
>>> 
> 
> 



More information about the netsa-tools-discuss mailing list