[netsa-tools-discuss] Problems with duration and flow start times in Juniper

Bartosz Iwanski biwanski at gmail.com
Fri Sep 13 05:19:38 EDT 2019


Hello Mark,
Thanks for your reply. Honestly, the most problematic aspect of the entire
issue is that, due to the repeating nature of the flows, accurately
calculating bandwidth usage over longer periods of time becomes difficult -
the bytes/packets from consecutive 'subflows' get distributed over the entire
'main flow' duration and we lose granularity.
After some consideration I was personally leaning towards a somewhat more
brute-force approach: modifying rwflowpack so that a maximum flow length
(max_duration) can be set for a given probe, and if a flow exceeds it, the
start time is adjusted to end_time - max_duration before the record is
stored - with an on/off switch via a 'quirk' setting in the configuration.
Would that approach be doable? I would try hacking on it myself, but
honestly my C skills are a bit limited and the codebase seems daunting. Do
you have any tips on where to start looking?
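
To illustrate what I mean, here is a rough, untested sketch of the adjustment
done as a PySiLK post-processing pass instead of a change inside rwflowpack
(the file names and the max_duration value are just placeholders):

import datetime
import silk

# Placeholder: the active-flow-timeout configured on the exporter.
MAX_DURATION = datetime.timedelta(minutes=30)

in_file = silk.silkfile_open("juniper-flows.rw", silk.READ)     # placeholder path
out_file = silk.silkfile_open("adjusted-flows.rw", silk.WRITE)  # placeholder path

for rec in in_file:
    if rec.duration > MAX_DURATION:
        end = rec.etime
        # Pull the start time forward so the record covers only the last
        # MAX_DURATION of the reported interval...
        rec.stime = end - MAX_DURATION
        # ...and pin the duration so the end time stays where it was.
        rec.duration = MAX_DURATION
    out_file.write(rec)

in_file.close()
out_file.close()

Of course the real problem is doing this at packing time so the records end
up in the right hourly files, which is why I was thinking of rwflowpack.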

 regards, Bartosz

On Thu, 12 Sep 2019 at 00:31, Mark Thomas <mthomas at cert.org> wrote:

> Thank you for the interesting question.
>
> I believe that any processing of these records would need to occur after
> rwflowpack has written the records to disk.  Attempting to maintain records
> in memory waiting for a potential continuation record to appear could be
> resource prohibitive.
>
> On the other hand, since SiLK stores a record by its start time, modifying
> the start time may require moving the record to a different file.  I
> suppose the data could be stored once, modified if necessary, and then
> repacked and stored again, but that is definitely not optimal.
>
> Do the Juniper records contain an information element that indicates the
> relationship of these records to each other?  If so, we may be able to
> incorporate that value into SiLK's "attributes" field that marks records
> which were closed/opened due to an active timeout.
>
> With appropriate values in the attributes field, the rwcombine tool could
> be used to join the records into a single record.  (This is not what you
> suggested, but it would make the data less "unusual" in SiLK's world.
> rwcombine expects Cisco-like timestamps on continuation records, so it may
> not like the records from Juniper.)
>
> Alternatively, using "rwsort | rwgroup" to group the data by the
> five-tuple and start time, a simple PySiLK script should be able to modify
> the start times of the second through final records in each group.
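>
> Completely untested, but something along these lines is what I have in
> mind for that PySiLK script (the file names are placeholders, and it
> assumes the records have already been sorted by the five-tuple and start
> time, i.e. the rwsort step):
>
> import silk
>
> in_file = silk.silkfile_open("sorted-flows.rw", silk.READ)    # placeholder
> out_file = silk.silkfile_open("fixed-flows.rw", silk.WRITE)   # placeholder
>
> prev_key = None
> prev_etime = None
> for rec in in_file:
>     # Records sharing the five-tuple and start time belong to the same
>     # long-running Juniper flow.
>     key = (rec.sip, rec.dip, rec.sport, rec.dport, rec.protocol, rec.stime)
>     if key == prev_key:
>         # Continuation record: start it where the previous record ended,
>         # keeping its own end time.
>         end = rec.etime
>         rec.stime = prev_etime
>         rec.duration = end - prev_etime
>     prev_key = key
>     prev_etime = rec.etime
>     out_file.write(rec)
>
> in_file.close()
> out_file.close()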
>
> Thanks again for your question.  I am sorry I do not have a simple answer
> for you.
>
> -Mark
>
> -----Original Message-----
> From: Bartosz Iwanski <biwanski at gmail.com>
> Date: Thu, 29 Aug 2019 12:30:50 +0200
> To: <netsa-tools-discuss at cert.org>
> Subject: [netsa-tools-discuss] Problems with duration and flow start times
>         in Juniper
>
> Hello,
> I have encountered an interesting and problematic quirk in Juniper's way
> of handling long-running flows.
> There is an option, active-flow-timeout, that makes the device export
> data about active flows, similar to
> ip flow-cache timeout active
> on Cisco devices.
> There is, however, an issue with what Juniper actually exports - the start
> of the flow does not change in the updates as it does on Cisco. Cisco
> exports a series of flows where, when one ends, the next one starts, until
> the device sees the end of the traffic.
> Juniper exports a series of flows that all have the same start time but
> different end times, like this:
>
>          sIP|        dIP| packets|     bytes|                  sTime| duration|                  eTime|
>  192.168.1.1|   10.1.1.1|     927|    138680|2019/08/28T21:49:20.917|35704.926|2019/08/29T07:44:25.843|
>  192.168.1.1|   10.1.1.1|     953|    149352|2019/08/28T21:49:20.917|36004.401|2019/08/29T07:49:25.318|
>  192.168.1.1|   10.1.1.1|     998|    192608|2019/08/28T21:49:20.917|36304.894|2019/08/29T07:54:25.811|
>  192.168.1.1|   10.1.1.1|     979|    181192|2019/08/28T21:49:20.917|36604.890|2019/08/29T07:59:25.807|
>  192.168.1.1|   10.1.1.1|     949|    149784|2019/08/28T21:49:20.917|36904.572|2019/08/29T08:04:25.489|
>  192.168.1.1|   10.1.1.1|     733|    107448|2019/08/28T21:49:20.917|37167.538|2019/08/29T08:08:48.455|
>  192.168.1.1|   10.1.1.1|     700|    116048|2019/08/28T21:49:20.917|37504.815|2019/08/29T08:14:25.732|
>  192.168.1.1|   10.1.1.1|     926|    138432|2019/08/28T21:49:20.917|37804.504|2019/08/29T08:19:25.421|
>  192.168.1.1|   10.1.1.1|     931|    140568|2019/08/28T21:49:20.917|38104.520|2019/08/29T08:24:25.437|
>
> What is interesting is that the bytes and packets fields are not
> cumulative - they are generated per time interval rather than added to the
> previous values.
>
> My question is: is there a way to make SiLK handle this weird behavior -
> and maybe modify the start times of received flows to, let's say,
> 'end-time - active-flow-timeout', so that they are stored the way
> Cisco-generated flows would be?
>