CPUG: The Check Point User Group

Resources for the Check Point Community, by the Check Point Community.



Thread: very slow intervaln communication via checkpoint

#21 | Join Date: 2009-04-30 | Location: Colorado, USA | Posts: 2,252 | Rep Power: 14

    Quote Originally Posted by alienbaby View Post
    1. What features do you have enabled on this gateway/cluster? (ie: IPS, App Control, URL filtering, DLP etc. )
    OP can run "enabled_blades" command on gateway in expert mode, output is succinct and easily copy/pasted

    2. Is the gateway and/or cluster in 64-bit mode?
    OP can run "uname -a" command from expert mode to determine this (x86 or x64)

    3. Is this Gaia?
    Based on OP's earlier posting it is Gaia R77.20.

    4. can you give the output from the command 'fw ctl affinity -l -v -r' ?
    Already provided output from "fw ctl affinity -l -a" earlier.

    5. Have you increased the ring buffers on these interfaces?
    Doubtful he has; the RX-DRP rate on the problematic interfaces is a negligible 0.000075% and 0.000082% respectively, anyway.

    6. What are the subnet masks on the affected interfaces? reference to high rx_address_match_errors counter.
    Pretty sure that counter refers to a mismatch on the destination MAC address of inbound frames; a MAC mismatch on roughly 5-7% of inbound frames doesn't seem horribly excessive to me.

    Since the OP said it was an HP Gen8 server, I'm going to guess that eth0, eth1, eth2, and eth3 are an Intel quad 1Gbit card, and eth4 and eth5 are the built-in dual Broadcom 10GB ports which are just terrible from a performance perspective.
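
    For reference, a minimal sketch of how the RX-DRP percentage and the ring buffer sizes can be checked from expert mode (eth4/eth5 are the interface names from earlier in the thread; ring-size support via ethtool varies by driver):

    # Drop counters per interface, and RX-DRP as a percentage of RX-OK for eth4/eth5
    netstat -ni
    netstat -ni | awk '$1=="eth4" || $1=="eth5" {printf "%s RX-DRP%%: %.6f\n", $1, ($6/$4)*100}'

    # Current and maximum RX/TX ring sizes for an interface
    ethtool -g eth4

    # Example increase of the RX ring (takes effect immediately, but is lost on reboot
    # unless made permanent per Check Point guidance)
    ethtool -G eth4 rx 4096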

#22 | Join Date: 2014-10-03 | Posts: 30 | Rep Power: 0

    Quote Originally Posted by ShadowPeak.com View Post
    Please provide output of "ethtool -k ethX" for both interfaces, if you have TCP Segment Offloading (TSO) turned on it will seriously impact performance when certain blades are enabled. You are suffering some RX ring buffer drops but they are way way below the 0.1% threshold where you should think about increasing ring buffer size.

    Edit: Also please provide output of "ethtool -i ethX". Based on some of the counter prefixes I have a sinking feeling there are Broadcom NIC cards present; you will be facing an uphill battle getting anything resembling decent performance out of those.
    [Expert@gate02:0]# ethtool -k eth4
    Offload parameters for eth4:
    Cannot get device udp large send offload settings: Operation not supported
    Cannot get device GRO settings: Operation not supported
    rx-checksumming: on
    tx-checksumming: off
    scatter-gather: off
    tcp segmentation offload: off
    udp fragmentation offload: off
    generic segmentation offload: off
    generic-receive-offload: off


    [Expert@gate02:0]# ethtool -k eth5
    Offload parameters for eth5:
    Cannot get device udp large send offload settings: Operation not supported
    Cannot get device GRO settings: Operation not supported
    rx-checksumming: on
    tx-checksumming: off
    scatter-gather: off
    tcp segmentation offload: off
    udp fragmentation offload: off
    generic segmentation offload: off
    generic-receive-offload: off


    [Expert@gate02:0]# ethtool -i eth4
    driver: be2net
    version: 2.104.225.7
    firmware-version: 4.6.247.5
    bus-info: 0000:04:00.0
    [Expert@gate02:0]# ethtool -i eth5
    driver: be2net
    version: 2.104.225.7
    firmware-version: 4.6.247.5
    bus-info: 0000:04:00.1

#23 | Join Date: 2014-10-03 | Posts: 30 | Rep Power: 0

    Quote Originally Posted by ShadowPeak.com View Post
    OP can run "enabled_blades" command on gateway in expert mode, output is succinct and easily copy/pasted

    We are running: App Control, IPS (detect mode), URL Filtering, IPsec VPN, Identity Awareness and Mobile Access.

    OP can run "uname -a" command from expert mode to determine this (x86 or x64)

    64-bit version of Gaia R77.20.

    Since the OP said it was an HP Gen8 server, I'm going to guess that eth0, eth1, eth2, and eth3 are an Intel quad 1Gbit card, and eth4 and eth5 are the built-in dual Broadcom 10GB ports which are just terrible from a performance perspective.
    That I will check tomorrow.

#24 | Join Date: 2011-08-02 | Location: http://spikefishsolutions.com | Posts: 1,658 | Rep Power: 10

    Quote Originally Posted by Opera View Post
    [Expert@gate02:0]#
    rx_address_match_errors: 201647740
    This was kind of a pain to find.

    rx_address_match_errors was renamed to rx_address_mismatch_drops.

    + {DRVSTAT_INFO(rx_address_mismatch_drops)},
    + /* Received packets dropped when IP packet length field is less than
    + * the IP header length field.
    + */

    This isn't (or is it? It's been a long day) for the 10-gig driver; it's just a reference to show what each of those values means.

    http://www.spinics.net/lists/netdev/msg187129.html

    I like ShadowPeak's idea that this could be related to checksum offloading. Did the driver actually get upgraded? There seem to be a lot of hits for be2net driver issues in Linux.
    Last edited by jflemingeds; 2014-11-13 at 18:53.

#25 | Join Date: 2009-04-30 | Location: Colorado, USA | Posts: 2,252 | Rep Power: 14

    Quote Originally Posted by Opera View Post
    [ethtool -k / ethtool -i output for eth4 and eth5 quoted from post #22 above]
    TCP segmentation offload is off, which is good. Looks like the NIC vendor is Emulex; frankly I haven't done much with those at all. Looks like Broadcom unsuccessfully tried to buy Emulex in 2009, not sure what to make of that. Can't see why HP would select Emulex and not Intel for their onboard NICs unless they were the low bidder. I'm still suspicious of those Emulex NICs; you could try contacting Check Point TAC and see if they have a newer driver available for them that is not already included in R77.20.

#26 | Join Date: 2009-04-30 | Location: Colorado, USA | Posts: 2,252 | Rep Power: 14

    Quote Originally Posted by jflemingeds View Post
    This was kind of a pain to find.

    rx_address_match_errors was renamed to rx_address_mismatch_drops.

    + {DRVSTAT_INFO(rx_address_mismatch_drops)},
    + /* Received packets dropped when IP packet length field is less than
    + * the IP header length field.
    + */

    This isn't (or is it? It's been a long day) for the 10-gig driver; it's just a reference to show what each of those values means.

    http://www.spinics.net/lists/netdev/msg187129.html

    I like ShadowPeak's idea that this could be related to checksum offloading. Did the driver actually get upgraded? There seem to be a lot of hits for be2net driver issues in Linux.
    That counter may have been renamed, but it looks like the older driver is still in use with the old names. I highly doubt 6% of the inbound traffic has this IP header anomaly unless it was maliciously crafted or the Emulex NIC driver is buggy.

    Opera, would it be possible to try a speed test between a system on eth1 and a system on eth3, in other words excluding eth4 and eth5 completely? I have a feeling it will run much faster over those interfaces even if they are just 1Gbps, especially if they are Intels (ethtool -i eth1). I suppose you could also try turning off all offloads (ethtool -K ethX rx off tx off), though RX checksumming is a pretty basic one that has not dropped much traffic, so I doubt it will help.
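
    A rough sketch of what that test could look like; the iperf commands run on the two end hosts (iperf being an assumption here, any bulk-transfer tool works), the ethtool commands run on the gateway in expert mode, and 10.1.1.10 is just a placeholder address:

    # On the gateway: check which driver sits behind the 1Gb ports and turn off
    # the remaining offloads on the suspect 10Gb ports (reverts at reboot)
    ethtool -i eth1
    ethtool -K eth4 rx off tx off
    ethtool -K eth5 rx off tx off

    # On a host behind eth1 (server side):
    iperf -s

    # On a host behind eth3 (client side):
    iperf -c 10.1.1.10 -t 30 -P 4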

#27 | Join Date: 2005-08-29 | Location: Upstate NY | Posts: 2,720 | Rep Power: 17

    Not a supported card from what I can see. I've also seen a lot of comments that the performance is very poor. This is not just a Check Point issue. Google be2net and see...

    Supported 10Gb cards for HP:

    HP Ethernet 10Gb 2-port 560SFP+ Adapter (PCI-E, 10 Gbps, Fiber, 2 ports)
    HP NC522SFP Dual Port 10GbE Server Adapter (PCI-E, 10 Gbps, Fiber, 2 ports)
    HP NC550SFP Dual Port 10GbE Server Adapter (PCI-E, 10 Gbps, Fiber, 2 ports)
    HP NC552SFP 10GbE 2-port Ethernet Server (PCI-E, 10 Gbps, Fiber, 2 ports)

#28 | Join Date: 2009-04-30 | Location: Colorado, USA | Posts: 2,252 | Rep Power: 14

    Quote Originally Posted by chillyjim View Post
    Not a supported card from what I can see. I've also seen a lot of comments that the performance is very poor. This is not just a Check Point issue. Google be2net and see...

    Supported 10Gb cards for HP:

    HP Ethernet 10Gb 2-port 560SFP+ Adapter (PCI-E, 10 Gbps, Fiber, 2 ports)
    HP NC522SFP Dual Port 10GbE Server Adapter (PCI-E, 10 Gbps, Fiber, 2 ports)
    HP NC550SFP Dual Port 10GbE Server Adapter (PCI-E, 10 Gbps, Fiber, 2 ports)
    HP NC552SFP 10GbE 2-port Ethernet Server (PCI-E, 10 Gbps, Fiber, 2 ports)
    It may not show up in the HCL, but sk80680 & sk101206 say to contact Check Point support for an updated be2net driver. Wouldn't that sorta imply official support?

#29 | Join Date: 2005-11-25 | Location: United States, Southeast | Posts: 857 | Rep Power: 15

    What kind of CPU is in this box, and how many? (cat /proc/cpuinfo)

    A 2-core license, with Firewall, IPS, URL Filtering, VPN, Mobile Access and Identity Awareness enabled.

    What throughput were you hoping to achieve with this box?
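
    For reference, a quick way to pull the CPU model and core count from expert mode (standard Linux commands, nothing Check Point specific):

    # CPU model (deduplicated) and the number of logical cores the kernel sees
    grep "model name" /proc/cpuinfo | sort -u
    grep -c ^processor /proc/cpuinfo

    # How the firewall instances and interfaces are currently mapped to those cores
    fw ctl affinity -l -a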

#30 | Join Date: 2005-08-29 | Location: Upstate NY | Posts: 2,720 | Rep Power: 17

    Quote Originally Posted by ShadowPeak.com View Post
    It may not show up in the HCL, but sk80680 & sk101206 say to contact Check Point support for an updated be2net driver. Wouldn't that sorta imply official support?
    It would imply you can get it to work but Check Point doesn't have the same support responsibility as if it was "supported" (on the HCL).

#31 | Join Date: 2014-10-03 | Posts: 30 | Rep Power: 0

    Hello,

    I have upgraded the firmware on the server just to check if this could help, and got this output after that.

    Kernel Interface table
    Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
    eth1 1500 0 8880717 0 0 0 43174687 0 0 0 BMRU
    eth3 1500 0 327741392 0 4716 0 28742262 0 0 0 BMRU
    eth4 1500 0 1113910948 981 0 0 1168876674 0 0 0 BMRU
    eth4.20 1500 0 126125344 0 0 0 97602563 0 0 0 BMRU
    eth4.21 1500 0 806704704 0 0 0 696981022 0 0 0 BMRU
    eth4.23 1500 0 387672 0 0 0 619164 0 0 0 BMRU
    eth4.29 1500 0 180562908 0 0 0 373675396 0 0 0 BMRU
    eth5 1500 0 1089791128 1 0 0 1287048007 0 0 0 BMRU
    eth5.10 1500 0 1058217952 0 0 0 1241049703 0 0 0 BMRU
    eth5.11 1500 0 1402792 0 0 0 1847902 0 0 0 BMRU
    eth5.60 1500 0 13011 0 0 0 4926 0 0 0 BMRU
    eth5.70 1500 0 13011 0 0 0 4926 0 0 0 BMRU
    eth5.87 1500 0 600235 0 0 0 567968 0 0 0 BMRU
    eth5.95 1500 0 29415783 0 0 0 43578078 0 0 0 BMRU
    lo 16436 0 10526636 0 0 0 10526636 0 0 0 LRU

#32 | Join Date: 2009-04-30 | Location: Colorado, USA | Posts: 2,252 | Rep Power: 14

    Quote Originally Posted by Opera View Post
    [netstat -ni output quoted from post #31 above]
    When you say firmware, do you mean the firmware/driver of the Emulex NICs or the server BIOS? It doesn't seem to have had much of an effect either way, though. Is performance better?

#33 | Join Date: 2007-06-04 | Posts: 3,314 | Rep Power: 17

    Just attempted an upgrade on an HP DL380 G7 fitted with an Intel X520, moving to a DL380 G7 fitted with an HP NC552SFP.
    Swapped the cards because, whilst the Intel NIC is on the HCL, only HP NICs are supported in HP servers.
    Was going from SPLAT R71.30 to Gaia R77.20.

    The cards use the be2net driver, and I had to get an updated driver from Check Point just to get the cards recognised.
    The 552SFP cards were recommended by an HP specialist for the G7, as opposed to the 560SFP, which from what I can see uses the same chipset as the X520 but isn't supported in the DL380 G7.

    Also had problems after getting the cards recognised, just using things like fw monitor: could see the traffic in tcpdump (occasionally), nothing in fw ctl zdebug, and then nothing in fw monitor. Even with SecureXL disabled I had issues with fw monitor.

    top showed very low utilisation. Had a 4-core license available.

    When I could get traffic through the unit it was showing somewhere around 40% loss on a ping test. The interfaces showed no errors when doing the show interface command in Gaia.

    So far not a fan of these cards. Really wish HP would support the Intel NICs, or at least the 560, in the G7.

    The cards are in the HCL (triple checked, as I insisted on removing the Intel cards in case of problems, after confirming with our Check Point SE that Intel NICs in an HP server wouldn't get support from TAC), as is the server.

    Had to roll back and revert the system to R71.30 with the Intel X520 cards in the end.

    Probably have to build a test system and get that working correctly before I can reattempt.
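
    For anyone who runs into the same fw monitor confusion, a minimal sketch of the usual SecureXL checks on R7x (10.1.1.10 is just a placeholder host; disabling acceleration on a busy gateway has a performance cost, so do it in a maintenance window):

    # Check whether SecureXL is enabled and how much traffic it is accelerating
    fwaccel stat
    fwaccel stats -s

    # Temporarily disable acceleration so fw monitor sees the full inspection path,
    # then re-enable it afterwards
    fwaccel off
    fw monitor -e "accept src=10.1.1.10 or dst=10.1.1.10;"
    fwaccel on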

#34 | Join Date: 2014-10-03 | Posts: 30 | Rep Power: 0

    Hello again,

    After all these suggestions we have planned to replace the Emulex 10G network cards. What make and type do you guys advise buying as a replacement in an HP DL360 G8 server to get better throughput on the interfaces?

#35 | Join Date: 2007-06-04 | Posts: 3,314 | Rep Power: 17

    http://www.checkpoint.com/services/t...ails/0063.html

    Would suggest these HP 560SFP cards. They use the same chipset, and thus presumably the same ixgbe driver, as the Intel X520 cards.

    They are supposed to be compatible with the DL360 G8 systems. They are what I wanted to use; unfortunately our customer has 380 G7s, which apparently aren't compatible with the 560SFP cards.

    I double-checked with our SE regarding non-HP cards in HP servers, as the customer wanted to proceed with the X520 cards, but our SE confirmed we wouldn't get support on that. I understand it is something driven by HP as opposed to Check Point.

#36 | Join Date: 2009-04-30 | Location: Colorado, USA | Posts: 2,252 | Rep Power: 14

    Completely agree with the suggestion for Intel-based cards, also using the Intel ixgbe driver gives you the opportunity to enable the multi-queue feature if one dedicated core is unable to keep up with a very busy 10Gbps interface. Hadn't really heard much about Emulex cards prior to this thread, but sounds like they are just as bad as Broadcom when it comes to performance.
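
    For completeness, a rough sketch of how Multi-Queue is checked and enabled on R77.x with the ixgbe driver (cpmq is the tool documented in the Performance Tuning guide; it requires SecureXL to be enabled and a reboot to take effect, and the exact prompts may differ by build):

    # Show which interfaces currently have Multi-Queue enabled and their queue layout
    cpmq get -a

    # Enable Multi-Queue interactively (select the busy 10Gb interfaces, e.g. eth4/eth5),
    # then reboot for the change to apply
    cpmq set

    # Afterwards, verify how interface queues and firewall instances map onto the cores
    fw ctl affinity -l -a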

#37 | Join Date: 2006-09-26 | Posts: 3,194 | Rep Power: 17

    Quote Originally Posted by ShadowPeak.com View Post
    Completely agree with the suggestion for Intel-based cards, also using the Intel ixgbe driver gives you the opportunity to enable the multi-queue feature if one dedicated core is unable to keep up with a very busy 10Gbps interface. Hadn't really heard much about Emulex cards prior to this thread, but sounds like they are just as bad as Broadcom when it comes to performance.
    I too give a BIG thumbs up to the Intel X520-DA dual-port 10Gig cards. Rock solid with both R75.46/R75.47 and R77.20, especially when it comes to NIC bonding. The X520-DA2 is second to none.

#38 | Join Date: 2007-06-04 | Posts: 3,314 | Rep Power: 17

    Quote Originally Posted by cciesec2006 View Post
    I too give a BIG thumbs up to the Intel X520-DA dual-port 10Gig cards. Rock solid with both R75.46/R75.47 and R77.20, especially when it comes to NIC bonding. The X520-DA2 is second to none.
    How do you get on with these in HP servers, or are you using non-HP servers with them? I seem to recall that you use IBM servers. Whilst I am a big fan of Intel NICs, HP apparently won't support them any more in HP servers.

#39 | Join Date: 2006-09-26 | Posts: 3,194 | Rep Power: 17

    Quote Originally Posted by mcnallym View Post
    How do you get on with these in HP servers, or are you using non-HP servers with them? I seem to recall that you use IBM servers.
    You're correct. I only use the Intel X520-DA2 10Gig NIC on either IBM or Dell servers. I've never had experience with Check Point running on HP servers.

    My experience with the Intel X520-DA2 on both IBM and Dell servers has been rock solid, for both Check Point and CentOS Linux.

#40 | Join Date: 2014-10-03 | Posts: 30 | Rep Power: 0

    Hello again,

    Sorry, I am updating this thread after a long time.

    Now that I have changed the NICs and bought the 4-core license, I am still getting the same issue: throughput is still not good enough. I am sending the output of some commands.

    [Expert@gate01:0]# fw ctl affinity -l -v -r

    CPU 0: eth1 (irq 59)
    fw_3
    CPU 1: eth3 (irq 51)
    fw_2
    CPU 2: eth5 (irq 234)
    fw_1
    CPU 3: eth4 (irq 210)
    fw_0
    All: dtpsd usrchkd in.acapd fwpushd fwd pepd dtlsd mpdaemon vpnd in.asessiond rad in.geod pdpd cprid cpd

    [Expert@gate01:0]# netstat -ni
    Kernel Interface table
    Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
    eth1 1500 0 18534433 0 1785 0 140899148 0 0 0 BMRU
    eth3 1500 0 39843622 0 714 0 34655347 0 0 0 BMRU
    eth4 1500 0 4650363031 9 160809 0 2953001642 0 0 0 BMRU
    eth4.20 1500 0 383931596 0 0 0 346206438 0 0 0 BMRU
    eth4.21 1500 0 2861958431 0 0 0 1545049539 0 0 0 BMRU
    eth4.23 1500 0 4701782 0 0 0 6033300 0 0 0 BMRU
    eth4.29 1500 0 1399770937 0 0 0 1055546821 0 0 0 BMRU
    eth5 1500 0 2428955654 9 17980 0 4111029579 0 0 0 BMRU
    eth5.10 1500 0 2359449093 0 0 0 3955124340 0 0 0 BMRU
    eth5.11 1500 0 4902496 0 0 0 7818996 0 0 0 BMRU
    eth5.60 1500 0 28279 0 0 0 9630 0 0 0 BMRU
    eth5.70 1500 0 28279 0 0 0 9630 0 0 0 BMRU
    eth5.87 1500 0 3598591 0 0 0 5692140 0 0 0 BMRU
    eth5.95 1500 0 60948632 0 0 0 141431689 0 0 0 BMRU
    lo 16436 0 29210189 0 0 0 29210189 0 0 0 LRU

    [Expert@gate01:0]# ethtool -i eth4
    driver: ixgbe
    version: 3.9.15-NAPI
    firmware-version: 0x80000786, 1.399.0
    bus-info: 0000:04:00.0


    [Expert@gate01:0]# ethtool -i eth5
    driver: ixgbe
    version: 3.9.15-NAPI
    firmware-version: 0x80000786, 1.399.0
    bus-info: 0000:04:00.1


    [Expert@gate01:0]# ethtool -S eth5
    NIC statistics:
    rx_packets: 2429413624
    tx_packets: 4111597376
    rx_bytes: 1344081176793
    tx_bytes: 5082865950348
    rx_errors: 9
    tx_errors: 0
    rx_dropped: 0
    tx_dropped: 0
    multicast: 96258
    collisions: 0
    rx_over_errors: 0
    rx_crc_errors: 9
    rx_frame_errors: 0
    rx_fifo_errors: 0
    rx_missed_errors: 17980
    tx_aborted_errors: 0
    tx_carrier_errors: 0
    tx_fifo_errors: 0
    tx_heartbeat_errors: 0
    rx_pkts_nic: 2429413625
    tx_pkts_nic: 4111597376
    rx_bytes_nic: 1363540217020
    tx_bytes_nic: 5116690706289
    lsc_int: 1
    tx_busy: 0
    non_eop_descs: 0
    broadcast: 163976398
    rx_no_buffer_count: 0
    tx_timeout_count: 0
    tx_restart_queue: 3166
    rx_long_length_errors: 0
    rx_short_length_errors: 0
    tx_flow_control_xon: 98
    rx_flow_control_xon: 0
    tx_flow_control_xoff: 9120
    rx_flow_control_xoff: 0
    rx_csum_offload_errors: 5
    alloc_rx_page_failed: 0
    alloc_rx_buff_failed: 0
    rx_no_dma_resources: 0
    hw_rsc_aggregated: 0
    hw_rsc_flushed: 0
    fdir_match: 0
    fdir_miss: 0
    fdir_overflow: 0
    os2bmc_rx_by_bmc: 0
    os2bmc_tx_by_bmc: 0
    os2bmc_tx_by_host: 0
    os2bmc_rx_by_host: 0
    tx_queue_0_packets: 4111597376
    tx_queue_0_bytes: 5082865950348
    rx_queue_0_packets: 2429413625
    rx_queue_0_bytes: 1344081176853

    [Expert@gate01:0]# ethtool -S eth4
    NIC statistics:
    rx_packets: 4651411704
    tx_packets: 2953908739
    rx_bytes: 5640932926379
    tx_bytes: 1926626270268
    rx_errors: 9
    tx_errors: 0
    rx_dropped: 0
    tx_dropped: 0
    multicast: 41
    collisions: 0
    rx_over_errors: 0
    rx_crc_errors: 9
    rx_frame_errors: 0
    rx_fifo_errors: 0
    rx_missed_errors: 160809
    tx_aborted_errors: 0
    tx_carrier_errors: 0
    tx_fifo_errors: 0
    tx_heartbeat_errors: 0
    rx_pkts_nic: 4651411705
    tx_pkts_nic: 2953908739
    rx_bytes_nic: 5678378139837
    tx_bytes_nic: 1952753798043
    lsc_int: 3
    tx_busy: 0
    non_eop_descs: 0
    broadcast: 38368433
    rx_no_buffer_count: 0
    tx_timeout_count: 0
    tx_restart_queue: 1882
    rx_long_length_errors: 0
    rx_short_length_errors: 0
    tx_flow_control_xon: 652
    rx_flow_control_xon: 0
    tx_flow_control_xoff: 135046
    rx_flow_control_xoff: 0
    rx_csum_offload_errors: 3101
    alloc_rx_page_failed: 0
    alloc_rx_buff_failed: 0
    rx_no_dma_resources: 0
    hw_rsc_aggregated: 0
    hw_rsc_flushed: 0
    fdir_match: 0
    fdir_miss: 0
    fdir_overflow: 0
    os2bmc_rx_by_bmc: 0
    os2bmc_tx_by_bmc: 0
    os2bmc_tx_by_host: 0
    os2bmc_rx_by_host: 0
    tx_queue_0_packets: 2953908739
    tx_queue_0_bytes: 1926626270268
    rx_queue_0_packets: 4651411705
    rx_queue_0_bytes: 5640932926699

    Any further suggestions?
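
    Based on the counters above, a next step could be to check whether the remaining drops are ring-buffer related and whether more than one core can service these NICs; a minimal sketch (driver support for larger rings and for Multi-Queue varies):

    # rx_missed_errors as a percentage of received packets (eth4 example)
    ethtool -S eth4 | awk -F: '/rx_packets:/ {rx=$2} /rx_missed_errors:/ {miss=$2} END {printf "rx_missed: %.4f%%\n", miss/rx*100}'

    # Current vs. maximum ring sizes for the interface
    ethtool -g eth4

    # Only rx_queue_0/tx_queue_0 appear in the stats above, i.e. a single queue per NIC;
    # with the ixgbe driver, Multi-Queue (cpmq, see post #36) can spread eth4/eth5 across cores
    cpmq get -a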
