CPUG: The Check Point User Group

Resources for the Check Point Community, by the Check Point Community.


 


Thread: R77.30 with JHFA 216 and NFS TCP/UDP

  1. #1
    Join Date
    2006-09-26
    Posts
    2,827
    Rep Power
    13

    Default R77.30 with JHFA 216 and NFS TCP/UDP

    Scenario:
    Linux NFS client inside the firewall and Linux NFS server outside the firewall. I have rule
    to allow NFS traffics between client and server and vice versa. The rule location is on top
    of the rule base.

    I have a NFS share on the NFS server and I run the following commands on the NFS client:

    rm -f /var/log/tmp/*                                  # clear the local copy target
    cd /var/log/tmp
    umount /mnt/tmp                                       # drop any stale mount first
    mount -t nfs -o udp 10.109.114.70:/tmp/tmp /mnt/tmp   # mount the export over NFS/UDP
    cp /mnt/tmp/* . &                                     # pull the files in the background

    With this, while the files are being copied from the server back to the client, I can see in the
    output of "fwaccel conns | grep 10.109.114.70" that the connections are being accelerated by
    SecureXL, and yet "fwaccel stats -s" shows that the traffic is not being accelerated:

    fwaccel stats -s
    Accelerated conns/Total conns : 12/30 (40%)
    Accelerated pkts/Total pkts : 128666/3084022 (4%)
    F2Fed pkts/Total pkts : 2955356/3084022 (95%)
    PXL pkts/Total pkts : 0/3084022 (0%)
    QXL pkts/Total pkts : 0/3084022 (0%)

    fwaccel conns | grep 10.109.114.70 | grep 10.7.25.4
    10.7.25.4 12230 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 623 10.7.25.4 811 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 2049 10.7.25.4 871 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 46436 10.109.114.70 2049 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 623 10.7.25.4 51446 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 811 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 958 10.109.114.70 2049 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 2049 10.7.25.4 958 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 51446 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 600 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 623 10.7.25.4 600 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 871 10.109.114.70 2049 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 2049 10.7.25.4 46436 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 623 10.7.25.4 12230 17 ...A....... 19/16 16/19 4 0

    This also matches the high CPU load I am seeing, because the traffic is not being accelerated.


    On the other hand, in the same scenario, if I mount NFS over TCP, I can see that the traffic
    is being accelerated and everything is correct:

    rm -f /var/log/tmp/*
    cd /var/log/tmp
    umount /mnt/tmp
    mount -t nfs -o tcp 10.109.114.70:/tmp/tmp /mnt/tmp   # same test, but NFS over TCP
    cp /mnt/tmp/* . &

    fwaccel conns | grep 10.109.114.70 | grep 10.7.25.4
    10.109.114.70 111 10.7.25.4 29417 6 F..A....... 19/16 16/19 5 0
    10.109.114.70 111 10.7.25.4 36625 6 F..A....... 19/16 16/19 5 0
    10.7.25.4 852 10.109.114.70 2049 6 ...A....... 19/16 16/19 5 0
    10.7.25.4 29417 10.109.114.70 111 6 F..A....... 19/16 16/19 5 0
    10.7.25.4 821 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 52583 10.109.114.70 111 6 F..A....... 19/16 16/19 5 0
    10.109.114.70 111 10.7.25.4 52583 6 F..A....... 19/16 16/19 5 0
    10.109.114.70 623 10.7.25.4 821 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 623 10.7.25.4 60831 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 60831 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 2049 10.7.25.4 852 6 ...A....... 19/16 16/19 5 0
    10.7.25.4 36625 10.109.114.70 111 6 F..A....... 19/16 16/19 5 0

    fwaccel stats -s
    Accelerated conns/Total conns : 13/33 (39%)
    Accelerated pkts/Total pkts : 2546539/2551399 (99%)
    F2Fed pkts/Total pkts : 4860/2551399 (0%)
    PXL pkts/Total pkts : 0/2551399 (0%)
    QXL pkts/Total pkts : 0/2551399 (0%)


    Does anyone have any idea why I am seeing this?

    I am going to open a TAC case with Check Point on this, but I have not had much luck with them in the past ten tickets.

  2. #2
    Join Date
    2014-06-18
    Location
    Kiel
    Posts
    10
    Rep Power
    0

    Default Re: R77.30 with JHFA 216 and NFS TCP/UDP

    Quote Originally Posted by cciesec2006 View Post
    With this, while the files are being copied from the server back to the client, I can see in the
    output of "fwaccel conns | grep 10.109.114.70" that the connections are being accelerated by
    SecureXL, and yet "fwaccel stats -s" shows that the traffic is not being accelerated:

    fwaccel stats -s
    Accelerated conns/Total conns : 12/30 (40%)
    Accelerated pkts/Total pkts : 128666/3084022 (4%)
    F2Fed pkts/Total pkts : 2955356/3084022 (95%)
    PXL pkts/Total pkts : 0/3084022 (0%)
    QXL pkts/Total pkts : 0/3084022 (0%)

    fwaccel conns | grep 10.109.114.70 | grep 10.7.25.4
    10.7.25.4 12230 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 623 10.7.25.4 811 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 2049 10.7.25.4 871 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 46436 10.109.114.70 2049 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 623 10.7.25.4 51446 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 811 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 958 10.109.114.70 2049 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 2049 10.7.25.4 958 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 51446 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 600 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 623 10.7.25.4 600 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 871 10.109.114.70 2049 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 2049 10.7.25.4 46436 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 623 10.7.25.4 12230 17 ...A....... 19/16 16/19 4 0

    This also matches the high CPU load I am seeing, because the traffic is not being accelerated.
    The lines with host 10.109.114.70 show that this traffic is accelerated, because the "F" flag is not set; the "A" stands for accounting. You are right on this point.

    We see 40% of connections being accelerated, which is nearly the same as in the TCP case. But we also see a difference in the number of accelerated packets. The listed connections seem to be the accelerated ones. What about the remaining connections? There must be some with an "F" flag. That seems to be where the traffic goes.

    Maybe you are right and you are facing a Check Point problem, but that cannot be proven from the data you presented. We do not know your traffic mix in this setup. 18 connections seem not to be accelerated, and if the output of the command is complete, they do not involve 10.109.114.70. If Check Point's output is not at fault, then these 18 connections handle 95% of the packets. I would suggest you take a look at them.
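
    A quick way to isolate them is to filter on the "F" flag. A minimal sketch, assuming the flags are the sixth column, as in the outputs above:

    # List only the connections that SecureXL forwards to the firewall path (flag "F" set)
    fwaccel conns | awk '$6 ~ /^F/'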

  3. #3
    Join Date
    2006-09-26
    Posts
    2,827
    Rep Power
    13

    Default Re: R77.30 with JHFA 216 and NFS TCP/UDP

    Quote Originally Posted by ofink View Post
    Maybe you are right and you are facing a Check Point problem, but that cannot be proven from the data you presented. We do not know your traffic mix in this setup. 18 connections seem not to be accelerated, and if the output of the command is complete, they do not involve 10.109.114.70. If Check Point's output is not at fault, then these 18 connections handle 95% of the packets. I would suggest you take a look at them.

    Let me expand on this further. I am staging these firewalls prior to putting them into production. There is other traffic currently traversing these firewalls, mainly Microsoft RDP, SSH, and DNS that I use for testing purposes. That traffic is very small compared to the NFS traffic. I am constantly copying files from the NFS server back to the NFS client; therefore, you can safely ignore the "Accelerated conns/Total conns : 12/30 (40%)" figure, because those connections are extremely small in terms of volume compared to the NFS traffic.

    Furthermore, 10.109.114.70 is the IP address of the NFS server, so I am grepping for everything between the client and the server, and I am not seeing any "F" in the output:

    fwaccel conns | grep 10.109.114.70 | grep 10.7.25.4
    10.7.25.4 828 10.109.114.70 2049 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 54591 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 623 10.7.25.4 807 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 2049 10.7.25.4 816 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 623 10.7.25.4 676 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 623 10.7.25.4 49680 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 49680 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 807 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 54947 10.109.114.70 2049 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 2049 10.7.25.4 828 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 2049 10.7.25.4 54947 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 816 10.109.114.70 2049 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 676 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 623 10.7.25.4 54591 17 ...A....... 19/16 16/19 4 0
    Last edited by cciesec2006; 1 Week Ago at 07:34.

  4. #4
    Join Date
    2009-04-30
    Location
    Colorado, USA
    Posts
    1,911
    Rep Power
    10

    Default Re: R77.30 with JHFA 216 and NFS TCP/UDP

    Quote Originally Posted by cciesec2006 View Post
    Scenario:
    fwaccel conns | grep 10.109.114.70 | grep 10.7.25.4
    10.7.25.4 12230 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 623 10.7.25.4 811 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 2049 10.7.25.4 871 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 46436 10.109.114.70 2049 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 623 10.7.25.4 51446 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 811 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 958 10.109.114.70 2049 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 2049 10.7.25.4 958 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 51446 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 600 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 623 10.7.25.4 600 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 871 10.109.114.70 2049 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 2049 10.7.25.4 46436 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 623 10.7.25.4 12230 17 ...A....... 19/16 16/19 4 0

    This also matches the high CPU load I am seeing, because the traffic is not being accelerated.
    In this output you happen to not be showing any SunRPC connections on port 111, so everything looks accelerated.

    fwaccel conns | grep 10.109.114.70 | grep 10.7.25.4
    10.109.114.70 111 10.7.25.4 29417 6 F..A....... 19/16 16/19 5 0
    10.109.114.70 111 10.7.25.4 36625 6 F..A....... 19/16 16/19 5 0
    10.7.25.4 852 10.109.114.70 2049 6 ...A....... 19/16 16/19 5 0
    10.7.25.4 29417 10.109.114.70 111 6 F..A....... 19/16 16/19 5 0
    10.7.25.4 821 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 52583 10.109.114.70 111 6 F..A....... 19/16 16/19 5 0
    10.109.114.70 111 10.7.25.4 52583 6 F..A....... 19/16 16/19 5 0
    10.109.114.70 623 10.7.25.4 821 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 623 10.7.25.4 60831 17 ...A....... 19/16 16/19 4 0
    10.7.25.4 60831 10.109.114.70 623 17 ...A....... 19/16 16/19 4 0
    10.109.114.70 2049 10.7.25.4 852 6 ...A....... 19/16 16/19 5 0
    10.7.25.4 36625 10.109.114.70 111 6 F..A....... 19/16 16/19 5 0
    In this output you are showing the SunRPC port 111 portmapper traffic as well (emphasis added above), which cannot ever be accelerated. DCE/RPC traffic will always be sent F2F by SecureXL (and this traffic can't be templated either, even in R80.10) so that the firewall can sniff the dynamic port allocation for the nfsprog RPC program 100003 and pinhole the appropriate port(s) for the subsequent NFS connection. This is analogous to the FTP control connection on port 21, which must go F2F for the same reason: to sniff for dynamic ftp-data port allocations. Just like with FTP, very little data is sent on control port 111, which is F2F; the overwhelming majority of data is sent and received on whatever port nfsd was allocated, which *can* be accelerated via SXL just as the ftp-data connections can.
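
    If you want to see which ports the server's portmapper actually handed out (the ports SecureXL can then accelerate), you can ask it directly with rpcinfo from the client side. A sketch; the output below is illustrative, not from this lab:

    # Query the portmapper on port 111 for registered RPC programs and their ports
    rpcinfo -p 10.109.114.70
    #    program vers proto   port  service
    #     100000    2   udp    111  portmapper
    #     100003    3   udp   2049  nfs
    #     100005    3   udp    623  mountd   (mountd's port is dynamic; 623 matches the outputs above)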

    In a lab environment with limited "real" traffic traversing the firewall, it is not unusual to see high F2F numbers when direct connections to and from the firewall itself (ssh, https, dns) comprise the majority of traffic. That is because if SecureXL sees that the destination IP in the packet is one of the firewall's actual interfaces (and that traffic is not involved with a "Hide behind Gateway" Hide NAT), SecureXL will always immediately punt that traffic F2F, which increments the F2F counter in "fwaccel stats -s". Templating is also not allowed for traffic bound directly to a firewall NIC IP address, so the "Accelerated Conns" counter does not increment for these connections to the firewall itself. After this direct traffic is punted F2F by SXL, it goes through policy inspection at the i/I inspection points, then is sent to IP for routing as usual if it was allowed by policy. IP then sees that the destination IP on the packet is the firewall itself and handles delivery to the appropriate process on the firewall (sshd, httpd).
    --
    My book "Max Power: Check Point Firewall Performance Optimization"
    now available via http://maxpowerfirewalls.com.

  5. #5
    Join Date
    2006-09-26
    Posts
    2,827
    Rep Power
    13

    Default Re: R77.30 with JHFA 216 and NFS TCP/UDP

    Quote Originally Posted by ShadowPeak.com View Post
    In this output you happen to not be showing any SunRPC connections on port 111, so everything looks accelerated.



    In this output you are showing the SunRPC port 111 portmapper traffic as well (emphasis added above), which cannot ever be accelerated. DCE/RPC traffic will always be sent F2F by SecureXL (and this traffic can't be templated either, even in R80.10) so that the firewall can sniff the dynamic port allocation for the nfsprog RPC program 100003 and pinhole the appropriate port(s) for the subsequent NFS connection. This is analogous to the FTP control connection on port 21, which must go F2F for the same reason: to sniff for dynamic ftp-data port allocations. Just like with FTP, very little data is sent on control port 111, which is F2F; the overwhelming majority of data is sent and received on whatever port nfsd was allocated, which *can* be accelerated via SXL just as the ftp-data connections can.

    In a lab environment with limited "real" traffic traversing the firewall, it is not unusual to see high F2F numbers when direct connections to and from the firewall itself (ssh, https, dns) comprise the majority of traffic. That is because if SecureXL sees that the destination IP in the packet is one of the firewall's actual interfaces (and that traffic is not involved with a "Hide behind Gateway" Hide NAT), SecureXL will always immediately punt that traffic F2F, which increments the F2F counter in "fwaccel stats -s". Templating is also not allowed for traffic bound directly to a firewall NIC IP address, so the "Accelerated Conns" counter does not increment for these connections to the firewall itself. After this direct traffic is punted F2F by SXL, it goes through policy inspection at the i/I inspection points, then is sent to IP for routing as usual if it was allowed by policy. IP then sees that the destination IP on the packet is the firewall itself and handles delivery to the appropriate process on the firewall (sshd, httpd).

    I am not sure I am following your logic. In my lab environment, I am pushing close to 1 Gbit/sec of NFS throughput. Other traffic is minimal, less than 1 Mbit/sec. I see the F2F counters go up very quickly when there is NFS traffic.

    Are you saying that the F2F counters are wrong? I clear out the counters before each test by pushing the policy. The first output I showed was UDP NFS, the second one TCP NFS.
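
    In case it helps anyone repeating this, a policy push is not the only way to zero the counters; fwaccel can reset its own statistics (assuming the -r flag behaves the same on this JHFA level):

    # Reset the SecureXL statistics, run the transfer, then re-check the summary
    fwaccel stats -r
    fwaccel stats -s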

    There is no NAT between the NFS client and the NFS server, just pure routing. No blades are running on the gateways other than the FW blade.

    I am confused.

  6. #6
    Join Date
    2009-04-30
    Location
    Colorado, USA
    Posts
    1,911
    Rep Power
    10

    Default Re: R77.30 with JHFA 216 and NFS TCP/UDP

    Quote Originally Posted by cciesec2006 View Post
    I am not sure I am following your logic. In my lab environment, I am pushing close to 1 Gbit/sec of NFS throughput. Other traffic is minimal, less than 1 Mbit/sec. I see the F2F counters go up very quickly when there is NFS traffic.

    Are you saying that the F2F counters are wrong? I clear out the counters before each test by pushing the policy. The first output I showed was UDP NFS, the second one TCP NFS.

    There is no NAT between the NFS client and the NFS server, just pure routing. No blades are running on the gateways other than the FW blade.

    I am confused.
    You are looking at counters which are designed to give a general idea of the traffic flow and trying to draw very specific conclusions.

    I'd suggest looking at the live path throughput numbers by running cpview on the active gateway and selecting Advanced...Network...Path while doing a big NFS transfer. You will be able to see live pps and Mbps per path (SXL/PXL/F2F), along with a breakdown by protocol (TCP/UDP/Other). This will give you the specifics you want in real time; cpview also has a history mode if you want to look at prior data, but the granularity will be less than running it in real time. Press C to take screenshots.
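
    For reference, a minimal sketch of both modes (the -t flag for history mode is from memory; verify against cpview's built-in help on your JHFA level):

    # Live mode: navigate to Advanced...Network...Path during the NFS copy
    cpview
    # History mode: browse previously collected samples (coarser granularity)
    cpview -t
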
    --
    My book "Max Power: Check Point Firewall Performance Optimization"
    now available via http://maxpowerfirewalls.com.

  7. #7
    Join Date
    2006-09-26
    Posts
    2,827
    Rep Power
    13

    Default Re: R77.30 with JHFA 216 and NFS TCP/UDP

    Quote Originally Posted by ShadowPeak.com View Post
    You are looking at counters which are designed to give a general idea of the traffic flow and trying to draw very specific conclusions.

    I'd suggest looking at the live path throughput numbers by running cpview on the active gateway and selecting Advanced...Network...Path while doing a big NFS transfer. You will be able to see live pps and Mbps per path (SXL/PXL/F2F), along with a breakdown by protocol (TCP/UDP/Other). This will give you the specifics you want in real time; cpview also has a history mode if you want to look at prior data, but the granularity will be less than running it in real time. Press C to take screenshots.
    I opened a TAC case with Check Point, and over WebEx the TAC engineer agreed with me that this is "not" normal, but he does not have enough experience to troubleshoot further. He is asking other engineers for assistance.

    This is a lab firewall with no traffic other than the NFS traffic; therefore, the counters should be accurate as well.

    I also used "watch -d -n 1 'fwaccel stats -p'", and Check Point TAC confirmed that the UDP violation counters are incrementing, meaning that the traffic should NOT be accelerated, which matches "fwaccel stats -s"; and yet in "fwaccel conns | grep 10.109.114.70" the connections are shown as accelerated.
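
    For anyone repeating this, the exact command; -d highlights the counters that changed between one-second samples:

    # Refresh the SecureXL violation (F2F reason) counters every second
    watch -d -n 1 'fwaccel stats -p'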

    Waiting for Check Point to call me back.

