CPUG: The Check Point User Group

Resources for the Check Point Community, by the Check Point Community.



Thread: very slow inter-vlan communication via checkpoint

  1. #1
    Join Date
    2014-10-03
    Posts
    30
    Rep Power
    0

    Default very slow inter-vlan communication via checkpoint

    I have two VLANs, DMZ and local; routing between these two VLANs is defined on the Check Point (R77.10 Gaia). But if I copy some large files it takes a long time: I am only getting 40 to 60 Mb/s. Local communication within the same VLAN gets 700-800 Mb/s.

    Is it the Check Point routing that limits the bandwidth, or does the NAT defined between the local and DMZ VLANs cause some delay?

    Does anyone have any clue about this?

  2. #2
    Join Date
    2007-06-04
    Posts
    3,314
    Rep Power
    18

    Default Re: very slow inter-vlan communication via checkpoint

    Are the interfaces showing any errors at the OS level?

    I would believe the switches are gigabit capable; are the interfaces actually showing up as 1Gb full duplex?

    Is there a reason you need to be doing NAT from local to DMZ? When people insist on accessing a DMZ server from the local network via its NATted IP rather than sorting out the DNS, I have seen performance hits.

    I normally suggest that when going from the local network to the DMZ you use the DMZ server's physical IP and don't NAT.
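
    A quick way to confirm whether the local-to-DMZ traffic is actually being translated is fw monitor; this is only a minimal sketch, and the 10.10.10.50 address is a placeholder for one of your own hosts:

    # Run from expert mode on the gateway while a slow copy is in progress.
    # fw monitor shows each packet at four inspection points: i (pre-inbound),
    # I (post-inbound), o (pre-outbound), O (post-outbound). If the source or
    # destination address changes between the inbound and outbound points,
    # NAT is being applied to that connection.
    fw monitor -e "accept src=10.10.10.50 or dst=10.10.10.50;"
    # Stop the capture with Ctrl-C after a few packets.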

  3. #3
    Join Date
    2014-10-03
    Posts
    30
    Rep Power
    0

    Default Re: very slow inter-vlan communication via checkpoint

    Hi,

    The interfaces are not showing any errors or packet drops. All the switches are gigabit switches, the core switches are 10G, and the Check Point is also using 10G interfaces. Actually there is no need to use NAT between the DMZ and local VLANs, so I can give it a try by removing the NAT between the VLANs.

  4. #4
    Join Date
    2009-04-30
    Location
    Colorado, USA
    Posts
    2,252
    Rep Power
    15

    Default Re: very slow inter-vlan communication via checkpoint

    Quote Originally Posted by Opera View Post
    Hi,

    The interfaces are not showing any errors or packet drops. All the switches are gigabit switches, the core switches are 10G, and the Check Point is also using 10G interfaces. Actually there is no need to use NAT between the DMZ and local VLANs, so I can give it a try by removing the NAT between the VLANs.
    Are you sure about that? No RX-OVR, RX-ERR, or RX-DRP nonzero counters in the output of "netstat -ni" run from the gateway? No errors on the switchports the firewall is attached to? That sure sounds like your problem here.

    Check that your interfaces are actually linked at 1Gbps with "ethtool ethX".

    Try running "top" then hit 1 to show utilization on all cores individually and start a transfer. Any of the cores maxing CPU?

    Also what is the appliance model of your gateway or is it some kind of open hardware?
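
    A minimal sketch of running those checks from expert mode on the gateway (eth4/eth5 are the interface names that appear later in this thread; substitute your own):

    # Per-interface error/drop counters; rerun during a slow transfer and compare.
    netstat -ni

    # Negotiated speed/duplex and link state of the relevant interfaces.
    ethtool eth4
    ethtool eth5

    # Per-core CPU usage: run top, press "1" to expand the per-CPU lines,
    # then start the slow file copy and watch for any single core hitting 100%.
    top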

  5. #5
    Join Date
    2011-08-02
    Location
    http://spikefishsolutions.com
    Posts
    1,668
    Rep Power
    11

    Default Re: very slow inter-vlan communication via checkpoint

    Quote Originally Posted by ShadowPeak.com View Post
    Are you sure about that? No RX-OVR, RX-ERR, or RX-DRP nonzero counters in the output of "netstat -ni" run from the gateway? No errors on the switchports the firewall is attached to? That sure sounds like your problem here.

    Check that your interfaces are actually linked at 1Gbps with "ethtool ethX".

    Try running "top" then hit 1 to show utilization on all cores individually and start a transfer. Any of the cores maxing CPU?

    Also what is the appliance model of your gateway or is it some kind of open hardware?
    If only netstat could slice bread as well.

    I would expect to see interface errors as well. Mostly RX-OVR and RX-DRP.

  6. #6
    Join Date
    2014-10-03
    Posts
    30
    Rep Power
    0

    Default Re: very slow inter-vlan communication via checkpoint

    Hi,
    The hardware is an open server, an HP Gen8 server, running Check Point R77.10 (Gaia). I have run top and none of the cores are showing high utilization.

    Here is the output of netstat -ni. eth4 is the DMZ interface with some VLANs defined, and eth5 is the local LAN interface with VLANs defined on it. The main problem is between eth4.21 and eth5.10 (VLAN 21 and VLAN 10).
    Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
    eth1 1500 0 89464018 0 0 0 680076219 0 0 0 BMRU
    eth3 1500 0 5247617706 0 15396 0 456771038 0 0 0 BMRU
    eth4 1500 0 32586762622 263522 7562 0 26618260776 0 0 0 BMRU
    eth4.20 1500 0 3454390909 0 0 0 2625230906 0 0 0 BMRU
    eth4.21 1500 0 25343034894 0 0 0 16575191655 0 0 0 BMRU
    eth4.23 1500 0 14205205 0 0 0 30403466 0 0 0 BMRU
    eth4.29 1500 0 3772564299 0 0 0 7387435498 0 0 0 BMRU
    eth5 1500 0 22883581065 439 579 0 33354499718 0 0 0 BMRU
    eth5.10 1500 0 22557149232 0 0 0 32629027023 0 0 0 BMRU
    eth5.11 1500 0 14690441 0 0 0 20422968 0 0 0 BMRU
    eth5.60 1500 0 81435 0 0 0 101250 0 0 0 BMRU
    eth5.70 1500 0 1819457 0 0 0 628910 0 0 0 BMRU
    eth5.87 1500 0 15350689 0 0 0 15785403 0 0 0 BMRU
    eth5.95 1500 0 291921920 0 0 0 688534612 0 0 0 BMRU
    lo 16436 0 94350731 0 0 0 94350731 0 0 0 LRU

  7. #7
    Join Date
    2007-06-04
    Posts
    3,314
    Rep Power
    18

    Default Re: very slow inter-vlan communication via checkpoint

    I take it the RX-ERR and RX-DRP counters on eth5 aren't incrementing.

    sk102713 ("Very low throughput in 10GB interfaces using IXGBE driver") might be worth looking at if the 10Gb interfaces are using this driver.
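
    A minimal way to check which driver and version those 10Gb interfaces are actually using (interface names as used elsewhere in this thread):

    # Reports the driver name (e.g. ixgbe) plus driver and firmware versions.
    ethtool -i eth4
    ethtool -i eth5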

  8. #8
    Join Date
    2005-11-25
    Location
    United States, Southeast
    Posts
    857
    Rep Power
    16

    Default Re: very slow inter-vlan communication via checkpoint

    R77.10 and earlier have a crappy 10G driver: ixgbe 3.1.17.

    Upgrade to R77.20. It upgrades the driver (increases performance) and gives you ring buffer settings that survive a reboot (lowers CPU requirements).
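
    For reference, a minimal sketch of inspecting and raising the RX ring buffer with ethtool; whether the setting survives a reboot depends on the release, as noted above, and 4096 is only an illustrative value:

    # Show current and maximum RX/TX ring sizes for the interface.
    ethtool -g eth4

    # Raise the RX ring to 4096 descriptors; takes effect immediately and may
    # briefly reset the link on some drivers.
    ethtool -G eth4 rx 4096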

  9. #9
    Join Date
    2014-10-03
    Posts
    30
    Rep Power
    0

    Default Re: very slow inter-vlan communication via checkpoint

    OK, then I will first try upgrading from R77.10 to R77.20 and see how it goes. I will update the issue after the upgrade.

  10. #10
    Join Date
    2014-10-03
    Posts
    30
    Rep Power
    0

    Default Re: very slow inter-vlan communication via checkpoint

    I have upgraded the Check Point from R77.10 to R77.20 but am still getting the same results: very low bandwidth utilization.

  11. #11
    Join Date
    2009-04-30
    Location
    Colorado, USA
    Posts
    2,252
    Rep Power
    15

    Default Re: very slow inter-vlan communication via checkpoint

    Quote Originally Posted by Opera View Post
    I have upgraded the Check Point from R77.10 to R77.20 but am still getting the same results: very low bandwidth utilization.
    Post fresh "netstat -ni" output please. Also post output of "fw ctl affinity -l -a".
    Last edited by ShadowPeak.com; 2014-11-11 at 11:41.

  12. #12
    Join Date
    2011-08-02
    Location
    http://spikefishsolutions.com
    Posts
    1,668
    Rep Power
    11

    Default Re: very slow inter-vlan communication via checkpoint

    Quote Originally Posted by Opera View Post
    I have upgraded the Check Point from R77.10 to R77.20 but am still getting the same results: very low bandwidth utilization.
    Are you using interface bonding at all, and if so, what kind (LACP, round robin, etc.)? Can you show a new netstat -ni (ShadowPeak just beat me to the punch)? It's not clear if the RX drops are going up.

    Also, can you run top, hit 1, then do a file transfer that is slow and show the top output? Just grab the whole page and copy/paste.

    If you want to save this view in top, hit "w" before quitting top.

    Also, just to be clear: the two VLANs are on the same Check Point physical interface, which is a 10 gig interface, correct?

    Also also (second thought?): after you stop the slow file transfer, wait about 5 minutes (I pulled that number out of the air) and show a new top like I said before (hit 1 before getting the output). This should show what the load looks like with and without the file transfer. Maybe this will shed some light on what is going on.

    Also also also: is there any chance the 40-60 is in bytes while the 700-800 is in bits? Just throwing it out there (a quick conversion is sketched below).

    Also also also also (it's a record): I'm assuming this is a cluster. Does the same issue show up no matter which firewall is the active member? In other words, have you tried failing over to see if that changes anything?
    Last edited by jflemingeds; 2014-11-11 at 11:48.
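
    On the bytes-versus-bits point, a quick worked conversion (assuming the 40-60 figure came from a file-copy dialog reporting megabytes per second): 40 MB/s × 8 = 320 Mb/s and 60 MB/s × 8 = 480 Mb/s, which would narrow the gap to the 700-800 Mb/s intra-VLAN figure considerably, though not close it.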

  13. #13
    Join Date
    2005-11-25
    Location
    United States, Southeast
    Posts
    857
    Rep Power
    16

    Default Re: very slow inter-vlan communication via checkpoint

    R77 has a new tool called cpview.

    Use it to get some visibility into the overall performance of the box: number of connection setups per second (CPS), etc.
    Feel free to post some of those numbers.
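
    For anyone unfamiliar with it, cpview is run straight from the gateway CLI; this is only a rough pointer, since the exact screen names vary a little between releases:

    # Interactive, curses-style performance monitor; navigate the tabs with the
    # arrow keys and look at per-core CPU load, per-interface throughput, and
    # drop/connection-rate counters while the slow transfer is running.
    cpview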

  14. #14
    Join Date
    2014-10-03
    Posts
    30
    Rep Power
    0

    Default Re: very slow inter-vlan communication via checkpoint

    Dear ShadowPeak.com, here is the output of "netstat -ni" and "fw ctl affinity -l -a"

    Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
    eth1 1500 0 3857055 0 0 0 40478208 0 0 0 BMRU
    eth3 1500 0 564142551 0 2811 0 52309032 0 0 0 BMRU
    eth4 1500 0 2770302635 2418 2919 0 2200456647 0 0 0 BMRU
    eth4.20 1500 0 216171987 0 0 0 156419002 0 0 0 BMRU
    eth4.21 1500 0 2235635982 0 0 0 1381042104 0 0 0 BMRU
    eth4.23 1500 0 560553 0 0 0 701858 0 0 0 BMRU
    eth4.29 1500 0 317878074 0 0 0 662296708 0 0 0 BMRU
    eth5 1500 0 1967879078 3 1941 0 3044548343 0 0 0 BMRU
    eth5.10 1500 0 1953442450 0 0 0 3001199836 0 0 0 BMRU
    eth5.11 1500 0 593474 0 0 0 587768 0 0 0 BMRU
    eth5.60 1500 0 2 0 0 0 7774 0 0 0 BMRU
    eth5.70 1500 0 2 0 0 0 7774 0 0 0 BMRU
    eth5.87 1500 0 1041486 0 0 0 1604220 0 0 0 BMRU
    eth5.95 1500 0 12746354 0 0 0 41143403 0 0 0 BMRU
    lo 16436 0 5450879 0 0 0 5450879 0 0 0 LRU



    [Expert@gate02:0]# fw ctl affinity -l -a
    eth1: CPU 0
    eth3: CPU 0
    eth4: CPU 0
    eth5: CPU 0
    fw_0: CPU 1
    fw_1: CPU 0
    dtlsd: CPU all
    pdpd: CPU all
    fwpushd: CPU all
    dtpsd: CPU all
    fwd: CPU all
    usrchkd: CPU all
    in.acapd: CPU all
    vpnd: CPU all
    in.geod: CPU all
    mpdaemon: CPU all
    in.asessiond: CPU all
    pepd: CPU all
    rad: CPU all
    cprid: CPU all
    cpd: CPU all
    The current license permits the use of CPUs 0, 1 only.

  15. #15
    Join Date
    2014-10-03
    Posts
    30
    Rep Power
    0

    Default Re: very slow inter-vlan communication via checkpoint

    Quote Originally Posted by alienbaby View Post
    R77 has a new tool called cpview.

    Use it to get some visibility into the overall performance of the box: number of connection setups per second (CPS), etc.
    Feel free to post some of those numbers.
    I am attaching three files with the output from cpview.

    [Attached cpview screenshots: cpview1.PNG, cpview2.PNG, cpview3.PNG]

  16. #16
    Join Date
    2009-04-30
    Location
    Colorado, USA
    Posts
    2,252
    Rep Power
    15

    Default Re: very slow inter-vlan communication via checkpoint

    OK, now we are getting somewhere. Having only 2 cores (whether due to license limitation or hardware restriction) leaves you kind of between a rock and a hard place when dealing with CoreXL. At the moment CPU 0 is handling all NIC IRQ processing as well as an INSPECT driver instance; CPU 1 is handling just an INSPECT driver instance. When dealing with 2 cores, there are two config possibilities:

    1) 2 INSPECT drivers (where you are now)
    2) 1 INSPECT driver on CPU 1, CPU 0 exclusively handling IRQs and no longer cache thrashing (CoreXL off)


    Which combination will yield the best performance? I've seen both scenarios provide the best performance in different situations so all I can say is try them in order and see if it helps your performance issue.

    On your netstat -ni output we are seeing some RX-DRPs on eth4 and eth5 which is expected due to CPU 0 getting overwhelmed. The overall RX-DRP rate is well below the 0.1% threshold where I generally consider increasing the ring buffer but I'd not advise doing that just yet. Trying scenario #2 above should help a lot.

    However what is not expected is that you are taking RX-ERRs on eth4 and eth5; typically this indicates a cabling or duplex mismatch issue. Please provide the output of the following for diagnosis:

    ethtool eth4
    ethtool eth5
    ethtool -S eth4
    ethtool -S eth5

    Once we can see what is causing those RX-ERRs, that should help us quite a bit; even if you were to increase your core-count license, I doubt you will get much throughput improvement until these lower-level network issues are dealt with. (A command-level sketch of trying scenario #2 follows below.)
    Last edited by ShadowPeak.com; 2015-03-14 at 11:56.
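
    A hedged sketch of how scenario #2 could be tried; cpconfig is menu-driven and its exact wording varies by release, and the affinity commands below assume the interface names from this thread:

    # 1. Change the number of CoreXL firewall instances (or disable CoreXL) via
    #    the "Check Point CoreXL" menu entry, then reboot for it to take effect.
    cpconfig

    # 2. After the reboot, confirm what ended up where.
    fw ctl affinity -l -a

    # 3. If the NIC interrupts are not already alone on CPU 0, interface affinity
    #    can be pinned manually (historically this form applies when SecureXL is
    #    off; with SecureXL on, "sim affinity" controls interface affinity).
    fw ctl affinity -s -i eth4 0
    fw ctl affinity -s -i eth5 0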

  17. #17
    Join Date
    2014-10-03
    Posts
    30
    Rep Power
    0

    Default Re: very slow inter-vlan communication via checkpoint

    [Expert@gate02:0]# ethtool eth4
    Settings for eth4:
    Supported ports: [ FIBRE ]
    Supported link modes: 10000baseT/Full
    Supports auto-negotiation: No
    Advertised link modes: Not reported
    Advertised auto-negotiation: No
    Speed: 10000Mb/s
    Duplex: Full
    Port: FIBRE
    PHYAD: 1
    Transceiver: external
    Auto-negotiation: off
    Supports Wake-on: g
    Wake-on: d
    Link detected: yes

    ethtool eth5
    Settings for eth5:
    Supported ports: [ FIBRE ]
    Supported link modes: 10000baseT/Full
    Supports auto-negotiation: No
    Advertised link modes: Not reported
    Advertised auto-negotiation: No
    Speed: 10000Mb/s
    Duplex: Full
    Port: FIBRE
    PHYAD: 0
    Transceiver: external
    Auto-negotiation: off
    Supports Wake-on: g
    Wake-on: d
    Link detected: yes

    [Expert@gate02:0]# ethtool -S eth4
    NIC statistics:
    rx_packets: 3570774570
    tx_packets: 2915949935
    rx_bytes: 4219083514231
    tx_bytes: 1901543189566
    rx_errors: 3136
    tx_errors: 0
    rx_dropped: 2919
    tx_dropped: 0
    be_tx_rate: 31
    be_tx_reqs: 2915953338
    be_tx_wrbs: 1536939380
    be_tx_stops: 0
    be_tx_events: 931419237
    be_tx_compl: 2915953337
    be_ipv6_ext_hdr_tx_drop: 0
    rx_unicast_frames: 3561088958
    rx_multicast_frames: 19050
    rx_broadcast_frames: 9688514
    rx_crc_errors: 0
    rx_alignment_symbol_errors: 0
    rx_pause_frames: 0
    rx_control_frames: 0
    rx_in_range_errors: 0
    rx_out_range_errors: 0
    rx_frame_too_long: 0
    rx_address_match_errors: 201647740
    rx_vlan_mismatch: 0
    rx_dropped_too_small: 0
    rx_dropped_too_short: 0
    rx_dropped_header_too_small: 0
    rx_dropped_tcp_length: 10
    rx_dropped_runt: 0
    rx_fifo_overflow: 0
    rx_input_fifo_overflow: 0
    rx_ip_checksum_errs: 0
    rx_tcp_checksum_errs: 2675
    rx_udp_checksum_errs: 451
    rx_non_rss_packets: 0
    rx_ipv4_packets: 3569208308
    rx_ipv6_packets: 0
    rx_switched_unicast_packets: 0
    rx_switched_multicast_packets: 0
    rx_switched_broadcast_packets: 0
    tx_unicastframes: 2906466859
    tx_multicastframes: 9502099
    tx_broadcastframes: 9483048
    tx_pauseframes: 0
    tx_controlframes: 0
    rx_drops_no_pbuf: 0
    rx_drops_no_txpb: 0
    rx_drops_no_erx_descr: 0
    rx_drops_no_tpre_descr: 0
    rx_drops_too_many_frags: 0
    rx_drops_invalid_ring: 0
    forwarded_packets: 0
    rx_drops_mtu: 0
    port0_jabber_events: 0
    port1_jabber_events: 0
    eth_red_drops: 0
    be_on_die_temperature: 54
    rxq0: rx_bytes: 4219085152821
    rxq0: rx_pkts: 3570776481
    rxq0: rx_rate: 39
    rxq0: rx_polls: 866031992
    rxq0: rx_events: 0
    rxq0: rx_compl: 3570776481
    rxq0: rx_mcast_pkts: 3
    rxq0: rx_post_fail: 0
    rxq0: rx_drops_no_fragments: 2919

    [Expert@gate02:0]# ethtool -S eth5
    NIC statistics:
    rx_packets: 2598979660
    tx_packets: 3984117525
    rx_bytes: 1580718451159
    tx_bytes: 4919162265778
    rx_errors: 3
    tx_errors: 0
    rx_dropped: 1941
    tx_dropped: 0
    be_tx_rate: 36
    be_tx_reqs: 3984118983
    be_tx_wrbs: 3673270670
    be_tx_stops: 0
    be_tx_events: 971519846
    be_tx_compl: 3984118983
    be_ipv6_ext_hdr_tx_drop: 0
    rx_unicast_frames: 2551819467
    rx_multicast_frames: 19624
    rx_broadcast_frames: 47161554
    rx_crc_errors: 0
    rx_alignment_symbol_errors: 0
    rx_pause_frames: 0
    rx_control_frames: 0
    rx_in_range_errors: 0
    rx_out_range_errors: 0
    rx_frame_too_long: 0
    rx_address_match_errors: 164716490
    rx_vlan_mismatch: 0
    rx_dropped_too_small: 0
    rx_dropped_too_short: 0
    rx_dropped_header_too_small: 0
    rx_dropped_tcp_length: 0
    rx_dropped_runt: 0
    rx_fifo_overflow: 0
    rx_input_fifo_overflow: 0
    rx_ip_checksum_errs: 0
    rx_tcp_checksum_errs: 3
    rx_udp_checksum_errs: 0
    rx_non_rss_packets: 0
    rx_ipv4_packets: 2576723114
    rx_ipv6_packets: 574
    rx_switched_unicast_packets: 0
    rx_switched_multicast_packets: 0
    rx_switched_broadcast_packets: 0
    tx_unicastframes: 3925341143
    tx_multicastframes: 58795417
    tx_broadcastframes: 8926646
    tx_pauseframes: 0
    tx_controlframes: 0
    rx_drops_no_pbuf: 0
    rx_drops_no_txpb: 0
    rx_drops_no_erx_descr: 0
    rx_drops_no_tpre_descr: 0
    rx_drops_too_many_frags: 0
    rx_drops_invalid_ring: 0
    forwarded_packets: 0
    rx_drops_mtu: 0
    port0_jabber_events: 0
    port1_jabber_events: 0
    eth_red_drops: 0
    be_on_die_temperature: 54
    rxq0: rx_bytes: 1580719887202
    rxq0: rx_pkts: 2598981378
    rxq0: rx_rate: 21
    rxq0: rx_polls: 1128754613
    rxq0: rx_events: 0
    rxq0: rx_compl: 2598981378
    rxq0: rx_mcast_pkts: 576
    rxq0: rx_post_fail: 0
    rxq0: rx_drops_no_fragments: 1941

  18. #18
    Join Date
    2006-09-26
    Posts
    3,199
    Rep Power
    18

    Default Re: very slow inter-vlan communication via checkpoint

    Quote Originally Posted by ShadowPeak.com View Post
    OK now we are getting somewhere. Having only 2 cores (whether due to license limitation or hardware restriction) is kind of between a rock and a hard place when dealing with CoreXL. At the moment CPU 0 is handling all NIC IRQ processing as well as an INSPECT driver instance. CPU 1 is handling just an INSPECT driver instance. When dealing with 2 cores, there are three config possibilities:

    1) 2 INSPECT drivers (where you are now)
    2) 1 INSPECT driver on CPU 1, CPU 0 exclusively handling IRQs and no longer cache thrashing
    3) CoreXL completely off (saves the overhead of CoreXL)

    Which combination will yield the best performance? I've seen all 3 scenarios provide the best performance in different situations so all I can say is try them in order and see if it helps your performance issue.

    On your netstat -ni output we are seeing some RX-DRPs on eth4 and eth5 which is expected due to CPU 0 getting overwhelmed. The overall RX-DRP rate is well below the 0.1% threshold where I generally consider increasing the ring buffer but I'd not advise doing that just yet. Trying scenario #2 above should help a lot.

    However what is not expected is that you are taking RX-ERRs on eth4 and eth5; typically this indicates a cabling or duplex mismatch issue. Please provide the output of the following for diagnosis:

    ethtool eth4
    ethtool eth5
    ethtool -S eth4
    ethtool -S eth5

    Once we can see what is causing those RX-ERRs that should help us quite a bit; even if you were to increase your #cores license I doubt you will get much throughput improvement until these lower level network issues are dealt with.
    FWIW, I am running Gaia R75.47 on a 203 license (2 cores) with CoreXL, on an IBM x3650 (seven years old) with a 10Gig Intel DA-520 NIC, and I am getting about 2.5 Gbit/second throughput between VLAN interfaces using iperf (probably limited by the roughly 2.5 Gbps bus throughput of the server).

    Therefore, I don't think CoreXL is the issue. Maybe R77.10 is the issue.
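
    For comparison, a minimal iperf sketch for measuring raw TCP throughput across the gateway between the two VLANs (classic iperf2 options; the address 192.0.2.10 is a placeholder for the DMZ-side host):

    # On a host in the DMZ VLAN (server side):
    iperf -s

    # On a host in the local VLAN (client side), pointing at the DMZ host:
    iperf -c 192.0.2.10 -t 30 -P 4    # 30-second test, 4 parallel streams

    # Repeat between two hosts in the same VLAN to reproduce the 700-800 Mb/s baseline.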

  19. #19
    Join Date
    2009-04-30
    Location
    Colorado, USA
    Posts
    2,252
    Rep Power
    15

    Default Re: very slow inter-vlan communication via checkpoint

    Quote Originally Posted by Opera View Post


    [Expert@gate02:0]# ethtool -S eth4
    NIC statistics:
    rx_packets: 3570774570
    rx_errors: 3136
    rx_dropped: 2919
    rx_dropped_tcp_length: 10
    rx_tcp_checksum_errs: 2675
    rx_udp_checksum_errs: 451
    rxq0: rx_drops_no_fragments: 2919

    [Expert@gate02:0]# ethtool -S eth5
    NIC statistics:
    rx_packets: 2598979660
    rx_errors: 3
    rx_dropped: 1941
    rxq0: rx_drops_no_fragments: 1941
    Please provide the output of "ethtool -k ethX" for both interfaces; if you have TCP Segmentation Offload (TSO) turned on, it will seriously impact performance when certain blades are enabled. You are suffering some RX ring buffer drops, but they are way, way below the 0.1% threshold where you should think about increasing the ring buffer size.

    Edit: Also please provide the output of "ethtool -i ethX". Based on some of the counter prefixes, I have a sinking feeling there are Broadcom NIC cards present; you will be facing an uphill battle getting anything resembling decent performance out of those.
    Last edited by ShadowPeak.com; 2014-11-13 at 14:58.
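
    A minimal sketch of checking and, for testing, disabling TSO with ethtool; offload names vary slightly by driver, and turning offloads off shifts work from the NIC to the CPU, so treat it as an experiment rather than a definitive fix:

    # List the current offload settings (TSO, GSO, GRO, checksum offloads, ...).
    ethtool -k eth4
    ethtool -k eth5

    # Disable TCP Segmentation Offload on one interface while re-testing throughput.
    ethtool -K eth4 tso off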

  20. #20
    Join Date
    2005-11-25
    Location
    United States, Southeast
    Posts
    857
    Rep Power
    16

    Default Re: very slow inter-vlan communication via checkpoint

    1. What features do you have enabled on this gateway/cluster (e.g. IPS, App Control, URL Filtering, DLP, etc.)?
    2. Is the gateway and/or cluster in 64-bit mode?
    3. Is this Gaia?
    4. Can you give the output of the command 'fw ctl affinity -l -v -r'?
    5. Have you increased the ring buffers on these interfaces?
    6. What are the subnet masks on the affected interfaces? This is in reference to the high rx_address_match_errors counter. (A sketch of commands for gathering this information follows below.)
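
    A minimal sketch of commands that could gather most of the information asked for above; output formats vary by version, and enabled_blades is assumed to be present on this R77.20 build:

    # 1. Software blades enabled on this gateway.
    enabled_blades

    # 2/3. Kernel architecture (x86_64 = 64-bit) and Gaia version.
    uname -a
    clish -c "show version all"

    # 4. Verbose, per-CPU view of interface and kernel instance affinity.
    fw ctl affinity -l -v -r

    # 5. Current versus maximum ring buffer sizes.
    ethtool -g eth4
    ethtool -g eth5

    # 6. Addresses and subnet masks on the affected VLAN interfaces.
    ifconfig eth4.21
    ifconfig eth5.10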


