Sync is kind of a pain. People want to run a cable directly between the firewalls, but this can delay or even prevent proper automatic failover. When one firewall reboots, the other sees its sync interface go down and has to probe its remaining interfaces to determine whether it is the broken member. If link probing fails on any interface, that member may refuse to take over.
A single switch doesn't work very well either: when you reboot the switch (say, to apply updates), both members see their sync interfaces go down at the same time, so both start probing interfaces. Depending on the probe results, this can cause active/active contention or take both members down.
Attempting to solve these problems, I arrived at this config:
Code:
add bonding group 0
add bonding group 0 interface eth0
add bonding group 0 interface eth1
set bonding group 0 mode round-robin
I connected the interfaces to a pair of dumb switches (but did not connect the switches to each other) and set up my cluster using bond0 for sync. It seemed to work perfectly. I rebooted the switches one at a time, and I never appeared to lose sync.
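For anyone wanting to verify the same setup, the Linux bonding driver under Gaia exposes per-slave link state in /proc/net/bonding/bond0 (readable from expert mode). A minimal sketch of checking it; the sample output below is illustrative, not captured from a real box:

```shell
# On a live system you would read the real file:
#   cat /proc/net/bonding/bond0
# Here we parse an illustrative sample to show what to look for.
bond_status='Bonding Mode: load balancing (round-robin)
Slave Interface: eth0
MII Status: up
Slave Interface: eth1
MII Status: down'

# Count slaves whose MII (link) status is up.
up_count=$(printf '%s\n' "$bond_status" | grep -c '^MII Status: up')
echo "slaves up: $up_count"
```

With both switches healthy you would expect every slave to report "MII Status: up"; during a switch reboot one slave should drop while the bond (and sync) stays alive on the other.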
This seems pretty much ideal. There's no need on the switch side for special multi-chassis link aggregation systems like Cisco's vPC. You can reboot either switch for updates without breaking communication between the firewall members, and you can reboot either firewall member without causing loss of link on the other.
Am I missing anything? Does anyone know of a reason we shouldn't use a bond like this for sync?