24.5 Async highspeed / network replication (Solarish and Linux)
- Async replication between appliances (near realtime) with remote appliance management and monitoring
- Based on ZFS send/receive and snapshots
- After an initial full transfer, only modified data blocks are transferred
- High-speed transport via netcat, buffered on Solaris (see the sketch after this list)
- (unencrypted transfer, intended for secure LANs)
- Replication always pulls data. You only need a key on the target server, not on the sources
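The transport is conceptually a ZFS send stream piped through netcat. The following is a minimal manual sketch of what the appliance automates; the dataset names, the listener port 9000 and the nc option syntax are assumptions (netcat flags differ between implementations), not napp-it's actual commands.

   # On the source server: create a snapshot and offer the full send stream on a netcat listener
   # (port 9000 is an assumed example; the transfer is unencrypted, so use it on secure LANs only)
   zfs snapshot tank/data@repli_1
   zfs send tank/data@repli_1 | nc -l -p 9000

   # On the target server: pull the stream and receive it into a local dataset
   nc sourcehost 9000 | zfs receive backup/data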
How to set up
- You need a licence key on the target server only (you can request evaluation keys)
- Register the key: copy/paste the whole key line into menu extension - register, for example:
replicate h:server2 - 20.06.2012::VcqmhqsmVsdcnetqsmVsTTDVsK
- Group your appliances with menu extension - appliance group.
Click on ++ add to add members to the group
- Create a replication job with menu Jobs - replicate - create replication job
- Start the job manually or timer-based
- After the initial transfer (which can take some time), all following transfers copy only modified blocks
- You can set up transfers down to every minute (near realtime)
- If one of your servers is on an insecure network like the Internet: build a secure VPN tunnel between the appliances
- If you use a firewall with deep packet inspection: this may block netcat; set a firewall rule to allow port 81 and
the replication ports (see the example below)
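As a hedged example, on a Linux appliance behind iptables such a rule could look like the following. Port 81 is the web port mentioned above; the range 9000:9010 is purely a placeholder for whatever ports your replication jobs actually use.

   # allow the napp-it web interface (port 81)
   iptables -A INPUT -p tcp --dport 81 -j ACCEPT
   # allow the netcat replication ports (9000-9010 is a placeholder range, adjust to your jobs)
   iptables -A INPUT -p tcp --dport 9000:9010 -j ACCEPT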
Use it for
High-speed in-house replication for backup or near-realtime failover/redundancy on secure networks
External replication over VPN links with fixed IPs and a common DNS server (or manual host entries, example below)
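If you rely on manual host entries instead of a common DNS server, the /etc/hosts entries on each appliance could look like this sketch (host names and addresses are placeholders):

   # /etc/hosts on both appliances
   192.168.10.11   server1
   192.168.10.12   server2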
How replication works
- On the initial run, it creates a source snap jobid.._nr_1 and transfers the complete ZFS dataset over a netcat
high-speed connection. When the transfer has completed successfully, a target snap jobid.._nr_1 is created.
A replication can be recursive (e.g. a whole pool with all filesystems). Use this for a pool transfer only.
- The next replication run is incremental and based on this snap pair.
A new source snap jobid.._nr_2 with the modified data blocks is created and transferred.
When the transfer has completed successfully, a new target snap jobid.._nr_2 is created.
And so on. Only modified data blocks are transferred, which provides near-realtime syncs when run every
few minutes (see the sketch below). If a replication fails for whatever reason, the source snap number is higher than the target snap number.
This does not matter. The source snap is recreated on the next incremental run.
Incremental replications can be recursive, but you should avoid that, as zfs send does not account for newly
created or modified filesystems. In such a case, an incremental recursive replication run fails with an error.
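A hedged manual sketch of one incremental run as described above (snapshot names and the port are illustrative, not napp-it's real naming; the appliance performs these steps for you):

   # On the source: create the next snapshot and send only the blocks changed since the last common snap
   zfs snapshot tank/data@repli_2
   zfs send -i tank/data@repli_1 tank/data@repli_2 | nc -l -p 9000

   # On the target: pull and receive the incremental stream on top of the existing dataset
   nc sourcehost 9000 | zfs receive backup/data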
Differences from other sync methods like rsync
Both rsync and ZFS replication are methods to synchronise data between two servers or filesystems/folders.
While rsync can sync any folders, ZFS replication can only sync ZFS filesystems. The main difference is that rsync
scans all source and target folders on every run and transfers modified files based on a comparison. Rsync must
therefore traverse the whole data structure, which makes it really slow, especially with many small files, and rsync
cannot sync open files. The advantage is that you can run it at any time without any special restrictions.
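To illustrate the difference, compare a typical rsync run with an incremental ZFS send (paths, hosts and snapshot names are examples only; ssh is used here just for illustration, the appliance uses netcat):

   # rsync walks every source and target folder on each run to find modified files
   rsync -a --delete /tank/data/ backupserver:/backup/data/

   # ZFS only sends the blocks that changed between two snapshots, no directory scan needed
   zfs send -i tank/data@snap_1 tank/data@snap_2 | ssh backupserver zfs receive backup/data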