25. Appliance Z-RAID with SSF (Storage and Service Failover)
based on Comstar iSCSI targets/initiators (under development, not ready)
Z-RAID vs RAID-Z
ZFS with Raid-Z is a perfect solution when you want protection against bitrot with end-to-end data
checksums and a crash-resistant CopyOnWrite filesystem with snaps and versioning. ZFS Raid-Z protects
against disk failures. ZFS replication adds an ultrafast method for async backups, even when files are open.
ZFS with Z-RAID is the next step, as it adds realtime sync between independent Storage Appliances, where ZFS
protects against the failure of a whole Appliance. It also adds availability for NFS and SMB services, as it tolerates
a full Storage Server failure with a manual or automatic Pool and NFS/SMB service failover over a shared virtual IP.
RAID-Z
Traditionally you build a ZFS pool from RAID-Z or Raid-1 vdevs made of disks. To be protected against a disaster
like a fire or a lightning strike, you do backups, and snaps give daily access to previous versions of deleted or
modified files. In case of a disaster, you can restore data and re-establish services based on the last backup state.
Main problem: there is a delay between your last data state and the backup state. You can reduce the gap with
ZFS async replication, but the problem remains that a backup is never fully up to date. An additional critical point
is open files. As ZFS replication is based on snaps, the last state of a replication is like a sudden poweroff, which
means that files (or VMs in a virtualisation environment) may be in a corrupted state on the backup.
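For illustration, one incremental replication step with this snapshot-based approach looks roughly like the
following; pool, dataset and host names (tank/data, backup/data, backupserver) are only examples:

  # take a new snap and send only the delta since the previous snap to the backup system
  zfs snapshot tank/data@repl-002
  zfs send -i tank/data@repl-001 tank/data@repl-002 | ssh backupserver zfs receive -F backup/data

Whatever was written after repl-002 was taken is missing on the backup, and open files are only crash-consistent.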
Another problem is the time needed to re-establish services like NFS or SMB after a server crash. If you need to
be back online in a short time, you use a second backup system that is prepared to take over services based on
the last backup state. As NFS and SMB on Solarish systems are integrated into the OS/kernel/ZFS, with the
Windows security identifier (SID) stored as an extended ZFS attribute (Solarish SMB), this is really trouble-free.
Even in a Windows AD environment, you only need to import the pool and take over the IP of the former server,
and your clients can access their data with all AD permission settings intact, without any additional settings to
care about.
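A manual takeover on the backup system then needs little more than the following; the pool name, interface
and address are placeholders for your own setup:

  # import the pool with the last replicated state on the backup server
  zpool import -f backuppool
  # add the service IP that the clients use, here on interface e1000g0
  ipadm create-addr -T static -a 192.168.1.100/24 e1000g0/failover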
Z-RAID SSF
But what about a solution that keeps all data on the main server and the backup server truly in sync? This
would mean that you do not use async technologies like backup or replication but sync technologies like
mirroring or Raid-Z between your storageservers, where ZFS protects against a whole server or pool failure. This
is Z-RAID SSF, where you build a ZFS pool not from disks but from independent storageservers, each with a local
datapool, and a manual or automatic Z-POOL and Service Failover in case of problems with the primary server.
What is required to build a ZFS Z-RAID SSF over Appliances on a network?
First you need blockdevices, as a ZFS pool is built on blockdevices. A disk is a blockdevice, but you can also use
files (like on Lofi-encrypted ZFS pools) or FC/iSCSI LUNs, and the last option is the solution here. You know
Comstar and iSCSI as a proven technology to create Targets, which you may already have used for clients like
ESXi, OSX or Windows, where Comstar offers LUNs over the network like a local disk.
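On each appliance the target side follows the usual Comstar workflow; a rough sketch with example names
(the LU GUID is printed by create-lu):

  svcadm enable -r svc:/network/iscsi/target:default
  # create a zvol on the local datapool and publish it as a LUN
  zfs create -V 500G datapool/zraid-lu1
  stmfadm create-lu /dev/zvol/rdsk/datapool/zraid-lu1
  stmfadm add-view <lu-guid>
  itadm create-target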
What about using these network LUNs not for other systems but for ZFS itself? You only need a piece of
software called an Initiator that connects to network LUNs and uses them like local disks. If you did not know,
this is included in Solarish Comstar as well. When you enable the Initiator with a Target Discovery method, it
will detect all LUNs from the selected Targets and offer them like local disks.
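On the appliance that should own the Z-RAID pool, the Initiator side could look like this; the discovery
addresses and the resulting disk names are only examples:

  svcadm enable svc:/network/iscsi/initiator:default
  # announce the other appliances as targets and enable SendTargets discovery
  iscsiadm add discovery-address 192.168.10.11:3260
  iscsiadm add discovery-address 192.168.10.12:3260
  iscsiadm modify discovery --sendtargets enable
  devfsadm -i iscsi
  # the LUNs now appear like local disks and can form a pool, e.g. a mirror over two appliances
  zpool create zraid mirror <lun-disk-1> <lun-disk-2>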
Comstar and iSCSI are proven technology, so the question is: why have we not used this in the past?
The answer is quite simple: it is not fast enough on an average 1G network, and you need some iSCSI knowledge
to set it up. The additional layers „Disks -> ZFS Pool -> Target -> Network -> Initiator -> ZFS Pool“ can also slow
things down compared to a local pool, where the datapath is „Disks -> ZFS Pool“.