Using ZFS for File System Replication
/usr/lib/libc/libc_hwcap1.so.1
4.6G 3.7G 886M 82% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 1.4G 40K 1.4G 1% /tmp
swap 1.4G 28K 1.4G 1% /var/run
/dev/dsk/c0d0s7 26G 913M 25G 4% /export/home
scratchpool 16G 24K 16G 1% /scratchpool
The MySQL data is stored in a directory on /scratchpool. To help demonstrate some of the basic replication functionality, there are also other items stored in /scratchpool as well:
total 17
drwxr-xr-x 31 root bin 50 Jul 21 07:32 DTT/
drwxr-xr-x 4 root bin 5 Jul 21 07:32 SUNWmlib/
drwxr-xr-x 14 root sys 16 Nov 5 09:56 SUNWspro/
drwxrwxrwx 19 1000 1000 40 Nov 6 19:16 emacs-22.1/
To create a snapshot of the file system, you use zfs snapshot, specifying the pool and the snapshot name:
root-shell> zfs snapshot scratchpool@snap1
To list the snapshots already taken:
root-shell> zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
scratchpool@snap1 0 - 24.5K -
scratchpool@snap2 0 - 24.5K -
The snapshots themselves are stored within the file system metadata, and the space required to keep
them varies over time because of the way snapshots are created. The initial creation of a snapshot
is very quick: instead of copying all of the data and metadata required to hold the entire snapshot,
ZFS records only the point in time and the metadata of when the snapshot was created.
As more changes to the original file system are made, the size of the snapshot increases because
more space is required to keep the record of the old blocks. If you create lots of snapshots, say one
per day, and then delete the snapshots from earlier in the week, the size of the newer snapshots might
also increase, as the changes that make up the newer state have to be included in the more recent
snapshots, rather than being spread over the seven snapshots that make up the week.
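The daily cycle described above can be sketched with zfs snapshot and zfs destroy. This is a minimal illustration only: the snapshot names are examples, and running it requires root access on a system with the scratchpool pool from the earlier examples.

```shell
root-shell> zfs snapshot scratchpool@mon
root-shell> zfs snapshot scratchpool@tue
root-shell> zfs destroy scratchpool@mon
root-shell> zfs list -t snapshot
```

After the zfs destroy, any old blocks that the deleted snapshot referenced but that later snapshots still depend on are retained, so the reported size of the remaining snapshots may grow.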
You cannot directly back up the snapshots because they exist within the file system metadata rather
than as regular files. To get the snapshot into a format that you can copy to another file system, tape,
and so on, you use the zfs send command to create a stream version of the snapshot.
For example, to write the snapshot out to a file:
root-shell> zfs send scratchpool@snap1 >/backup/scratchpool-snap1
Or tape:
root-shell> zfs send scratchpool@snap1 >/dev/rmt/0
You can also write out the incremental changes between two snapshots using zfs send with the -i option:
root-shell> zfs send -i scratchpool@snap1 scratchpool@snap2 >/backup/scratchpool-changes
To recover a snapshot, you use zfs recv, which applies the snapshot information either to a new file
system or to an existing one.
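For example, a stream written out earlier with zfs send can be applied with zfs recv. This is a sketch only: the target file system name newpool/restored is illustrative and not from the examples above.

```shell
root-shell> zfs recv newpool/restored < /backup/scratchpool-snap1
```

You can also pipe zfs send directly into zfs recv, avoiding the intermediate file:

```shell
root-shell> zfs send scratchpool@snap1 | zfs recv newpool/restored
```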
15.5.1. Using ZFS for File System Replication
Because zfs send and zfs recv use streams to exchange data, you can use them to replicate
information from one system to another by combining zfs send, ssh, and zfs recv.
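A minimal sketch of that combination, assuming a remote host named slavehost with a pool named slavepool (both names are illustrative) and ssh access as root:

```shell
root-shell> zfs send scratchpool@snap1 | ssh root@slavehost zfs recv slavepool/scratchpool
```

Later updates can then ship only the incremental differences between two snapshots, which is typically much smaller than a full stream:

```shell
root-shell> zfs send -i scratchpool@snap1 scratchpool@snap2 | ssh root@slavehost zfs recv slavepool/scratchpool
```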