Glossary
COW
Copy-On-Write. In a Copy-On-Write (COW) implementation, snapshots protect old
data by copying it to a separate snapshot space when a new write arrives that would
overwrite it.
After the old data has been copied to the new location, the data in the old location is
overwritten with the new data. Until the next snapshot is taken, all subsequent writes that
arrive for the same location overwrite the new data, without touching the old data in the
snapshot space. In copy-on-write snapshots, therefore, the first write to each location
after a snapshot also incurs an additional read (of the old data) and an additional write
(of the old data to the snapshot space).
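The first-write behavior described above can be sketched as follows. This is a minimal illustration, not the product's implementation; the names (volume, snap_space, snap_table) and the use of dictionaries are assumptions made for clarity.

```python
volume = {}      # logical address -> current data on the live volume
snap_space = {}  # snapshot-space slot -> preserved old data
snap_table = {}  # logical address -> snapshot-space slot (the lookup table)

def write(addr, data):
    """Handle a write arriving after a snapshot has been taken."""
    if addr in volume and addr not in snap_table:
        # First overwrite of this address since the snapshot: this spawns
        # an extra read of the old data and an extra write to snapshot space.
        old = volume[addr]            # additional read
        slot = len(snap_space)        # next free snapshot-space slot
        snap_space[slot] = old        # additional write
        snap_table[addr] = slot
    # Every write, first or subsequent, lands on the live volume.
    volume[addr] = data

def read_snapshot(addr):
    """Read the snapshot view: preserved copy if overwritten, else live data."""
    if addr in snap_table:
        return snap_space[snap_table[addr]]
    return volume.get(addr)
```

Note that only the first overwrite of a location pays the copy cost; later writes to the same location go straight to the live volume, leaving the preserved copy untouched.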
The snapshot space may be organized in different ways depending on the implementation.
Regardless of the implementation, however, it is necessary to maintain a table that keeps
track of the logical addresses of data that have been copied to the snapshot space. Often,
this table is simply a hash table, keyed by logical address, that maps the location of the
old data to that of the new data. This table must be consulted whenever a snapshot is
mounted and read, and it should preferably be held in the system's main memory to
allow rapid reads from the snapshot.
Since the quantity of old data that needs to be protected may be arbitrarily large, these
tables can grow large and unwieldy if they are maintained on a per-sector level. For this
reason, most implementations choose not to track data on a per-sector level; instead, a
larger granularity is chosen, such as 64kB. In this way, data is always copied out in 64kB
portions, regardless of the size of the new write request. Although vendors may refer to
this granularity variously as the chunk size, physical extent, snap granularity, copy size,
and so on, the implementation is the same.
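The chunk-granularity copy-out can be illustrated with a short calculation: any write, however small, forces every chunk it touches to be copied out on first overwrite. The 64 kB value and the function name below are illustrative assumptions, not product defaults.

```python
CHUNK = 64 * 1024  # 64 kB granularity; example value, not a product default

def chunks_touched(offset, length):
    """Return the chunk indices covered by a write of `length` bytes at
    byte `offset`; each chunk must be copied to the snapshot space in full
    on its first overwrite, regardless of how small the write is."""
    first = offset // CHUNK
    last = (offset + length - 1) // CHUNK
    return list(range(first, last + 1))
```

For example, a 512-byte write at offset 0 still copies out the entire first 64 kB chunk, while a 2000-byte write straddling a chunk boundary copies out two full chunks.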
DAS
Direct Attached Storage. As the name implies, this is any storage device that is
directly attached to the server. DAS can be a JBOD (Just a Bunch Of Disks) unit, hard
disk drives internal to the server, external USB hard disk drives, and so on. It is suitable
for small installations. Its limitations include inefficient provisioning, storage that must
be managed through the server, capacity expansion that usually means adding another
server (duplicating overhead information), and scaling that requires downtime.
Data
Distinct pieces of information, usually formatted in a particular way.