The DataStore 4.4
A data store is a device type used for writing savesets directly to one or more configured storage locations in the file system. Data store configuration consists of specifying the data store capacity and two watermarks: the high watermark (HWM) and the low watermark (LWM).
The default data store type is Path. This data store type is used for configuring all storage locations, except for the SEP sesam archiving and deduplication module, which uses a dedicated SEP Si3 deduplication store. As of SEP sesam version 4.4.3, the SEP EasyArchive data store and the FDS deduplication store are no longer supported.
SEP sesam uses a data store instead of a conventional media pool to define the storage repository. The data is still primarily backed up to a media pool; however, a data store is used underneath to save data to dynamically managed data areas, including disk backups.
Depending on the SEP sesam version, the data store window might look slightly different, but the configuration remains the same across versions. As of version 4.4.3 Grolar, a few additional options are available: the new data store type NetApp Snap Store, S3 credentials, Si3 State, and Si3 Repair Area. For details, see How to manage a DataStore.
Data store concept
The difference between a conventional media pool, typically used for backing up directly to tape, and a data store is that the data store's storage space is defined directly in the drive by using the operating system's partition functions. The data store space is therefore managed at partition level.
Another difference is that when a data store becomes full during backup, the saveset is not split up; instead, the backup is aborted. Consider this when specifying the data store capacity. For details, see Data store calculation recommendations below.
|Only one data store should be used per hard disk partition. Even though several data stores can be set up on one partition, such a configuration is not recommended, because each data store reads the values of the others when checking partition allocation. Consequently, coexisting data stores obstruct each other.|
As shown in the illustration below, a media pool still points to a drive group. However, there is now an additional level of one or more data stores between the media pool and the drives. The connection between a data store and the related drive is static.
Data store capacity
Data store configuration consists of specifying the data store capacity and watermarks. The data store capacity is the space reserved for the SEP sesam data store and, optionally, for non-SEP sesam data that may be stored on the same volume as the SEP sesam data store. If the data store is shared with non-SEP sesam data, a special SEP sesam storage license is required.
When specifying the capacity value, a dedicated partition must have enough free space. From SEP sesam version 4.4 onwards, the method for calculating the required disk space is:
space occupied by Sesam + free disk space = DS capacity
where DS capacity is the configured capacity value in SEP sesam's data store configuration. For examples on calculating a data store capacity, see How do I calculate the data store capacity.
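The capacity formula above can be illustrated with a minimal sketch (the function name and sample values are illustrative, not part of SEP sesam):

```python
def datastore_capacity(sesam_used_gb: float, free_disk_gb: float) -> float:
    """Capacity value to configure for a data store (SEP sesam >= 4.4):
    space already occupied by SEP sesam plus the partition's free space."""
    return sesam_used_gb + free_disk_gb

# Example: 500 GB already written by SEP sesam, 1500 GB still free
# on the partition -> configure a capacity of 2000 GB.
capacity = datastore_capacity(500, 1500)
print(capacity)
```

Note that non-SEP sesam data already on the volume is not part of this sum; it simply reduces the free disk space term.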
More than one data store is required in a media pool only if the media pool stores data on several disk partitions; in this case, all the drives of the media pool's data stores must be part of the same drive group. This ensures that the SEP sesam queue manager distributes the backups in this media pool across all data stores (balancing). For details on drive groups, see Drives.
Watermarks and purge
A watermark is a parameter used for data store configuration (managing disk space usage).
- A high watermark (HWM) is the upper value for the used disk space on the data store. It defines the available storage space for backup and migration. When this value is reached, a data store purge process is started for all (EOL-free) savesets. For details, see Managing EOL.
- A low watermark (LWM) is the lower value for the used disk space on the Path data store and NetApp Snap Store. It defines how much storage space is available for savesets with expired EOL. If LWM is set to 0, the EOL-free savesets are removed from the data store at the next purge. If both watermarks (HWM and LWM) are used, EOL-free savesets are only purged when the HWM is exceeded.
The oldest free savesets are deleted first. Purging continues until the low watermark is reached. When setting the HWM parameter, ensure that sufficient space remains between the data store capacity and the HWM for a complete full backup. For details, see Data store calculation recommendations below.
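The watermark behaviour can be sketched as a simple model (a minimal illustration, not SEP sesam code; the function name and tuple layout are assumptions):

```python
def purge(savesets, used_gb, hwm_gb, lwm_gb):
    """Purge sketch: once used space reaches the HWM, delete the oldest
    EOL-free (expired) savesets until usage drops to the LWM.
    `savesets` is a list of (age_rank, size_gb, eol_expired) tuples."""
    if used_gb < hwm_gb:
        return savesets, used_gb        # HWM not reached: nothing is purged
    kept = []
    for saveset in sorted(savesets):    # sorted by age_rank => oldest first
        age_rank, size_gb, eol_expired = saveset
        if eol_expired and used_gb > lwm_gb:
            used_gb -= size_gb          # delete this EOL-free saveset
        else:
            kept.append(saveset)        # still under EOL, or LWM reached
    return kept, used_gb
```

For example, with 300 GB used, HWM 250 GB, and LWM 150 GB, the two oldest expired savesets of 100 GB each are purged and the unexpired one is kept.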
|In previous versions of SEP sesam (Tigon and earlier), if the HWM was set and exceeded, no new backups could be started, while running backups were allowed to finish. Purging is done until the low watermark is reached (if set). This behaviour has changed with SEP sesam Tigon V2: if the HWM is set, exceeding it only issues an information message and no longer prevents backups from being started.|
Events that trigger the data store purge are:
- Sharing the drive of the data store after a backup
- Starting purge manually in GUI
The manual execution of the data store purge process deletes obsolete (EOL-free) savesets. Another option is to clean up a data store (≥ 4.4.3 Tigon V2), as described below.
Clean up orphaned savesets
As of 4.4.3 Tigon V2, you can manually remove orphaned savesets from the data stores by using the new Clean up option in the Data Stores content pane, thus releasing the space that might be occupied by orphaned savesets. This is useful when a data store seems to be inaccessible, its storage space is occupied, or the SEP sesam space check shows non-sesam data.
- In the Main selection -> Components, click Data stores to display the data store contents frame.
- In the content pane menu, click Clean up and select the data store (and the relevant drive number) for which you want to free up space by removing orphaned savesets.
- Click Clean Up.
- You can check the status of the clean up action in the data store properties under the Actions tab.
Data store calculation recommendations
- Data store volume sizing and capacity usage should be managed at partition level. It is recommended that only SEP sesam data is stored on the respective volume.
- The data store should be at least three times (3x) the maximum full backup size of the planned backup to allow the watermarks to work automatically and dynamically.
- It may be necessary to scale up the data store beyond 3x the maximum size when a longer hold-back time is stipulated or very large savesets are to be stored. In this case, sufficient space should be allocated:
- between capacity and HWM for a complete full backup
- between HWM and LWM for another full backup
- inside the LWM area for a third full backup
- A virtual drive can handle up to 124 simultaneous backups (channels) for storing data to a SEP sesam data store (depending on the SEP sesam version). Only when it becomes necessary to back up more than 124 channels (SEP sesam server Premium Edition) should another drive be added to the data store.
- Because a backup is aborted rather than split up when a data store becomes full, the correlation between the size of the data store and the size of the biggest backup task must be determined carefully. Take the example of a data store defined at 3 TB whose biggest saveset is 2 TB. With EOL=1, a data store three times (3x) the maximum full backup size may be too small for the watermarks to work properly. In such a case, it is recommended to scale up the data store size.
- When a media pool requires more than one data store, all the data stores must be connected to the same SEP sesam Device Server (IP host). SEP sesam does not currently support network-distributed data stores being served by a single media pool.
- When using more than one data store, either only negative or only positive values can be used for capacity and HWM/LWM. SEP sesam does not support mixing negative and positive values at the same time.
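The 3x sizing rule from the recommendations above can be expressed as a small calculation (a sketch under the stated layout; the function name and the exact placement of the watermarks are illustrative, not SEP sesam defaults):

```python
def recommended_sizing(max_full_backup_gb: float):
    """Sizing sketch per the 3x rule: reserve room for one full backup
    between capacity and HWM, one between HWM and LWM, and one inside
    the LWM area. Returns (capacity, hwm, lwm) in GB."""
    capacity = 3 * max_full_backup_gb
    hwm = capacity - max_full_backup_gb   # one full backup above the HWM
    lwm = hwm - max_full_backup_gb        # another between HWM and LWM
    return capacity, hwm, lwm

# Example: a 2 TB maximum full backup -> 6 TB data store,
# HWM at 4 TB, LWM at 2 TB.
print(recommended_sizing(2000))
```

With longer hold-back times or very large savesets, these values should be scaled up accordingly, as noted above.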
Data store properties
The following are the properties of a data store:
- The size (in GB) of the partition available for backups.
- The size (in GB) of the data store space occupied by SEP sesam.
- High Water Mark
- HWM threshold – the upper value (in GB/GiB) for the used disk space on the data store. When this value is reached, a data store purge process is started for all EOL-free savesets, thus freeing up data store capacity.
- Low Water Mark
- LWM threshold – the value (in GB/GiB) for savesets with expired EOL. If LWM is set to 0 (default), the EOL-free savesets are removed from the data store at the next purge. For the deduplication store, the LWM is not editable.
- Maximum available space (in GB) on the partition as reported by the operating system.
- Total used space (in GB) on the partition.
- Available disk space (in GB) for SEP sesam.
Selecting a data store and clicking the Savesets tab opens a list of all savesets with their details. You can change the EOL of an individual saveset, adjust the backup-related EOL, and lock or unlock individual savesets.
- Saveset EOL
- The column Saveset EOL enables you to change the EOL for each individual saveset stored on the respective data store. You can extend or reduce its retention time. If the adjusted saveset is part of a backup chain, the whole chain is affected. See EOL-related backup chain dependencies.
- Backup EOL
- The column Backup EOL enables you to adjust EOL for all savesets containing the same data. This backup-related EOL is applied to all savesets with the same data, including migrated and replicated savesets.
For example, adjusting the EOL of a migrated saveset from 2.12.2016 to 12.12.2017 changes the EOL of all related backup data, i.e., the original backup, the replicated backup, and, if the saveset is part of a backup chain, all backups in that chain.
|By default, SEP sesam automatically deletes failed backups after 3 days to release storage space. If you want to keep such backups longer, you can manually extend the backup EOL (expiration date) of a particular saveset. For details, see Manually extending EOL.|
- EOL-related backup chain dependencies
- You can extend or reduce the retention period for an individual saveset or a backup-related saveset, as described above. Keep in mind that increasing the EOL of a DIFF or INCR saveset also increases the EOL of all backups it depends on (the FULL and other DIFF and INCR savesets) in order to retain the backup data. This keeps the backup chain readily available for restore. Conversely, decreasing the EOL of a DIFF or INCR saveset to a date in the past results in a warning message prompting you to confirm setting the whole backup chain to an already passed time. Setting the EOL of DIFF or INCR savesets to an expired time results in the complete backup chain being purged and overwritten.
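The EOL propagation along a backup chain can be sketched as follows (a minimal model, assuming a simple id -> {eol, depends_on} mapping with integer EOLs standing in for dates; none of these names come from SEP sesam):

```python
def extend_chain_eol(chain, saveset_id, new_eol):
    """Sketch: raising the EOL of a DIFF/INCR saveset also raises the EOL
    of every saveset it depends on (the FULL and earlier DIFF/INCR), so
    the whole chain stays restorable. EOLs are only extended, never cut."""
    sid = saveset_id
    while sid is not None:              # walk down the dependency chain
        if chain[sid]['eol'] < new_eol:
            chain[sid]['eol'] = new_eol
        sid = chain[sid]['depends_on']
    return chain

# Example chain: INCR depends on DIFF, DIFF depends on FULL.
chain = {
    'full': {'eol': 10, 'depends_on': None},
    'diff': {'eol': 12, 'depends_on': 'full'},
    'incr': {'eol': 12, 'depends_on': 'diff'},
}
extend_chain_eol(chain, 'incr', 20)     # extends FULL and DIFF as well
```

Reducing an EOL would take the opposite direction (affecting the savesets that depend on the adjusted one), which is why SEP sesam asks for confirmation before expiring a whole chain.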
|Each saveset can be deleted when the following conditions are met: