Class SegmentNodeStoreConfiguration
- java.lang.Object
-
- org.silverpeas.core.jcr.impl.oak.configuration.NodeStoreConfiguration
-
- org.silverpeas.core.jcr.impl.oak.configuration.SegmentNodeStoreConfiguration
-
public class SegmentNodeStoreConfiguration extends NodeStoreConfiguration
Configuration parameters of a segment storage. A segment storage, unlike the document storage, can be accessed by only a single entry point. It is dedicated to standalone applications requiring maximal performance.
Oak Segment Tar is an Oak storage backend that stores content as various types of records within larger segments. Segments themselves are collected within tar files along with further auxiliary information. A journal is used to track the latest state of the repository. It is based on the following key principles:
- Immutability. Segments are immutable, which makes it easy to cache frequently accessed segments. This also makes it less likely for programming or system errors to cause repository inconsistencies, and simplifies features like backups or master-slave clustering.
- Compactness. The formatting of records is optimized for size to reduce IO costs and to fit as much content in caches as possible.
- Locality. Segments are written so that related records, like a node and its immediate children, usually end up stored in the same segment. This makes tree traversals very fast and avoids most cache misses for typical clients that access more than one related node per session.
The content tree and all its revisions are stored in a collection of immutable records within segments. Each segment is identified by a UUID and typically contains a continuous subset of the content tree, for example a node with its properties and closest child nodes. Some segments might also be used to store commonly occurring property values or other shared data. Segments can be up to 256KiB in size. See Segments and records for a detailed description of the segments and records.
Segments are collectively stored in tar files and check-summed to ensure their integrity. Tar files also contain an index of the tar segments, the graph of segment references of all segments it contains and an index of all external binaries referenced from the segments in the tar file. See Structure of TAR files for details.
The journal is a special, atomically updated file that records the state of the repository as a sequence of references to successive root node records. For crash resiliency the journal is always only updated with a new reference once the referenced record has been flushed to disk. The most recent root node reference stored in the journal is used as the starting point for garbage collection. All content currently visible to clients must be accessible through that reference.
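The journal principle described above can be sketched in a few lines of Java. This is a purely illustrative model, not the Oak implementation: `JournalSketch` and its members are hypothetical names, and the flush callback stands in for the actual segment persistence.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.UUID;

/**
 * Illustrative sketch of the journal principle: an append-only sequence of
 * root node references, where a new reference is appended only after the
 * referenced record has been flushed to disk (crash resiliency).
 */
public class JournalSketch {

    private final Deque<UUID> rootRefs = new ArrayDeque<>();

    /** Appends a new root reference once the record it points to is durable. */
    public void commit(UUID rootRecordId, Runnable flushToDisk) {
        flushToDisk.run();              // flush first, for crash resiliency
        rootRefs.addLast(rootRecordId); // only then record the new root
    }

    /** The most recent root: the GC starting point and the state clients see. */
    public UUID head() {
        return rootRefs.peekLast();
    }

    public static void main(String[] args) {
        JournalSketch journal = new JournalSketch();
        UUID root = UUID.randomUUID();
        journal.commit(root, () -> { /* pretend to fsync the segment here */ });
        System.out.println("head = " + journal.head());
    }
}
```

The ordering in `commit` captures the key guarantee: a crash between the flush and the append loses at most the newest reference, never yields a journal entry pointing at unflushed data.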
- Author:
- mmoquillon
-
-
Nested Class Summary
static class SegmentNodeStoreConfiguration.DefaultValues
Default values of the different segment node storage configuration parameters.
-
Method Summary
int getBackupFileAgeThreshold()
The minimum age, in days, a backup file must reach to be deleted during the compaction process.
String getCompactionCRON()
Gets the CRON expression used to schedule the online compaction process.
int getCompactionForceTimeout()
Gets the amount of time, in seconds, the online compaction process is allowed to exclusively lock the store.
int getCompactionMemoryThreshold()
Gets the percentage of heap memory that should always be free while compaction runs.
long getCompactionProgressLog()
Enables compaction progress logging at each set of compacted nodes.
int getCompactionRetryCount()
Gets the number of commit attempts the online compaction process should try before giving up.
long getCompactionSizeDeltaEstimation()
Gets the increase in size of the Node Store (in bytes) since the last successful compaction that will trigger another execution of the compaction phase.
int getNodeDeduplicationCacheSize()
Gets the maximum size of the node deduplication cache in number of items.
int getSegmentCacheSize()
Gets the maximum size of the segment cache in MB.
String getStoragePath()
Gets the path on the file system of the directory into which the repository content will be stored.
int getStringCacheSize()
Gets the maximum size of the strings cache in MB.
int getStringDeduplicationCacheSize()
Gets the maximum size of the string deduplication cache in number of items.
int getTarMaxSize()
Gets the maximum size of TAR files on disk in MB.
int getTemplateCacheSize()
Gets the maximum size of the template cache in MB.
int getTemplateDeduplicationCacheSize()
Gets the maximum size of the template deduplication cache in number of items.
boolean isCompactionDisableEstimation()
Indicates whether the estimation phase of the online compaction process is disabled.
boolean isPauseCompaction()
Determines whether the online compaction process is paused.
-
Methods inherited from class org.silverpeas.core.jcr.impl.oak.configuration.NodeStoreConfiguration
getBoolean, getInteger, getList, getLong, getString
-
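The inherited accessors suggest the configuration is a typed lookup over key/value properties with per-key defaults. Below is a minimal sketch under that assumption, using `java.util.Properties`; `ConfigSketch` is an illustrative name and the actual `NodeStoreConfiguration` implementation may read its values quite differently.

```java
import java.util.Properties;

/** Illustrative typed lookup over configuration properties, with defaults. */
public class ConfigSketch {

    private final Properties props;

    public ConfigSketch(Properties props) {
        this.props = props;
    }

    /** Returns the integer value of the key, or the default if the key is unset. */
    public int getInteger(String key, int defaultValue) {
        String value = props.getProperty(key);
        return value == null ? defaultValue : Integer.parseInt(value.trim());
    }

    /** Returns the boolean value of the key, or the default if the key is unset. */
    public boolean getBoolean(String key, boolean defaultValue) {
        String value = props.getProperty(key);
        return value == null ? defaultValue : Boolean.parseBoolean(value.trim());
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("segment.cache.size", "256"); // hypothetical key name
        ConfigSketch conf = new ConfigSketch(p);
        System.out.println(conf.getInteger("segment.cache.size", 512)); // 256
        System.out.println(conf.getInteger("tar.max.size", 256));       // 256 (falls back to the default)
    }
}
```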
Method Detail
-
getStoragePath
public String getStoragePath()
Gets the path on the file system of the directory into which the repository content will be stored. In this directory, the content will be split into several segments, each of them being a TAR archive. By default, if not set, the Segment Store persists its data into the subdirectory segmentstore of the JCR home folder. This property allows the user to either indicate another name for the subdirectory or simply another absolute path.
- Returns:
- the path of the directory containing the repository content on the filesystem. By default, the relative path "segmentstore" under the JCR home directory.
-
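The resolution rule described above (default subdirectory, relative name, or absolute path) can be sketched as follows. `StoragePathSketch` is an illustrative name, and the absolute-path results shown assume a Unix-like file system; the real Silverpeas code may resolve the path differently.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

/** Illustrative resolution of the segment store directory against the JCR home. */
public class StoragePathSketch {

    /**
     * Resolves the configured storage path: an absolute path is used as-is,
     * a relative one is resolved against the JCR home, and an unset value
     * falls back to the default "segmentstore" subdirectory.
     */
    public static Path resolve(String jcrHome, String configuredPath) {
        String value = (configuredPath == null || configuredPath.isEmpty())
            ? "segmentstore" : configuredPath;
        Path path = Paths.get(value);
        return path.isAbsolute() ? path : Paths.get(jcrHome).resolve(path);
    }

    public static void main(String[] args) {
        System.out.println(resolve("/var/jcr", null));          // /var/jcr/segmentstore
        System.out.println(resolve("/var/jcr", "store"));       // /var/jcr/store
        System.out.println(resolve("/var/jcr", "/data/store")); // /data/store
    }
}
```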
getTarMaxSize
public int getTarMaxSize()
Gets the maximum size of TAR files on disk in MB. The data are stored as various types of records within larger segments. Segments themselves are collected within tar files along with further auxiliary information.
- Returns:
- the maximum size of the tar files in MB.
-
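The size cap implies a simple rollover decision when segments are appended to the current tar file. This is a toy sketch of that idea only; Oak's actual tar packing and its auxiliary index/graph entries are more involved, and `TarRolloverSketch` is a hypothetical name.

```java
/**
 * Illustrative rollover decision: start a new tar file once writing the next
 * segment would push the current tar file past the configured maximum size.
 */
public class TarRolloverSketch {

    /** Returns true when appending segmentSize more bytes would exceed maxTarSize. */
    public static boolean needsNewTarFile(long currentTarSize, long segmentSize, long maxTarSize) {
        return currentTarSize + segmentSize > maxTarSize;
    }

    public static void main(String[] args) {
        long maxTarSizeBytes = 256L * 1024 * 1024; // e.g. a 256 MB limit
        System.out.println(needsNewTarFile(maxTarSizeBytes - 1024, 4096, maxTarSizeBytes)); // true
        System.out.println(needsNewTarFile(0, 4096, maxTarSizeBytes));                      // false
    }
}
```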
getSegmentCacheSize
public int getSegmentCacheSize()
Gets the maximum size of the segment cache in MB. The segment cache keeps a subset of the segments in memory and avoids performing I/O operations when those segments are used.
- Returns:
- the maximum size of the segment cache in MB
-
getStringCacheSize
public int getStringCacheSize()
Gets the maximum size of the strings cache in MB. The string cache keeps a subset of the string records in memory and avoids performing I/O operations when those strings are used.
- Returns:
- the maximum size of the strings cache in MB
-
getTemplateCacheSize
public int getTemplateCacheSize()
Gets the maximum size of the template cache in MB. The template cache keeps a subset of the template records in memory and avoids performing I/O operations when those templates are used.
- Returns:
- the maximum size of the template cache in MB
-
getStringDeduplicationCacheSize
public int getStringDeduplicationCacheSize()
Gets the maximum size of the string deduplication cache in number of items. The string deduplication cache tracks string records across different GC generations. It avoids duplicating a string record to the current GC generation if it was already duplicated in the past.
- Returns:
- the maximum size of the string deduplication cache in number of items
-
getTemplateDeduplicationCacheSize
public int getTemplateDeduplicationCacheSize()
Gets the maximum size of the template deduplication cache in number of items. The template deduplication cache tracks template records across different GC generations. It avoids duplicating a template record to the current GC generation if it was already duplicated in the past.
- Returns:
- the maximum size of the template deduplication cache in number of items
-
getNodeDeduplicationCacheSize
public int getNodeDeduplicationCacheSize()
Gets the maximum size of the node deduplication cache in number of items. The node deduplication cache tracks node records across different GC generations. It avoids duplicating a node record to the current generation if it was already duplicated in the past.
- Returns:
- the maximum size of the node deduplication cache in number of items
-
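All three deduplication caches above are bounded "have I already written this record?" lookups. A minimal sketch of such a bounded cache using `LinkedHashMap` in access order (LRU eviction) is shown below; the real Oak caches are more sophisticated (generation-aware, sharded), and `DedupCacheSketch` is an illustrative name.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Illustrative deduplication cache: maps a record's content key to the id of
 * an already-written record, bounded to a maximum number of items with LRU
 * eviction. Not the actual Oak cache implementation.
 */
public class DedupCacheSketch<K, V> extends LinkedHashMap<K, V> {

    private final int maxItems;

    public DedupCacheSketch(int maxItems) {
        super(16, 0.75f, true); // access order, so eviction is LRU
        this.maxItems = maxItems;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxItems; // drop the least recently used entry
    }

    public static void main(String[] args) {
        DedupCacheSketch<String, Long> cache = new DedupCacheSketch<>(2);
        cache.put("node-a", 1L);
        cache.put("node-b", 2L);
        cache.put("node-c", 3L); // evicts "node-a", the least recently used
        System.out.println(cache.size()); // 2
    }
}
```

A hit in such a cache lets the writer reuse the existing record id instead of copying the record into the current GC generation.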
isPauseCompaction
public boolean isPauseCompaction()
Determines whether the online compaction process is paused. If this property is true, neither the estimation phase nor the compaction phase of the online compaction process is executed.
- Returns:
- true if online compaction is paused and should not be executed; false otherwise.
-
getCompactionCRON
public String getCompactionCRON()
Gets the CRON expression used to schedule the online compaction process. The backup file deletion is also handled if isPauseCompaction() returns false. An empty value means the compaction process isn't scheduled.
- Returns:
- a string representing a CRON expression. Empty to deactivate the scheduling.
-
getBackupFileAgeThreshold
public int getBackupFileAgeThreshold()
The minimum age, in days, a backup file must reach to be deleted during the compaction process. A value of -1 means the backup file deletion process must not be performed. A value of 0 means all backup files are taken into account by the deletion process, whatever their age.
- Returns:
- an integer representing a number of days.
-
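The three cases of the threshold (-1, 0, positive) can be captured in one small predicate. This is a sketch of the documented semantics only, with illustrative names; it is not the Silverpeas deletion code.

```java
/**
 * Illustrative interpretation of the backup file age threshold:
 * -1 disables deletion entirely, 0 deletes regardless of age, and a
 * positive value deletes only files at least that many days old.
 */
public class BackupRetentionSketch {

    public static boolean shouldDelete(long fileAgeInDays, int ageThresholdInDays) {
        if (ageThresholdInDays < 0) {
            return false; // deletion process disabled
        }
        return fileAgeInDays >= ageThresholdInDays; // 0 matches every file
    }

    public static void main(String[] args) {
        System.out.println(shouldDelete(10, -1)); // false: deletion disabled
        System.out.println(shouldDelete(10, 0));  // true: any age qualifies
        System.out.println(shouldDelete(3, 7));   // false: too recent
        System.out.println(shouldDelete(8, 7));   // true: old enough
    }
}
```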
getCompactionRetryCount
public int getCompactionRetryCount()
Gets the number of commit attempts the online compaction process should try before giving up. This property determines how many times the online compaction process should try to merge the compacted repository state with the user-generated state produced by commits executed concurrently during compaction.
- Returns:
- the number of commit attempts of compaction.
-
getCompactionForceTimeout
public int getCompactionForceTimeout()
Gets the amount of time, in seconds, the online compaction process is allowed to exclusively lock the store. If this property is set to a positive value and the compaction process fails to commit the compacted state concurrently with other commits, it will acquire an exclusive lock on the Node Store. The exclusive lock prevents other commits from completing, giving the compaction process a chance to commit the compacted state. This property determines how long the compaction process is allowed to use the Node Store in exclusive mode. If this property is set to zero or a negative value, the compaction process will not acquire an exclusive lock on the Node Store and will simply give up if too many concurrent commits are detected.
- Returns:
- the amount of time, in seconds, the compaction process is allowed to lock the store.
-
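The force-timeout behavior maps naturally onto a bounded exclusive lock. The sketch below illustrates the idea with `java.util.concurrent.locks.ReentrantLock`; `ForceCompactSketch` and its method are hypothetical names, not the Oak locking code.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Illustrative force-timeout behavior: a positive timeout lets compaction
 * block other commits for at most that long; zero or a negative value means
 * it never takes the exclusive lock and gives up instead.
 */
public class ForceCompactSketch {

    private final ReentrantLock commitLock = new ReentrantLock();

    /** Tries to commit the compacted state under an exclusive, time-bounded lock. */
    public boolean tryForcedCompaction(int forceTimeoutSeconds) {
        if (forceTimeoutSeconds <= 0) {
            return false; // never lock out concurrent commits; just give up
        }
        try {
            if (commitLock.tryLock(forceTimeoutSeconds, TimeUnit.SECONDS)) {
                try {
                    // commit the compacted state while other commits are blocked
                    return true;
                } finally {
                    commitLock.unlock();
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return false;
    }

    public static void main(String[] args) {
        ForceCompactSketch sketch = new ForceCompactSketch();
        System.out.println(sketch.tryForcedCompaction(0)); // false: locking disabled
        System.out.println(sketch.tryForcedCompaction(1)); // true: lock was free
    }
}
```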
getCompactionSizeDeltaEstimation
public long getCompactionSizeDeltaEstimation()
Gets the increase in size of the Node Store (in bytes) since the last successful compaction that will trigger another execution of the compaction phase.
- Returns:
- the delta in size of the node store (in bytes) between two compactions.
-
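The trigger condition is a straightforward growth comparison, sketched below with illustrative names; the actual Oak estimation phase gathers these sizes from the store itself.

```java
/**
 * Illustrative estimation trigger: compaction runs only when the store grew
 * by more than the configured delta since the last successful compaction.
 */
public class EstimationSketch {

    public static boolean compactionNeeded(long currentSizeBytes,
                                           long sizeAfterLastCompaction,
                                           long sizeDeltaEstimationBytes) {
        return currentSizeBytes - sizeAfterLastCompaction > sizeDeltaEstimationBytes;
    }

    public static void main(String[] args) {
        long delta = 1_000_000_000L; // e.g. a 1 GB growth threshold
        System.out.println(compactionNeeded(4_500_000_000L, 3_000_000_000L, delta)); // true: grew 1.5 GB
        System.out.println(compactionNeeded(3_500_000_000L, 3_000_000_000L, delta)); // false: grew 0.5 GB
    }
}
```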
isCompactionDisableEstimation
public boolean isCompactionDisableEstimation()
Disables the estimation phase of the online compaction process. If this property is set to true, the estimation phase of the compaction process will never run, and compaction will always be triggered for any amount of garbage in the Node Store.
- Returns:
- true if the estimation phase for compaction isn't performed. False otherwise.
-
getCompactionMemoryThreshold
public int getCompactionMemoryThreshold()
Gets the percentage of heap memory that should always be free while compaction runs. If the available heap memory falls below the specified percentage, compaction will not be started, or it will be aborted if it is already running.
- Returns:
- the threshold of heap memory in percentage to keep free for compaction.
-
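The guard amounts to comparing the free-heap percentage against the threshold. The sketch below makes the check a pure function so the numbers can come from `Runtime.getRuntime()` in real use; `MemoryGuardSketch` is an illustrative name, not the Oak monitor.

```java
/**
 * Illustrative memory guard: compaction is allowed only while the percentage
 * of free heap stays at or above the configured threshold.
 */
public class MemoryGuardSketch {

    public static boolean compactionAllowed(long freeHeapBytes, long maxHeapBytes,
                                            int thresholdPercent) {
        long freePercent = freeHeapBytes * 100 / maxHeapBytes;
        return freePercent >= thresholdPercent;
    }

    public static void main(String[] args) {
        // In a real check, the byte counts would come from Runtime.getRuntime().
        System.out.println(compactionAllowed(200, 1000, 15)); // true: 20% free
        System.out.println(compactionAllowed(100, 1000, 15)); // false: 10% free
    }
}
```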
getCompactionProgressLog
public long getCompactionProgressLog()
Enables compaction progress logging at each set of compacted nodes. A value of -1 disables the log.
- Returns:
- the number of compacted nodes for logging the compaction progress. -1 means the progress logging is disabled.
-
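The logging interval semantics can be sketched as a small predicate: emit a progress line at every multiple of the configured node count, and never when the value is -1. `ProgressLogSketch` is an illustrative name; the real Oak progress monitor works differently internally.

```java
/**
 * Illustrative progress logging: emit a message every `progressLog` compacted
 * nodes; a value of -1 (or any non-positive value) disables logging entirely.
 */
public class ProgressLogSketch {

    /** Returns true when a progress line should be emitted for this node count. */
    public static boolean shouldLog(long compactedNodes, long progressLog) {
        return progressLog > 0 && compactedNodes % progressLog == 0;
    }

    public static void main(String[] args) {
        long progressLog = 150_000;
        for (long nodes = 1; nodes <= 450_000; nodes++) {
            if (shouldLog(nodes, progressLog)) {
                System.out.println("compacted " + nodes + " nodes");
            }
        }
        System.out.println(shouldLog(150_000, -1)); // false: logging disabled
    }
}
```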