Funding: Project (IRT0725) supported by the Changjiang Innovative Group of the Ministry of Education, China
Abstract: Data deduplication, as a compression method, has been widely used in most backup systems to improve bandwidth and space efficiency. As the volume of data to be backed up explodes, the two main challenges in data deduplication are the CPU-intensive chunking and hashing work and the I/O-intensive disk-index access latency. Since the CPU-intensive work has been vastly parallelized and sped up by multi-core and many-core processors, I/O latency is likely to become the bottleneck in data deduplication. To alleviate the I/O latency challenge on multi-core systems, a multi-threaded deduplication (Multi-Dedup) architecture was proposed. The main idea of Multi-Dedup is to use parallel deduplication threads to hide the I/O latency. A prefix-based concurrent index was designed to maintain the internal consistency of the deduplication index with low synchronization overhead. In addition, a collision-less cache array was designed to preserve locality and similarity within the parallel threads. In experiments on various real-world datasets, Multi-Dedup achieves 3-5 times performance improvements when incorporated with the locality-based ChunkStash and local-similarity-based SiLo methods. Moreover, Multi-Dedup dramatically decreases the synchronization overhead and achieves 1.5-2 times performance improvements compared with traditional lock-based synchronization methods.
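To illustrate the prefix-based concurrent index idea in the abstract above, the following is a minimal Java sketch, not taken from the paper: the fingerprint index is partitioned by the first byte of the chunk hash, so parallel deduplication threads usually touch different partitions and synchronize only within one. The class and method names (PrefixIndex, isDuplicate), the SHA-1 fingerprints, and the use of ConcurrentHashMap partitions are assumptions made for illustration.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of a prefix-partitioned fingerprint index: the index is
// split into independent partitions keyed by the first byte of the chunk hash,
// so parallel deduplication threads mostly work on different partitions and
// only synchronize inside one partition. Not the paper's actual data structure.
public class PrefixIndex {
    private static final int PARTITIONS = 256;

    // One concurrent map per hash prefix; ConcurrentHashMap already provides
    // fine-grained synchronization inside each partition.
    @SuppressWarnings("unchecked")
    private final Map<String, Long>[] partitions = new ConcurrentHashMap[PARTITIONS];

    public PrefixIndex() {
        for (int i = 0; i < PARTITIONS; i++) {
            partitions[i] = new ConcurrentHashMap<>();
        }
    }

    // Returns true if the chunk was seen before; otherwise records its offset.
    public boolean isDuplicate(byte[] chunk, long storageOffset) {
        byte[] digest = sha1(chunk);
        int prefix = digest[0] & 0xFF;          // partition by first hash byte
        String fingerprint = toHex(digest);
        // putIfAbsent is atomic, so two threads cannot both claim "new chunk".
        return partitions[prefix].putIfAbsent(fingerprint, storageOffset) != null;
    }

    private static byte[] sha1(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-1").digest(data);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    private static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }
}
```

Because cryptographic hash prefixes are close to uniformly distributed, threads rarely contend on the same partition, which is consistent with the abstract's claim of low synchronization overhead.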
Funding: Sponsored by the National Defence SciTech Key Lab Foundation (51457040204BQ0102)
Abstract: To improve the real-time performance of real-time HLA (high level architecture) in applications with massive data communication volume, multi-threaded processing was adopted: a thread pool structure was introduced into the system, and different threads were assigned to handle corresponding message queues so as to respond to different message requests. Furthermore, an allocation strategy of semi-complete priority preemption was adopted, which reduces the thread-switching cost and processing burden in the system while ensuring that high-priority message requests are responded to in time, thus improving the system's overall performance. The design and experimental results indicate that the proposed method can greatly improve the real-time performance of HLA in distributed system applications.
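As a rough illustration of the thread-pool idea in this abstract, the sketch below queues message requests by priority and lets a fixed pool of worker threads drain the queue, so high-priority messages are served first without creating a new thread per request. The class and field names are illustrative; the actual HLA/RTI interfaces and the semi-complete preemption strategy of the paper are not modeled here.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.PriorityBlockingQueue;

// Illustrative sketch only: a priority queue of message requests drained by a
// fixed pool of worker threads, so urgent messages are handled first.
public class PriorityMessageDispatcher {

    // A message request tagged with a priority (lower value = more urgent).
    static class Message implements Comparable<Message> {
        final int priority;
        final Runnable handler;

        Message(int priority, Runnable handler) {
            this.priority = priority;
            this.handler = handler;
        }

        @Override
        public int compareTo(Message other) {
            return Integer.compare(this.priority, other.priority);
        }
    }

    private final PriorityBlockingQueue<Message> queue = new PriorityBlockingQueue<>();
    private final ExecutorService workers;

    PriorityMessageDispatcher(int threads) {
        workers = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            workers.submit(this::drainLoop);    // each worker keeps draining the queue
        }
    }

    // Enqueue a message request; worker threads pick it up in priority order.
    public void submit(int priority, Runnable handler) {
        queue.put(new Message(priority, handler));
    }

    private void drainLoop() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                queue.take().handler.run();     // highest-priority message first
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // exit cleanly on shutdown
        }
    }

    public void shutdown() {
        workers.shutdownNow();
    }
}
```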