Tag: LSFS

Boris Yurchenko

Dedupe or Not Dedupe: That is the Question!

Today I will deal with data deduplication analysis. Data deduplication is a technique that avoids storing repeated identical data blocks. During the deduplication process, unique data blocks, or byte patterns, are identified, analyzed, and written to the storage array. As this analysis runs continuously, incoming data blocks are compared to the patterns already stored. If a match is found, the system stores a small reference to the original data block instead of the block itself. In small environments this rarely matters, but in those with dozens or hundreds of VMs, the same patterns occur many times. Thus, data deduplication allows storing more information on the same physical storage volume than traditional data storage methods. One way to achieve this is StarWind LSFS (Log-Structured File System), which offers inline deduplication on LSFS-powered virtual storage devices.
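
A minimal Python sketch of the idea (an illustration under my own assumptions, not StarWind's actual implementation): blocks are identified by a content hash, each unique block is stored once, and duplicates become references to the block already stored. The block size and hash function are arbitrary choices.

    import hashlib

    BLOCK_SIZE = 4096  # illustrative block size; real systems vary

    class DedupStore:
        def __init__(self):
            self.blocks = {}   # hash -> unique block payload
            self.layout = []   # logical layout: list of block references

        def write(self, data):
            # split incoming data into fixed-size blocks
            for i in range(0, len(data), BLOCK_SIZE):
                block = data[i:i + BLOCK_SIZE]
                digest = hashlib.sha256(block).hexdigest()
                # store the block only if this byte pattern is new;
                # otherwise keep just a reference to the original
                if digest not in self.blocks:
                    self.blocks[digest] = block
                self.layout.append(digest)

        def read(self):
            # reassemble the data from the stored unique blocks
            return b"".join(self.blocks[h] for h in self.layout)

    store = DedupStore()
    store.write(b"A" * 8192 + b"B" * 4096 + b"A" * 4096)
    print(len(store.layout))  # 4 logical blocks written...
    print(len(store.blocks))  # ...but only 2 unique blocks stored

With many VMs sharing OS images, the ratio of logical to unique blocks is what drives the space savings.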

Didier Van Hoye

Using a Veeam off-host backup proxy server for backing up Windows Server 2016 Hyper-V hosts

Many years ago, I wrote a white paper on how to configure a Veeam off-host backup proxy server for backing up a Windows Server 2012 R2 Hyper-V cluster with a hardware VSS provider and Veeam Backup & Replication 7.0. It has aged well, and you can still use it as a guide to set it all up. In this article, however, I revisit the use of a hardware VSS provider, focusing on some changes in Windows Server 2016 and its use with Veeam Backup & Replication v9.5 or later. The information here is valid for any good hardware VSS provider, like the one VSAN from StarWind provides (see Do I need StarWind Hardware VSS provider?).

Alex Bykovskyi

Storage HA on the Cheap: Fixing Synology DiskStation Flaky Performance with StarWind Free. Part 2 (Log-Structured File System)

In this article, we continue testing the Synology DS916+ with VSAN from StarWind. Our main goal today is to improve the performance of Synology boxes on random patterns specifically. Randoms were chosen for a reason: SQL and OLTP workloads generate heavily randomized I/O, which puts huge stress on spindle arrays in particular, and the patterns we chose for today's benchmark are common for such environments. There are different approaches to handling these workload types, such as caching and tiering; ours is to build the environment on StarWind Log-Structured File System, which was created exactly for such environments. We will compare the results against those from Part 1 of our research.
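
For a sense of what such a benchmark exercises, here is a hypothetical Python sketch (not the tool used in the article) that issues 4K writes at random offsets across a test file, the kind of pattern SQL and OLTP workloads generate. It relies on the Unix-only os.pwrite, and the file name, sizes, and counts are arbitrary assumptions.

    import os, random, time

    FILE = "testfile.bin"          # hypothetical test target
    IO_SIZE = 4096                 # 4K transfers, typical of OLTP-style tests
    FILE_SIZE = 256 * 1024 * 1024  # 256 MB working set
    COUNT = 10000

    # pre-allocate the test file
    with open(FILE, "wb") as f:
        f.truncate(FILE_SIZE)

    fd = os.open(FILE, os.O_RDWR)
    payload = os.urandom(IO_SIZE)
    start = time.perf_counter()
    for _ in range(COUNT):
        # a random 4K-aligned offset each time: this is what
        # makes the pattern punishing for spindle arrays
        offset = random.randrange(0, FILE_SIZE // IO_SIZE) * IO_SIZE
        os.pwrite(fd, payload, offset)
    os.fsync(fd)
    elapsed = time.perf_counter() - start
    os.close(fd)
    print(f"{COUNT / elapsed:.0f} random 4K write IOPS (buffered)")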

Anton Kolomyeytsev

ReFS: Performance

ReFS (Resilient File System) is a Microsoft file system that ensures data integrity through resiliency to corruption (irrespective of software or hardware failures), increases data availability, and scales to large data sets across various workloads. Its data protection feature is the FileIntegrity option, which is responsible for file scanning and repair processes.
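
Conceptually, such integrity protection boils down to storing a checksum with each block, validating it on read, and repairing from a redundant copy when validation fails. Below is a simplified Python sketch of that scheme under my own assumptions; it is not ReFS internals.

    import hashlib

    def checksum(block):
        return hashlib.sha256(block).digest()

    class IntegrityVolume:
        def __init__(self):
            self.data = {}    # block id -> (payload, checksum)
            self.mirror = {}  # redundant copy used for repair

        def write(self, block_id, payload):
            record = (payload, checksum(payload))
            self.data[block_id] = record
            self.mirror[block_id] = record

        def read(self, block_id):
            payload, stored = self.data[block_id]
            if checksum(payload) != stored:
                # corruption detected: repair from the healthy copy
                payload, stored = self.mirror[block_id]
                self.data[block_id] = (payload, stored)
            return payload

    vol = IntegrityVolume()
    vol.write(7, b"hello")
    vol.data[7] = (b"hellX", vol.data[7][1])  # simulate silent bit rot
    print(vol.read(7))  # b'hello': detected and repaired on read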

Anton Kolomyeytsev

ReFS: Log-Structured

Here is a part of a series about Microsoft Resilient File System, first introduced in Windows Server 2012. It shows an experiment conducted by StarWind engineers to see ReFS in action. This part is mostly about the FileIntegrity feature: its theoretical application and its practical performance under a real virtualization workload. The feature is responsible for data protection in ReFS and is basically the reason for the "resilient" in its name. Its goal is to avoid the common errors that typically lead to data loss. Theoretically, ReFS can detect and correct any data corruption without disturbing the user or disrupting production.

Anton Kolomyeytsev

Log-Structured File Systems: Overview

Log-Structured File System is effective, but not for everyone. As the "benefits vs. drawbacks" list shows, log-structuring is oriented toward virtualization workloads with lots of random writes, where it performs like a marvel; it won't work as a general-purpose file system for everyday tasks. Check out this overview and see what LSFS is all about.
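
To make the random-write point concrete, here is a minimal Python sketch of the core log-structuring trick (an illustration under my own assumptions, not StarWind's implementation): every write, whatever its logical offset, is appended sequentially to a log, while an in-memory index tracks where the latest version of each logical block lives.

    class LogStructuredDevice:
        def __init__(self):
            self.log = []    # append-only log of block payloads
            self.index = {}  # logical block -> position in the log

        def write(self, logical_block, payload):
            # random logical writes become sequential appends,
            # which is why LSFS shines on random-write patterns
            self.index[logical_block] = len(self.log)
            self.log.append(payload)

        def read(self, logical_block):
            return self.log[self.index[logical_block]]

    dev = LogStructuredDevice()
    for block in (907, 13, 512):  # a "random" write pattern
        dev.write(block, b"data for block %d" % block)
    print(dev.read(13))  # b'data for block 13'

The flip side is that overwriting a block leaves a stale log entry behind that must eventually be garbage-collected, which is part of why log-structuring pays off on random-write-heavy virtualization workloads rather than on everyday general-purpose use.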