Tag: performance

Dmytro Khomenko

Storage Tiering – the best of both worlds

Before SSDs claimed their irreplaceable place in the modern datacenter, there was a time of slow, unreliable, fragile spinning-rust drives. The moment of change divided the community into two groups: those still dreaming of bringing SSDs into their environment, and those for whom SSDs were already part of the infrastructure.
The idea of keeping data on the tier it belongs to has never been so intriguing, and granting a mission-critical VM the performance it deserves at the moment it needs it has never been more achievable.

Jon Toigo

Data Management Moves to the Fore. Part 4: Why Cognitive Data Management?

In previous installments of this blog, we have deconstructed the idea of cognitive data management (CDM) to identify its “moving parts” and to define what each part contributes to a holistic process for managing files and more structured content. First and foremost, CDM requires a Policy Management Framework that identifies classes of data and specifies the hosting, protection, preservation and privacy requirements of each data class over its useful life. This component reflects the nature of data, whose access requirements and protection priorities tend to change over time.
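
As a rough, hypothetical sketch of that idea (the class names, fields, and values below are my own, not from the article): a policy management framework boils down to a catalogue of data classes, each spelling out hosting, protection, preservation, and privacy requirements for every stage of the data's useful life.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StagePolicy:
    hosting: str        # where the data should live at this stage (e.g. "all-flash", "cold-archive")
    protection: str     # how it is protected (e.g. "sync-replica", "nightly-backup")
    preservation: str   # retention requirement (e.g. "retain 7y", "none")
    privacy: str        # access constraint (e.g. "PII-restricted", "public")

# One policy per data class, keyed by lifecycle stage (all values illustrative).
DATA_CLASS_POLICIES = {
    "transactional-records": {
        "active":   StagePolicy("all-flash",    "sync-replica",   "retain 7y", "PII-restricted"),
        "inactive": StagePolicy("capacity-hdd", "nightly-backup", "retain 7y", "PII-restricted"),
        "archival": StagePolicy("cold-archive", "offsite-copy",   "retain 7y", "PII-restricted"),
    },
}

def requirements(data_class: str, stage: str) -> StagePolicy:
    """Look up what a given class of data requires at a given point in its life."""
    return DATA_CLASS_POLICIES[data_class][stage]

print(requirements("transactional-records", "inactive"))
```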

Jon Toigo

Data Management Moves to the Fore. Part 3: Data Management Requires Storage Resource and Services Management Too

Previously, we discussed how data might be classified and segregated so that policies could be developed to place data on infrastructure in a deliberative manner – that is, in a way that optimizes data access, storage resources and services, and storage costs over the useful life of the data itself. From the standpoint of cognitive data management, data management policies constitute the instructions or programs that the cognitive engine processes to place and move data on and within infrastructure over time.
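
To make the "policies as programs" notion concrete, here is a minimal, hypothetical sketch (rule names and tiers are invented for illustration, not taken from the article): each rule is a small instruction the engine evaluates against a file's class and age to decide where the data should live next.

```python
from datetime import date, timedelta

# Hypothetical rules: (data class, minimum age) -> target storage tier.
PLACEMENT_RULES = [
    ("project-files", timedelta(days=0),    "primary-flash"),
    ("project-files", timedelta(days=90),   "capacity-nas"),
    ("project-files", timedelta(days=1095), "cloud-archive"),
]

def place(data_class: str, last_modified: date, today: date) -> str:
    """Return the tier of the last rule the data has aged into."""
    age = today - last_modified
    target = None
    for cls, min_age, tier in PLACEMENT_RULES:
        if cls == data_class and age >= min_age:
            target = tier        # later rules take over as the data gets older
    return target

print(place("project-files", date(2017, 1, 10), date(2017, 6, 1)))   # -> capacity-nas
```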

Taras Shved

Benchmarking Samsung NVMe SSD 960 EVO M.2

SSDs are currently among the best storage devices for upgrading your architecture and significantly accelerating a computer's performance. An SSD shortens boot times, speeds up application launches and file searches, and generally makes the whole system more responsive. Solid-state drives are more expensive than standard hard drives, but the performance improvement is hard to overlook.

Bogdan Savchenko

A little about Disk write cache on Windows VM

There is plenty of great material on optimizing virtualized environments, and such topics come up all over the IT community, covering a wide range of technical questions. This article focuses on a matter that is still not entirely clear, especially once theory meets practice: the Windows disk write cache feature and its implications for the data consistency and performance of virtual hard drives.
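
As a small illustration of the trade-off that post explores, consider the hedged Python sketch below (my own illustration, not from the article): a plain buffered write leans on the write cache and is fast but volatile, while flushing every write trades throughput for durability.

```python
import os
import time

def timed_writes(path, flush_each_write):
    """Write 1,000 small blocks; optionally force each one down to stable storage."""
    data = os.urandom(4096)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(1000):
            f.write(data)
            if flush_each_write:
                f.flush()                 # empty Python's userspace buffer
                os.fsync(f.fileno())      # ask the OS to commit the data to the device
    return time.perf_counter() - start

# Cached writes are fast but sit in volatile buffers until something flushes them;
# flushing every write behaves like write-through: slower, but it survives a power loss.
print("cached writes:  %.3f s" % timed_writes("cached.bin", False))
print("flushed writes: %.3f s" % timed_writes("synced.bin", True))
```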

Alex Khorolets

RAM Disk technology: Performance Comparison

Every computer now has a pool of volatile storage available in its RAM. With other direct-access media used for data storage, such as hard disks, CD-RWs, DVD-RWs, and the older drum memory, the time needed to read or write data varies with the data's physical location and with the medium used for reading or recording it (rotation speed, arm movement). Using RAM as storage brings a clear benefit over these conventional devices: data is read or written in the same amount of time regardless of where it sits inside the volume. With all of that in mind, it would be a crime not to take advantage of it.
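
A quick, informal way to see that uniform-access property is sketched below in Python (my own illustration, not part of the article; the OS page cache will flatter the file numbers, and real benchmarks use dedicated tools). It times the same random 4 KiB reads against a buffer held in RAM and against an ordinary file.

```python
import os
import random
import time

SIZE = 64 * 1024 * 1024                   # 64 MiB test region
BLOCK = 4096                              # 4 KiB reads
offsets = [random.randrange(0, SIZE - BLOCK) for _ in range(5000)]

ram = bytearray(os.urandom(SIZE))         # the data held directly in RAM
with open("testfile.bin", "wb") as f:     # ...and in an ordinary file on disk
    f.write(ram)

start = time.perf_counter()
for off in offsets:
    _ = ram[off:off + BLOCK]              # random reads from memory: uniform latency
ram_seconds = time.perf_counter() - start

start = time.perf_counter()
with open("testfile.bin", "rb", buffering=0) as f:
    for off in offsets:
        f.seek(off)
        _ = f.read(BLOCK)                 # random reads that go through the storage stack
file_seconds = time.perf_counter() - start

print(f"RAM buffer: {ram_seconds:.3f} s   file on disk: {file_seconds:.3f} s")
```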

Vladislav Karaiev

Storage HA on the Cheap: Fixing Synology DiskStation flaky Performance with StarWind Free. Part 3 (Failover Duration)

We are continuing our series of articles dedicated to Synology’s DS916+ mid-range NAS units. Remember, we don’t dispute that Synology delivers a great set of NAS features; rather, we are running a number of tests on a pair of DS916+ units to determine whether they can be used as general-purpose primary production storage. In Part 1 we tested the performance of the DS916+ in different configurations and showed how to significantly increase the performance of a “dual” DS916+ setup by replacing the native Synology DSM HA Cluster with VSAN from StarWind Free.

Michael Ryom

Setting yourself up for success with virtualization

I am going to address a few issues I have seen quite a lot over my virtualization career. It is not that you have to take extra care when virtualizing, but your virtual environment will never be better than the foundation you build it on. The reason you no longer see many people fuss about this in non-virtualized environments is, I believe, that resources are in abundance today. They were abundant ten years ago as well, but since then server hardware specifications have only kept climbing, which is what made virtualization attractive in the first place. Do not get me wrong – lots of people care about the performance of their virtual and physical environments. Yet some have not set themselves up for a successful virtualization project. Let me elaborate…

Alex Bykovskyi

Storage HA on the Cheap: Fixing Synology DiskStation flaky Performance with StarWind Free. Part 2 (Log-Structured File System)

In this article, we continue testing the Synology DS916+ with VSAN from StarWind. Our main goal today is to improve the performance of the Synology boxes specifically on random patterns. Randoms were chosen for a reason: SQL and OLTP workloads generate heavily randomized I/O and put enormous stress on spindle arrays in particular, and the patterns we use in today’s benchmark are typical of such environments. There are different approaches to handling these workloads, such as caching and tiering; our approach is to build the environment with the StarWind Log-Structured File System. LSFS was created exactly for this type of environment to improve performance. We will compare the results with those from Part 1 of our research.
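
For readers new to the concept, here is a toy, hypothetical sketch of the log-structured idea in general (my own illustration, not StarWind’s actual LSFS code): random logical writes are appended sequentially to a log, and an index tracks where the newest copy of each block lives, so the underlying disks see sequential I/O instead of random I/O.

```python
class LogStructuredStore:
    """Toy log-structured block store: random logical writes become sequential appends."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.log = bytearray()   # append-only log (a real system streams this to disk sequentially)
        self.index = {}          # logical block number -> offset of its newest copy in the log

    def write_block(self, lbn, data):
        assert len(data) == self.block_size
        self.index[lbn] = len(self.log)   # remember where the newest version lives
        self.log += data                  # sequential append, no matter how random the LBN is

    def read_block(self, lbn):
        off = self.index[lbn]             # follow the index to the newest copy
        return bytes(self.log[off:off + self.block_size])


store = LogStructuredStore()
store.write_block(1742, b"A" * 4096)      # "random" logical addresses...
store.write_block(12, b"B" * 4096)        # ...still land back-to-back in the log
assert store.read_block(1742) == b"A" * 4096
```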

Oksana Zybinskaya

The unknown microwave networks

Recently it emerged that there is a private, mysterious network stretching between London and Frankfurt that is twice as fast as the normal Internet. The connection, carried by a series of microwave dishes on masts, was known to no one outside the single company that ran it. Only when a competitor completed its own microwave link between the two cities did the first company reveal that it, too, had one, in order to claim a share of this potential market. Similar stories can be found all over the world, but because these networks are privately owned, and because they are often used by financial groups trying to find an edge on the stock market and eke out a few extra billions, you have to dig hard to find them.