Tag: SDS

Artem Gaevoy

Hyperconvergence backline: How to make sure that your hyperconverged environment rocks?

Admins love hyperconvergence because it conjoins compute, storage, and networking resources, which makes the environment cheaper and easier to manage. Experienced users can build such an infrastructure in different ways, from whatever components they choose. For instance, you can grab some servers from Dell and install an industry-standard hypervisor (Hyper-V, KVM, ESXi, whatever) on top of them. If you do not know that much about hyperconvergence, though, consider buying an appliance.
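
If you go the do-it-yourself route with KVM, a quick sanity check of the hypervisor layer on such a node might look like the sketch below. This is only an illustration, assuming the libvirt Python bindings and a running libvirtd on the host; it is not taken from the article itself.

```python
# Minimal sanity check for a DIY KVM-based HCI node.
# Assumes the libvirt Python bindings (libvirt-python) and libvirtd running locally.
import libvirt

# Connect to the local QEMU/KVM hypervisor.
conn = libvirt.open("qemu:///system")
if conn is None:
    raise SystemExit("Failed to connect to the hypervisor")

print("Host:", conn.getHostname())
print("Hypervisor version:", conn.getVersion())

# List all VMs defined on this node and whether they are running.
for dom in conn.listAllDomains():
    state, _ = dom.state()
    running = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
    print(f"  {dom.name()}: {running}")

conn.close()
```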

Dima Yaprincev

Microsoft SQL Server Failover Cluster Instance and Basic Availability Group features comparison

Microsoft SQL Server 2016 has a pretty decent feature set for achieving cost-effective high availability and building a reliable disaster recovery solution. Basic Availability Groups (BAGs) and Failover Cluster Instances (FCIs) are included in SQL Server 2016 Standard Edition and help implement a high level of redundancy for business-critical databases. In this article, I would like to discuss some differences between these solutions and show how they can be combined with Software-Defined Storage such as Storage Spaces Direct (S2D) and StarWind Virtual SAN (VSAN).
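
As a hedged companion to that comparison (not something prescribed by the article), the sketch below shows how you might check from Python, via pyodbc, whether an instance is a Failover Cluster Instance and what state its availability groups are in. The server name and ODBC driver string are placeholders.

```python
# Illustrative health check for FCI / Basic Availability Groups on SQL Server 2016.
# Assumes the pyodbc package and an ODBC driver for SQL Server; connection
# string values below are placeholders, not values from the article.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql-node-01;DATABASE=master;Trusted_Connection=yes;"
)
cur = conn.cursor()

# SERVERPROPERTY('IsClustered') = 1 means the instance runs as a Failover Cluster Instance.
cur.execute("SELECT CAST(SERVERPROPERTY('IsClustered') AS int)")
print("Failover Cluster Instance:", bool(cur.fetchone()[0]))

# Replica roles and synchronization health per availability group;
# basic_features = 1 marks a Basic Availability Group.
cur.execute("""
    SELECT ag.name, ag.basic_features, ar.replica_server_name,
           rs.role_desc, rs.synchronization_health_desc
    FROM sys.availability_groups ag
    JOIN sys.availability_replicas ar ON ag.group_id = ar.group_id
    JOIN sys.dm_hadr_availability_replica_states rs ON ar.replica_id = rs.replica_id
""")
for name, basic, replica, role, health in cur.fetchall():
    kind = "BAG" if basic else "AG"
    print(f"{kind} {name}: {replica} is {role}, health {health}")

conn.close()
```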

Jon Toigo

Back to Enterprise Storage

An under-reported trend in storage these days is the mounting dissatisfaction with server-centric storage infrastructure as conceived by proprietary server hypervisor vendors and implemented as exclusive software-defined storage stacks. A few years ago, the hypervisor vendors seized on consumer anger around overpriced “value-add” storage arrays to insert a “new” modality of storage, so-called software-defined storage, into the IT lexicon. Touted as a solution for everything that ailed storage – and as a way to improve virtual machine performance in the process – SDS and hyper-converged infrastructure did rather well in the market. However, the downside of creating siloed storage behind server hosts was that storage efficiency declined by 10 percent or more on an enterprise-wide basis; companies were realizing less bang for the buck with software-defined storage than with the enterprise storage platforms they were replacing.

Ivan Talaichuk

Hyperconvergence – another buzzword or the King of the Throne?

Before we start our journey through the storage world, I would like to begin with a side note on what hyperconverged infrastructure is and which problems this cool word combination really solves. Folks who already have a grip on hyperconvergence can just skip the first paragraph, where I’ll describe the HCI components plus a backstory about this tech. Hyperconverged infrastructure (HCI) is a term coined by Steve Chambers and Forrester Research (at least, that’s what Wikipedia says). They created this word combination to describe a fully software-defined IT infrastructure that is capable of virtualizing all the components of conventional ‘hardware-defined’ systems.

Jon Toigo

The Pleasant Fiction of Software-Defined Storage

Whether you have heard it called software-defined storage, referring to a stack of software used to dedicate an assemblage of commodity storage hardware to a virtualized workload, or hyper-converged infrastructure (HCI), referring to a hardware appliance with a software-defined storage stack and maybe a hypervisor pre-configured and embedded, this “revolutionary” approach to building storage was widely hailed as your best hope for bending the storage cost curve once and for all. With storage spending accounting for a sizable percentage – often more than 50% – of a medium-to-large organization’s annual IT hardware budget, you probably welcomed the idea of an SDS/HCI solution when the idea surfaced in the trade press, in webinars and at conferences and trade shows a few years ago.

Jon Toigo

The Need For Liquidity in Data Storage Infrastructure

Liquidity is a term you are more likely to hear on a financial news channel than at a technology trade show. As an investment-related term, liquidity refers to the amount of capital available to banks and businesses and to how readily it can be used. Assets that can be converted quickly to cash (preferably with minimal loss in value) in order to meet immediate and short-term obligations are considered “liquid.” When it comes to data storage, liquid storage assets can be viewed as those that can be allocated to virtually any workload at any time without compromising performance, cost-efficiency/manageability, resiliency, or scalability. High-liquidity storage supports any workload operating under any OS, hypervisor, or container technology, accessed via any protocol (network file systems, object storage, block network, etc.), without sacrificing data protection, capacity scaling, or performance optimization.

Alex Bykovskyi

Ceph-all-in-one

This article describes the deployment of a Ceph cluster on a single instance, or, as it is called, “Ceph-all-in-one”. As you may know, Ceph is a unified Software-Defined Storage system designed for great performance, reliability, and scalability. With the help of Ceph, you can build an environment of the desired size. You can start with a single-node system, and there are no limits to how far it can scale. I will show you how to build a Ceph cluster on top of one virtual machine (or instance). You should never use such a scenario in production; it is for testing purposes only. This series of articles will guide you through the deployment and configuration of different Ceph cluster builds.
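
As a small, hedged companion to such a walkthrough, the snippet below exercises an all-in-one test cluster from Python. It assumes the python3-rados bindings, a readable /etc/ceph/ceph.conf, and an existing pool named "testpool"; the pool name is an assumption, not something from the article.

```python
# Minimal smoke test for a "Ceph-all-in-one" test cluster.
# Assumes python3-rados, /etc/ceph/ceph.conf on this host, and a pool
# named "testpool" (an assumed name used only for illustration).
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
print("Cluster FSID:", cluster.get_fsid())

stats = cluster.get_cluster_stats()
print("Used KB:", stats["kb_used"], "of", stats["kb"])

# Write and read back a test object to confirm the OSD path works.
ioctx = cluster.open_ioctx("testpool")
ioctx.write_full("hello", b"Hello from the all-in-one cluster")
print(ioctx.read("hello"))
ioctx.close()
cluster.shutdown()
```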

Augusto Alvarez

Microsoft Azure Stack in General Availability (GA) and Customers will Receive it in September. Why is this Important? Part I

Microsoft’s hybrid cloud appliance for running Azure in your own datacenter has finally reached General Availability (GA), and the Integrated Systems partners (Dell EMC, HPE, and Lenovo for this first iteration) are formally taking orders from customers, who will receive their Azure Stack solutions in September. But what exactly does Azure Stack represent? Why is it important to organizations?

Andrea Mauro

Design a ROBO infrastructure. Part 4: HCI solutions

As written in the previous post, for the ROBO scenario the most interesting HCI (Hyper-Converged Infrastructure) configuration is a two-node configuration, considering that two nodes can be enough to run dozens of VMs (or even more). For this reason, not all hyperconverged solutions are suitable for this case (for example, Nutanix or SimpliVity need at least three nodes). And it is not simple to scale an enterprise solution down to a small size, due to architectural constraints.

Jon Toigo

Data Management Moves to the Fore. Part 3: Data Management Requires Storage Resource and Services Management Too

Previously, we discussed how data might be classified and segregated so that policies could be developed to place data on infrastructure in a deliberative manner – that is, in a way that optimizes data access, storage resources and services, and storage costs over the useful life of the data itself. From the standpoint of cognitive data management, data management policies constitute the instructions or programs that the cognitive engine processes to place and move data on and within infrastructure over time.
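
To make the idea of policy-driven placement concrete, here is a deliberately simplified sketch; the data classes and storage tiers are invented for illustration and are not taken from the article. A policy is just a list of rules the engine evaluates to decide where a piece of data should live at a given age.

```python
# Toy illustration of policy-driven data placement: a data class plus the
# data's age selects a storage tier. Class and tier names are invented.
from dataclasses import dataclass

@dataclass
class PlacementRule:
    data_class: str    # classification assigned to the data at ingest
    max_age_days: int  # keep on this tier until the data is this old
    tier: str          # target storage tier / service level

POLICY = [
    PlacementRule("business-critical", 90, "all-flash"),
    PlacementRule("business-critical", 3650, "hybrid"),
    PlacementRule("reference", 365, "capacity"),
    PlacementRule("reference", 36500, "archive"),
]

def place(data_class: str, age_days: int) -> str:
    """Return the first tier whose rule matches the class and age."""
    for rule in POLICY:
        if rule.data_class == data_class and age_days <= rule.max_age_days:
            return rule.tier
    return "archive"  # default for unclassified or very old data

if __name__ == "__main__":
    print(place("business-critical", 30))  # -> all-flash
    print(place("reference", 400))         # -> archive
```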