
Tag: Hyperconverged

Anton Vovchuk

Converged vs Hyperconverged Infrastructure. What’s the Difference?

Converged infrastructure (CI) simplifies IT with pre-configured systems, while hyperconverged infrastructure (HCI) optimizes resource utilization through a unified software-defined approach. Still unsure which to choose?

Andrea Mauro

What are composable infrastructures?

Today, building IT infrastructure on a foundation of dedicated physical resources starts to look like an extravagant decision, to say the least. Why bother with extra resources if you can make things happen faster and better, with a smaller hardware footprint? Yet the evolution of virtualization is far from over. The latest idea on the horizon is the concept of composable infrastructures, which is more complicated than it sounds.

Artem Gaevoy

Hyperconvergence backline: How to make sure that your hyperconverged environment rocks?

Admins love hyperconvergence because it combines compute, storage, and networking resources, which makes the environment cheaper and easier to manage. Experienced users can build such an infrastructure in different ways from almost any components. For instance, you can grab some servers from Dell and install an industry-standard hypervisor (Hyper-V, KVM, ESXi, whatever) on top of them. If you are less familiar with hyperconvergence, though, consider buying an appliance.

Kevin Soltow

Software-only solutions vs. hardware-based ones: which one will be a perfect fit for your hyperconverged environment?

These days, hyperconverged solutions are becoming increasingly prevalent in small- and medium-sized datacenters. And no wonder: hyperconverged infrastructures (HCI) provide decent reliability and a set of features typically associated with large datacenters… but for less money! So, why pay more?

Paulsen Muzari

Whip your Hyperconverged Failover Cluster into shape automatically and with no downtime using Microsoft’s Cluster Aware Updating

Some admins prefer cluster updates to be performed automatically. To that end, Microsoft designed a feature that facilitates patching Windows Server 2012 through 2016 hosts configured in a failover cluster. Cluster Aware Updating (CAU) does this automatically, thereby avoiding service disruption for clustered roles. In this article, we are going to look at how to achieve this, assuming the cluster is built as a hyperconverged scenario with StarWind VSAN used as shared storage. Before going through the steps to set up CAU, we will examine this scenario.

Jon Toigo

Back to Enterprise Storage

An under-reported trend in storage these days is the mounting dissatisfaction with server-centric storage infrastructure as conceived by proprietary server hypervisor vendors and implemented as exclusive software-defined storage stacks.  A few years ago, the hypervisor vendors seized on consumer anger around overpriced “value-add” storage arrays to insert a “new” modality of storage, so-called software-defined storage, into the IT lexicon.  Touted as a solution for everything that ailed storage – and as a way to improve virtual machine performance in the process – SDS and hyper-converged infrastructure did rather well in the market.  However, the downside of creating silo’ed storage behind server hosts was that storage efficiency declined by 10 percent or more on an enterprise-wide basis; companies were realizing less bang for the buck with software-defined storage than with the enterprise storage platforms they were replacing.

Jon Toigo

The Need For Liquidity in Data Storage Infrastructure

Liquidity is a term you are more likely to hear on a financial news channel than at a technology trade show.  As an investment-related term, liquidity refers to the amount of capital available to banks and businesses and to how readily it can be used.  Assets that can be converted quickly to cash (preferably with minimal loss in value) in order to meet immediate and short-term obligations are considered “liquid.” When it comes to data storage, liquid storage assets can be viewed as those that can be allocated to virtually any workload at any time without compromising performance, cost-efficiency/manageability, resiliency, or scalability.  High-liquidity storage supports any workload operating under any OS, hypervisor, or container technology, accessed via any protocol (network file systems, object storage, block network, etc.), without sacrificing data protection, capacity scaling, or performance optimization.

Andrea Mauro

Design a ROBO infrastructure. Part 4: HCI solutions

As written in the previous post, for a ROBO scenario the most interesting HCI (Hyper-Converged Infrastructure) configuration is a two-node configuration, considering that two nodes can be enough to run a dozen VMs (or even more). For this reason, not all hyperconverged solutions are suitable for this case (for example, Nutanix or SimpliVity need at least 3 nodes). And it is not simple to scale an enterprise solution down to a small size, due to architectural constraints.

Jon Toigo

Data Management Moves to the Fore. Part 4: Why Cognitive Data Management?

In previous installments of this blog, we have deconstructed the idea of cognitive data management (CDM) to identify its “moving parts” and to define what each part contributes to a holistic process for managing files and more structured content. First and foremost, CDM requires a Policy Management Framework that identifies classes of data and specifies the hosting, protection, preservation, and privacy requirements of each data class over its useful life.  This component reflects the nature of data, whose access requirements and protection priorities tend to change over time.

Andrea Mauro

Design a ROBO infrastructure (Part 3): Infrastructure at the remote office side

Designing a ROBO scenario must ultimately match the reality of the customer’s needs and constraints, but also the type of workloads and the availability solutions possible for them.