Getting Started with Azure Resource Manager and Azure Deployment – Part II
Posted by Charbel Nemnom on April 28, 2016

Introduction

In part one of this multi-part blog series, we explained the benefits of Azure Resource Manager and resource groups in Azure V2 compared with the Service Management API in Azure V1, and then took an in-depth look at JavaScript Object Notation (JSON) Quick Start templates. In this second part, we will create and configure a GitHub account, if you don’t already have one, to host a GitHub repository for a Quick Start template, and then examine Visual Studio Code integration with Git and push commits to a remote repository. In the final post, we will modify and deploy sample/custom template and parameter JSON files.
If you missed Part I, please make sure to check it here before you continue with this post.
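As a quick preview of what Part II covers, here is a minimal, hedged sketch of publishing a Quick Start template to GitHub from the Visual Studio Code integrated terminal (or any PowerShell prompt). The repository URL and file names below are placeholders; substitute your own.

    # Clone your GitHub repository (placeholder URL), then move into it
    git clone https://github.com/<your-account>/quickstart-templates.git
    cd quickstart-templates

    # Copy in the Quick Start template and parameter files, stage and commit them
    Copy-Item ..\azuredeploy.json, ..\azuredeploy.parameters.json .
    git add azuredeploy.json azuredeploy.parameters.json
    git commit -m "Add Quick Start template and parameters"

    # Push the commit to the remote repository on GitHub
    git push origin master

Visual Studio Code exposes the same stage/commit/push workflow through its built-in Git view, which is what we will walk through in this post.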


Learn More


A closer look at NUMA Spanning and virtual NUMA settings
Posted by Didier Van Hoye on April 28, 2016

Introduction

With Windows Server 2012, Hyper-V became truly NUMA aware. A virtual NUMA topology is presented to the guest operating system. By default, the virtual NUMA topology is optimized to match the NUMA topology of the physical host. This enables Hyper-V to deliver optimal performance for virtual machines running high-performance, NUMA-aware workloads where large numbers of vCPUs and lots of memory come into play. A great and well-known example of this is SQL Server.

Non-Uniform Memory Access (NUMA) comes into play in multi-processor systems where not all memory is accessible at the same speed by all cores. Memory regions are connected directly to one or more processors, and today’s processors contain multiple cores. A group of cores that can access a certain amount of memory at the lowest latency (“local memory”) is called a NUMA node, and a processor can have one or more NUMA nodes. When cores have to fetch memory from another NUMA node (“remote memory”), access is slower. This allows for more flexibility in serving compute and memory needs, which helps achieve a higher density of VMs per host, but it comes at the cost of performance. Modern applications optimize for the NUMA topology so that cores leverage local, high-speed memory. It is therefore beneficial to the performance of such applications, when running in a VM, that the VM has an optimized virtual NUMA layout based on the physical layout of the host.
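As a hedged illustration (not taken from the original article), the Hyper-V PowerShell module can show the host’s physical NUMA topology and the NUMA spanning setting, and expose the virtual NUMA limits presented to a VM; the VM name below is a placeholder:

    # Show the physical NUMA nodes of the Hyper-V host (node ID, memory, processor IDs)
    Get-VMHostNumaNode

    # Check whether NUMA spanning is enabled on the host
    Get-VMHost | Select-Object NumaSpanningEnabled

    # Disable NUMA spanning so VMs are always backed by a single physical NUMA node
    # (the Hyper-V Virtual Machine Management service must be restarted afterwards)
    Set-VMHost -NumaSpanningEnabled $false

    # Inspect the virtual NUMA limits presented to a specific VM (placeholder name)
    Get-VMProcessor -VMName "SQL01" |
        Select-Object MaximumCountPerNumaNode, MaximumNumaNodesPerSocket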

An overview of CPU sockets, cores, NUMA nodes and K-Groups, based on a slide deck by Microsoft

Learn More


Virtual Volumes (VVols) backup – how it works and which solutions should be used
Posted by Alex Samoylenko on April 27, 2016

Many of you have heard of Virtual Volumes (VVols), a storage technology that can substantially increase storage I/O performance in a VMware vSphere environment by using logical volumes for individual virtual machine components and offloading some storage operations to the disk arrays.

Let’s see how VVols technology impacts the virtual machine backup process. First, let’s consider the main backup methods in virtual environments:

  • Backup by mounting virtual disks (HotAdd backup) – the VMDK disk of one VM is mounted to another VM and backed up from there.
  • Backup over Ethernet (the so-called NBD backup) – the standard VM backup over the Ethernet network: a VM snapshot is taken (the commands are processed by the ESXi host), the virtual disk is transferred to the backup target, then the snapshot is merged back into the base disk and the machine keeps working as before.
  • Backup over the SAN (SAN-to-SAN backup) – a VM snapshot is taken and read by a dedicated Backup Server through the special Virtual Disk API mechanism, directly from the SAN to the target storage, without involving the ESXi host, the backup machine, or the Ethernet network.

The last method is the fastest and the most efficient, but it requires special interfaces such as the vSphere APIs and the Virtual Disk Development Kit (VDDK), which must be available on the dedicated server.
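Both the NBD and SAN methods described above start with a VM snapshot. As a rough, hedged illustration (not part of the original article), this is roughly how a backup product might drive that step with VMware PowerCLI; the vCenter address and VM name are placeholders:

    # Connect to vCenter (placeholder address; you will be prompted for credentials)
    Connect-VIServer -Server vcenter.lab.local

    # Quiesce and snapshot the VM before reading its virtual disks
    $vm = Get-VM -Name "App01"
    $snap = New-Snapshot -VM $vm -Name "backup-temp" -Quiesce -Memory:$false

    # ... the backup tool reads the frozen base disk here ...

    # Remove the snapshot so the delta merges back into the base disk
    Remove-Snapshot -Snapshot $snap -Confirm:$false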


Learn More


Manage It Already
Posted by Jon Toigo on April 27, 2016

As I review the marketing pitches of many software-defined storage products today, I am concerned by the lack of attention in any of the software stack descriptions to any capabilities whatsoever for managing the underlying hardware infrastructure.  This strikes me as a huge oversight.

The truth is that delivering storage services via software — orchestrating and administering the delivery of capacity, data encryption, data protection and other services to the data that are hosted on a software-defined storage volume – is only half of the challenge of storage administration.  The other part is maintaining the health and integrity of the gear and the interconnect cabling that provide the all-important physical underlayment of an increasingly virtualized world.


Learn More


Getting Started with Azure Resource Manager and Azure Deployment – Part I
Posted by Charbel Nemnom on April 26, 2016

Introduction

Applications that are deployed in Microsoft Azure often comprise different but related cloud resources, such as virtual machines, web applications, SQL databases, and virtual networks, among others. Before the introduction of Azure Resource Manager (Azure V2), it was necessary to define and provision these resources imperatively. However, Azure Resource Manager gives you the ability to define and provision these resources, with their configuration and associated parameters, declaratively in a JavaScript Object Notation (JSON) template file, known as an Azure Resource Manager template.
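To make the declarative model concrete, here is a minimal, hedged sketch of deploying such a template with the Azure PowerShell (AzureRM) cmdlets of that era; the resource group name, location, and file names are placeholders:

    # Sign in and create a resource group to hold the deployed resources
    Login-AzureRmAccount
    New-AzureRmResourceGroup -Name "DemoRG" -Location "West Europe"

    # Deploy the JSON template and its parameter file into the resource group
    New-AzureRmResourceGroupDeployment -ResourceGroupName "DemoRG" `
        -TemplateFile .\azuredeploy.json `
        -TemplateParameterFile .\azuredeploy.parameters.json

The template describes what the resources should look like; Resource Manager works out how to create or update them.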

In this series of three blog posts, we will show you how to create and deploy Infrastructure as a Service (IaaS) applications using Azure Resource Manager templates.

In this guide, we will explain the benefits of Azure Resource Manager and resource groups, then we will examine and analyze a number of Quick Start Azure Resource Manager templates that are available on GitHub. In the next post, we will create and configure a GitHub account, if you don’t already have one, to host a GitHub repository for a Quick Start template, and lastly we will examine Visual Studio Code integration with Git and push commits to a remote repository.


Learn More


SanDisk X400 SSD Review
Posted by Oksana Zybinskaya on April 25, 2016

SanDisk is one of the few companies currently offering 1TB of storage in a single-sided M.2 card – its X400 SSD. The X400 also comes in a 2.5″ 7mm form factor, but the M.2 configuration is the main selling point of this line. The 1TB M.2 X400 lets ultra-thin notebooks get the most storage possible without sacrificing performance or battery life. The X400 uses SanDisk’s nCache 2.0 technology, which employs a multi-tiered architecture and provides better performance during taxing operations like sustained sequential writing, while SanDisk’s second-generation TLC flash provides maximum reliability and energy efficiency.


The X400 comes with TCG Opal 2.0 support, making it compatible with third-party security software vendors (ISVs), and offers Self-Encrypting Drive capabilities that give users access to hardware-based 256-bit AES encryption. Features such as the DataGuard client and LDPC error correction extend its endurance to up to 320 TB written (TBW).


The X400 demonstrates very good results in a range of benchmark tests, with especially outstanding results in 4K random transfers and aligned reads. It also handles the Home Theatre PC profile quite well (the test plays one 720p HD movie in Media Player Classic and one 480p SD movie in VLC, downloads three movies simultaneously through iTunes, and records one 1080i HDTV stream through Windows Media Center over a 15-minute period); at 327 MB/s it lands in the middle of the pack compared with similar products.

The X400 has been designed to deliver all the benefits of flash while preserving system design flexibility thanks to the M.2 form factor. TLC NAND keeps the product price-competitive with the top products in its category.

Benefits:

  • Unique 1TB drive in M.2 form factor
  • Energy efficiency
  • Endurance of 320 TBW
  • Good performance-to-price ratio

Drawbacks:

  • Considerably weaker performance under mixed workloads

This post is a digest of a full review article.

Source: storagereview.com

Related materials:

How to Get All-Flash Performance with Intel SSD and StarWind HyperConverged Appliance

RAID 5 was great, until high-capacity HDDs came into play, but SSDs restored its former glory


5 tips to help you explore the world of PowerShell scripting
Posted by Mike Preston on April 25, 2016

In 2006, Windows administrators got their first glimpse of what the world of PowerShell scripting might look like when PowerShell, then known as Monad, was released to the world as a beta. Ten years later we are now on the fifth iteration of the scripting language and have seen a thriving ecosystem form around its Verb-Noun style of automation. PowerShell is a powerful tool and can be an amazing time-saver for any Windows administrator to know. That said, as with any scripting or programming language, getting started can be a little daunting, especially if you have no scripting experience to fall back on. Below we will take a look at 5 tips that can save you both time and energy when writing your PowerShell scripts.

Get-Command

There have been numerous times when I have found myself staring blankly at the glowing blue PowerShell console and grinding my brain; not trying to figure out how to use a specific cmdlet, but trying to figure out which cmdlet to use. There are over 1000 cmdlets built into my default install of PowerShell 4.0 alone, without loading any external modules or snap-ins at all. Trying to tab-complete your way to the proper one for the job can be a time-consuming, enraging experience. This is where the Get-Command cmdlet comes in handy. Get-Command lists all of the PowerShell cmdlets available within the current console, showing various tidbits of information about each command, such as version, source and name. Simply running Get-Command by itself (shown below) is not very useful, as it will simply list every single cmdlet available within the console session.
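For example (a quick illustration building on the post, with a placeholder module name), Get-Command supports filtering parameters that narrow the list down to something usable:

    # List only the cmdlets whose noun is "Service" (Get-Service, Start-Service, ...)
    Get-Command -Noun Service

    # List all "Get-" cmdlets exported by a particular module, e.g. Hyper-V
    Get-Command -Verb Get -Module Hyper-V

    # Use wildcards when you only remember part of a cmdlet name
    Get-Command *-VMSnapshot*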


Learn More


Extend Active Directory to Microsoft Azure
Posted by Romain Serre on April 21, 2016

Extending Active Directory to Microsoft Azure is a common scenario when you implement a hybrid cloud. For example, VMs protected with Azure Site Recovery may need access to Active Directory even if the on-premises datacenter is unreachable. You may also want to extend Active Directory to Azure when you run production workloads in Azure VMs, to avoid implementing a new forest or sending all Active Directory traffic over the VPN connection. In this topic, we will see how to extend Active Directory to Microsoft Azure.

Architecture overview 

Currently I have an on-premises datacenter with two domain controllers which host the int.homecloud.net directory. The network subnet is 10.10.0.0/24. On the Microsoft Azure side, I will deploy a Virtual Network with a 10.11.0.0/24 subnet. Two Azure VMs will be deployed in this network.

Then I will implement a Site-to-Site VPN based on IPsec to connect my datacenter to the Virtual Network hosted in Microsoft Azure, and finally I will add the Azure domain controllers to my domain.
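As a hedged sketch of the first Azure-side step (the resource group, names, location and the wider VNet address space below are my own placeholders, using the AzureRM cmdlets of the time):

    # Create the resource group that will hold the network and the domain controllers
    New-AzureRmResourceGroup -Name "HybridAD-RG" -Location "West Europe"

    # Define the 10.11.0.0/24 subnet and the Virtual Network that contains it
    # (a /16 address space is assumed here to leave room for a GatewaySubnet later)
    $subnet = New-AzureRmVirtualNetworkSubnetConfig -Name "AD-Subnet" -AddressPrefix "10.11.0.0/24"
    New-AzureRmVirtualNetwork -Name "HybridAD-VNet" -ResourceGroupName "HybridAD-RG" `
        -Location "West Europe" -AddressPrefix "10.11.0.0/16" -Subnet $subnet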


Learn More


OMS alerting is now generally available
Posted by Oksana Zybinskaya on April 14, 2016

Microsoft Operations Management Suite alerting has moved from preview mode to generally available status.

In addition to numerous bug fixes, several new improvements have been introduced:

WebHook support: Provides a WebHook URL to send alerts to, which allows integration with other tools such as Slack or a wide variety of incident management tools.
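To illustrate what such an integration receives (a hedged example, not from the original announcement; the URL is a placeholder), a webhook is simply an HTTP POST carrying a JSON payload, which you can emulate from PowerShell:

    # Send a test JSON payload to a webhook endpoint (placeholder URL)
    $payload = @{ text = "OMS alert fired: CPU over 90% on SRV01" } | ConvertTo-Json
    Invoke-RestMethod -Uri "https://hooks.example.com/services/T000/B000/XXXX" `
        -Method Post -Body $payload -ContentType "application/json"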


Turn alerts on or off: Individual alerts can now be turned on or off.


Learn More


ReFS virtualization workloads test. Part 1
Posted by Anton Kolomyeytsev on April 12, 2016

ReFS (Resilient File System – https://msdn.microsoft.com/en-us/library/windows/desktop/hh848060%28v=vs.85%29.aspx) is Microsoft’s proprietary file system that features enhanced protection from common errors and silent data corruption. Basically, it is a file system that can repair corrupted files on the go if the underlying storage is redundant and compatible.

We decided to run some tests to see how well the Resilient File System performs under typical virtualization workload.

With the first test, we were going to check I/O performance and see what exactly influences it. For this purpose, we decided to research I/O behavior in ReFS with the FileIntegrity option off and on. The FileIntegrity option stands behind the data protection feature of ReFS, being responsible for the scanning and repair processes. So, we were aiming to check whether the option is effective for the random I/O typical of virtualization workloads.
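For reference (a hedged sketch, not taken from the test itself), integrity streams can be set at format time and inspected or toggled per file with the Storage module cmdlets; the drive letter and path below are placeholders:

    # Format a volume with ReFS and integrity streams disabled from the start
    Format-Volume -DriveLetter R -FileSystem ReFS -SetIntegrityStreams $false

    # Check whether integrity streams are enabled for an existing file
    Get-FileIntegrity -FileName "R:\VMs\disk0.vhdx"

    # Turn integrity streams off for that file (e.g. for a random-write VM workload)
    Set-FileIntegrity -FileName "R:\VMs\disk0.vhdx" -Enable $false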

During the test we monitored how a ReFS-formatted disk works with FileIntegrity off and FileIntegrity on, while doing random 4K block writes.

Based on the results of the test, we concluded that ReFS with FileIntegrity off behaves much like a conventional file system, such as its predecessor NTFS (https://en.wikipedia.org/wiki/NTFS), in terms of processing random write requests. All the writes were transferred as is, without any changes in LBA request size. So, this mode makes ReFS just a regular file system, well-suited for modern high-capacity disks and huge files.

Read more…

Related materials:

LSFS Container Technical Description

Get All-Flash Performance from a Disk with StarWind LSFS Technology



Copyright © StarWind Software Inc., 2009-2016. All rights reserved.