Getting Started with Azure Resource Manager and Azure Deployment – Part II
Posted by Charbel Nemnom on
April 28, 2016
If you missed Part I, please make sure to check it here before you continue with this post.
A closer look at NUMA Spanning and virtual NUMA settings
Posted by Didier Van Hoye on
April 28, 2016
With Windows Server 2012, Hyper-V became truly NUMA aware. A virtual NUMA topology is presented to the guest operating system. By default, the virtual NUMA topology is optimized to match the NUMA topology of the physical host. This lets Hyper-V deliver optimal performance for virtual machines running high-performance, NUMA-aware workloads where large numbers of vCPUs and lots of memory come into play. A great and well-known example of this is SQL Server.
Non-Uniform Memory Access (NUMA) comes into play in multi-processor systems where not all memory is accessible at the same speed by all the cores. Each memory region is connected directly to one or more processors, and processors today have multiple cores. A group of cores that can access a certain amount of memory at the lowest latency ("local memory") is called a NUMA node; a processor has one or more NUMA nodes. When cores have to get memory from another NUMA node ("remote memory"), access is slower. NUMA allows for more flexibility in serving compute and memory needs, which helps achieve a higher density of VMs per host, but that flexibility comes at the cost of performance. Modern applications optimize for the NUMA topology so that cores leverage local, high-speed memory. It is therefore beneficial to the performance of such an application running in a VM that the VM has an optimized virtual NUMA layout based on the physical layout of the host.
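On a Hyper-V host, the physical NUMA layout and the spanning behavior described above can be inspected and changed with the built-in Hyper-V PowerShell module. A minimal sketch (the VM name "SQL01" is illustrative; changing the spanning setting requires restarting the Hyper-V Virtual Machine Management service):

```powershell
# Show whether the host is allowed to span NUMA nodes when placing VMs
Get-VMHost | Select-Object NumaSpanningEnabled

# List the physical NUMA nodes the host exposes
Get-VMHostNumaNode

# Disable NUMA spanning so every VM is backed by local memory only
Set-VMHost -NumaSpanningEnabled $false

# Inspect the virtual NUMA limits presented to a given VM
Get-VMProcessor -VMName "SQL01" |
    Select-Object MaximumCountPerNumaNode, MaximumCountPerNumaSocket
```

With spanning disabled, a VM that cannot fit inside a single physical NUMA node will simply fail to start, which is often preferable to silently degraded performance for NUMA-aware workloads like SQL Server.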
An overview of CPU sockets, cores, NUMA nodes and K-Groups, based on a slide deck by Microsoft
Virtual Volumes (VVols) backup – how it works and which solutions should be used
Posted by Alex Samoylenko on
April 27, 2016
Many of you have heard of Virtual Volumes (VVols), a storage technology that can substantially increase storage I/O performance in a VMware vSphere environment by using logical volumes for individual virtual machine components and offloading some storage operations to the disk arrays.
Let’s see how VVols technology impacts the virtual machine backup process. First, let’s review the main backup approaches in virtual environments:
- Backup by mounting virtual disks (Hot Add backup) – the VMDK disk of one VM is mounted to another VM and backed up from there.
- Backup over Ethernet (so-called NBD backup) – the standard VM backup over the Ethernet network: a VM snapshot is taken (the commands are processed by the ESXi host), the virtual disk is transferred to the backup target, then the snapshot is merged back into the base disk, and the machine keeps working as before.
- Backup over the SAN (SAN-to-SAN backup) – a dedicated backup server takes the VM snapshot through the special Virtual Disk API mechanism and reads the data directly from storage over the SAN, without involving the ESXi host, the backup target VM, or the Ethernet network.
The last one is the fastest and most efficient approach, but it requires special interfaces, namely the vSphere APIs and the Virtual Disk Development Kit (VDDK), which must be available on the dedicated server.
Manage It Already
Posted by Jon Toigo on
April 27, 2016
As I review the marketing pitches of many software-defined storage products today, I am concerned by the lack of attention in any of the software stack descriptions to any capabilities whatsoever for managing the underlying hardware infrastructure. This strikes me as a huge oversight.
The truth is that delivering storage services via software (orchestrating and administering the delivery of capacity, data encryption, data protection and other services to the data hosted on a software-defined storage volume) is only half of the challenge of storage administration. The other half is maintaining the health and integrity of the gear and the interconnect cabling that provide the all-important physical underlayment of an increasingly virtualized world.
Getting Started with Azure Resource Manager and Azure Deployment – Part I
Posted by Charbel Nemnom on
April 26, 2016
In this series of three blog posts, we will show you how to create and deploy Infrastructure as a Service (IaaS) applications using Azure Resource Manager templates.
In this guide, we will explain the benefits of Azure Resource Manager and resource groups, then we will examine and analyze a number of Quick Start Azure Resource Manager templates that are available on GitHub. In the next post, we will create and configure a GitHub account, if you don’t already have one, to host a GitHub repository for a Quick Start template, and lastly we will examine Visual Studio Code integration with Git and push commits to a remote repository.
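To give a sense of what the Quick Start templates on GitHub look like, here is a minimal skeleton of an Azure Resource Manager template. It declares a single storage account; the parameter name and the `Standard_LRS` account type are illustrative, and the `apiVersion` shown is one that was current for this resource type at the time of writing:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string",
      "metadata": { "description": "Globally unique name of the storage account" }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[parameters('storageAccountName')]",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "properties": { "accountType": "Standard_LRS" }
    }
  ]
}
```

Every template follows this same shape: a schema reference, a content version, parameters supplied at deployment time, and a list of resources to create in the target resource group.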
SanDisk X400 SSD Review
Posted by Oksana Zybinskaya on
April 25, 2016
SanDisk is one of the few companies currently offering 1TB of storage on a single-sided M.2 card – the X400 SSD. The X400 also comes in a 2.5″ 7mm-height form factor, but the M.2 configuration is the main selling point of this line. The 1TB M.2 X400 card lets ultra-thin notebooks get the most storage possible without sacrificing performance or battery life. The X400 uses a SanDisk technology called nCache 2.0, which employs a multi-tiered architecture and provides better performance during taxing operations like sustained sequential writing, while SanDisk’s second-generation TLC flash node provides maximum reliability and energy efficiency.
The X400 comes with TCG Opal 2.0 support, making it compatible with third-party security software vendors (ISVs). It also has Self-Encrypting Drive capabilities, which give users access to hardware-based 256-bit AES encryption. Features like the DataGuard client and LDPC error correction extend its lifespan to a rated endurance of 320 TB written.
The X400 demonstrates very good results across different benchmark tests, with especially outstanding results in 4K random transfers and aligned reads. It also handles the Home Theatre PC profile quite well (the test plays one 720P HD movie in Media Player Classic and one 480P SD movie in VLC, downloads three movies simultaneously through iTunes, and records one 1080i HDTV stream through Windows Media Center over a 15-minute period); its result of 327 MB/s places it mid-pack compared to similar products.
The X400 has been designed to deliver all the benefits of flash while preserving system design flexibility thanks to the M.2 form factor. TLC NAND keeps the product price-competitive with the top drives in its category.
– Unique 1TB drive in M.2 form factor
– Energy efficiency
– Endurance of 320TBW
– Good performance-to-price ratio
– Noticeably weaker performance under mixed workloads
5 tips to help you explore the world of PowerShell scripting
Posted by Mike Preston on
April 25, 2016
In 2006, Windows administrators got their first glimpse of what the world of PowerShell scripting might look like when PowerShell, then known as Monad, was released to the world as a beta. Ten years later, we are now on the 5th iteration of the scripting language and have seen a thriving ecosystem form around its Verb-Noun style of automation. PowerShell is a powerful tool and can be an amazing time-saver for any Windows administrator to know. That said, as with any scripting or programming language, getting started can be a little daunting, especially if you have no scripting experience to fall back on. Below we will take a look at 5 tips that can save you both time and energy when writing your PowerShell scripts.
There have been numerous times when I have found myself staring blankly at the glowing blue PowerShell console, racking my brain: not trying to figure out how to use a specific cmdlet, but trying to figure out which cmdlet to use. There are over 1,000 cmdlets built into my default install of PowerShell 4.0 alone, without loading any external modules or snap-ins at all. Trying to tab-complete your way to the proper one for the job can be a time-consuming, enraging experience. This is where the Get-Command cmdlet comes in handy. Get-Command lists all of the PowerShell cmdlets available within the current console, showing various tidbits of information about each command, such as its version, source and name. Simply running Get-Command by itself is not very useful, as it will list every single cmdlet available in the console session.
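Filtering by verb, noun, or module narrows that flood down to something usable. A few examples (the noun pattern and module name are just illustrations):

```powershell
# Everything the current session knows about -- overwhelming on its own
Get-Command

# Only the cmdlets that retrieve something service-related
Get-Command -Verb Get -Noun *Service*

# Everything exported by one specific module
Get-Command -Module Hyper-V

# How many cmdlets are actually available in this session?
(Get-Command -CommandType Cmdlet).Count
```

Combining `-Verb` and `-Noun` is usually the fastest way to go from "I know roughly what I want to do" to the exact cmdlet name.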
Extend Active Directory to Microsoft Azure
Posted by Romain Serre on
April 21, 2016
Extending Active Directory to Microsoft Azure is a common scenario when you implement a hybrid cloud. For example, VMs protected with Azure Site Recovery may need access to Active Directory even if the on-premises datacenter is unreachable. You can also extend your Active Directory to Azure when you run production workloads in Azure VMs, to avoid deploying a new forest or sending all Active Directory traffic over the VPN connection. In this topic, we will see how to extend Active Directory to Microsoft Azure.
Currently I have an on-premises datacenter with two domain controllers hosting the int.homecloud.net directory. The network subnet is 10.10.0.0/24. On the Microsoft Azure side, I will deploy a Virtual Network with a 10.11.0.0/24 subnet. Two Azure VMs will be deployed in this network.
Then I will implement a Site-to-Site VPN based on IPsec to connect my datacenter to the Virtual Network hosted in Microsoft Azure, and finally add the Azure domain controllers to my domain.
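The Virtual Network side of this setup can be sketched with the AzureRM PowerShell module. The resource group, names, location, and address space below are illustrative; only the 10.11.0.0/24 subnet matches the design above:

```powershell
# Subnet that will host the two Azure domain controllers
$subnet = New-AzureRmVirtualNetworkSubnetConfig -Name "AD-Subnet" `
    -AddressPrefix "10.11.0.0/24"

# Virtual Network that the Site-to-Site VPN will terminate in
New-AzureRmVirtualNetwork -Name "HomeCloud-VNet" `
    -ResourceGroupName "HomeCloud-RG" `
    -Location "West Europe" `
    -AddressPrefix "10.11.0.0/16" `
    -Subnet $subnet
```

Once the VPN gateway is up and routing to 10.10.0.0/24 works, the Azure VMs can be promoted to domain controllers just like any on-premises server.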
OMS alerting is now generally available
Posted by Oksana Zybinskaya on
April 14, 2016
Microsoft Operations Management Suite alerting has moved from preview mode to generally available status.
In addition to numerous bug fixes, several new improvements were introduced:
WebHook support: provides a WebHook URL to send alerts to, which allows integration with tools like Slack and a wide variety of incident management tools.
Turn alerts on or off: Individual alerts can now be turned on or off.
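As a concrete example of the WebHook integration, a Slack incoming webhook accepts a small JSON document posted to the WebHook URL. The alert text, username, and emoji below are illustrative:

```json
{
  "text": "OMS alert fired: CPU > 90% on host HV01",
  "username": "oms-alerts",
  "icon_emoji": ":warning:"
}
```

Any tool that exposes a similar HTTP endpoint accepting JSON can be wired up the same way, which is what makes WebHook support a generic integration point rather than a per-product connector.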
ReFS virtualization workloads test. Part 1
Posted by Anton Kolomyeytsev on
April 12, 2016
ReFS (Resilient File System – https://msdn.microsoft.com/en-us/library/windows/desktop/hh848060%28v=vs.85%29.aspx) is Microsoft’s proprietary file system that features enhanced protection from common errors and silent data corruption. Essentially, it is a file system that can repair corrupted files on the fly, provided the underlying storage is redundant and compatible.
We decided to run some tests to see how well the Resilient File System performs under typical virtualization workload.
With the first test, we set out to check I/O performance and see what exactly influences it. For this purpose, we decided to examine I/O behavior in ReFS with the FileIntegrity option off and on. The FileIntegrity option stands behind the data protection feature of ReFS, being responsible for the scanning and repair processes. So, we were aiming to check how the option affects random I/O, which is typical of virtualization workloads.
During the test we monitored how a ReFS-formatted disk works with FileIntegrity off and FileIntegrity on, while doing random 4K block writes.
Based on the results of the test, we concluded that ReFS with FileIntegrity off works much like a conventional file system, such as its predecessor NTFS (https://en.wikipedia.org/wiki/NTFS), in terms of processing random write requests. All writes were passed through as is, without any change in LBA request size. In this mode, ReFS is just a regular file system, well-suited for modern high-capacity disks and huge files.
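The FileIntegrity setting used in these tests can be inspected and toggled per volume or per file with the Storage module cmdlets. A sketch (the drive letter and VHDX path are illustrative):

```powershell
# Format a volume as ReFS with integrity streams disabled by default
Format-Volume -DriveLetter R -FileSystem ReFS -SetIntegrityStreams $false

# Check the integrity setting of an existing file (e.g. a virtual disk)
Get-FileIntegrity -FileName "R:\VMs\sql01.vhdx"

# Turn integrity streams off for that file only
Set-FileIntegrity -FileName "R:\VMs\sql01.vhdx" -Enable $false
```

Because the setting is inherited from the folder at file-creation time, disabling it per file after the fact (as above) is how you exempt existing VM disks from checksumming without reformatting the volume.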