
iSCSI: LACP vs. MPIO

  • March 31, 2015
  • 12 min read
Anton Kolomyeytsev is StarWind CTO, Chief Architect & Co-Founder. Microsoft Most Valuable Professional [MVP] in Cluster 2014 & 2015. SMB3, NFS, iSCSI & iSER, NVMe over Fabrics.

LACP and MPIO testing scheme

Here is a comparison of two technologies with a similar task but different methods of accomplishing it: Link Aggregation Control Protocol (LACP) and Multipath I/O (MPIO). Both are aimed at providing higher throughput when a single connection can't handle the load. To achieve that, LACP bundles several physical ports into a single logical channel. MPIO, on the other hand, utilizes more than one physical path, even if the working application does not support more than one connection. Both technologies seem equally effective at first glance, but a closer study shows that one of them is better at achieving the goal. The post is practical, so expect detailed research with screenshots and a complete analysis of the technologies in a test case.

LACP and MPIO solve the same problem in different ways, so it is safe to assume that the results will differ as well. This is actually a draft, and we have no idea why it got posted. Still, the question stands and the work is in progress, so stay tuned and don't overreact. The post will remain here so that you don't end up with dead links. Let's see.

Introduction

LACP and MPIO technologies seem to play the same role:

  1. Providing fault-tolerance.
  2. Increasing performance for operations where a single link is otherwise not enough.

The methods are different. LACP bundles physical ports together into a bigger channel, basically turning a few small “pipes” into one big “pipe”. MPIO provides up to 32 alternate data paths, achieving pretty much the same effect. As a result, there is a gain in both redundancy and performance. This test is going to show which technology is more convenient and effective in a Windows environment, specifically with the Microsoft iSCSI target and initiator.
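For orientation, here is roughly how each construct shows up on Windows Server 2012 R2 once configured (a hedged sketch; both commands assume the corresponding feature is already installed, and the names reported are whatever you created):

# LACP side: the team appears as one logical NIC built from several physical members
Get-NetLbfoTeam | Format-List Name, Members, TeamingMode, LoadBalancingAlgorithm
# MPIO side: mpclaim reports the claimed disks and the number of paths to each
mpclaim.exe -s -d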

Content

Here is the list of hardware and software we’re using for the test.

Hardware:

The following hardware will be used for the test configuration:
  • 1x client server: Intel Core i7, 16 GB RAM, 1 TB SATA HDD for OS, 2x dual-port 1 Gb NIC
  • 1x storage server: Intel Core i7, 16 GB RAM, 1 TB SATA HDD for OS, 8x 250 GB SSD in RAID 0 for storage, 2x dual-port 1 Gb NIC
  • RAID controller: LSI MR9361-8i
  • NIC: Intel PRO/1000 PT Dual Port

Scheme:

Software:

Windows Server 2012 R2 on all machines, with the following components (an installation sketch follows the list):
  • iSCSI Target Server (on the storage server only)
  • Multipath-IO
  • iSCSI Initiator
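For reference, these components can be installed from an elevated PowerShell prompt roughly as follows (a hedged sketch; the feature names are the standard Windows Server 2012 R2 ones):

# Storage server: iSCSI Target Server role plus the MPIO feature
Install-WindowsFeature FS-iSCSITarget-Server -IncludeManagementTools
Install-WindowsFeature Multipath-IO -IncludeManagementTools

# Client: MPIO feature; the iSCSI initiator service is built in and only needs to be started
Install-WindowsFeature Multipath-IO -IncludeManagementTools
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI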

Network test

Before testing, we have to make sure the network itself delivers the throughput it should: 1 Gbps per link. We'll use the two most common tools, IPERF and NTTTCP, to check it twice. Iperf is one of the most widely used tools for measuring maximum TCP bandwidth. NTTTCP is considered an improved instrument with the same goal but more capabilities. In any case, we just need to check the network bandwidth, so either will do nicely.

IPERF

Running the command on the server:
iperf.exe -s --port 921 -w 512K

On the client:
iperf.exe -c IP --port 921 --parallel 4 -w 256K -l 64K -t 30

Result:
------------------------------------------------------------
Client connecting to 192.168.0.202, TCP port 921
TCP window size: 256 KByte
------------------------------------------------------------
[ ID] Interval Transfer Bandwidth
[SUM] 0.0-30.0 sec 3.30 GBytes 944 Mbits/sec
------------------------------------------------------------
Client connecting to 172.16.10.10, TCP port 921
TCP window size: 256 KByte
------------------------------------------------------------
[ ID] Interval Transfer Bandwidth
[SUM] 0.0-30.0 sec 3.30 GBytes 944 Mbits/sec
------------------------------------------------------------
Client connecting to 172.16.20.10, TCP port 921
TCP window size: 256 KByte
------------------------------------------------------------
[ ID] Interval Transfer Bandwidth
[SUM] 0.0-30.0 sec 3.30 GBytes 944 Mbits/sec
------------------------------------------------------------
Client connecting to 172.16.30.10, TCP port 921
TCP window size: 256 KByte
------------------------------------------------------------
[ ID] Interval Transfer Bandwidth
[SUM] 0.0-30.0 sec 3.30 GBytes 944 Mbits/sec
------------------------------------------------------------

Here’s what we got with NTTTCP:

Running the command on the server (receiver); the -m 4,0,IP mapping means four threads on CPU 0, bound to the given IP address:
Ntttcpr.exe -m 4,0,IP

On the client (sender):
Ntttcps.exe -m 4,0,IP

Result:

Run    Total Bytes (MB)  Realtime (s)  Average Frame Size  Total Throughput (Mbit/s)
=====  ================  ============  ==================  =========================
IP 1   5368.709120       45.501        1456.198            948.368
IP 2   5368.709120       43.411        8187.915            989.630
IP 3   5368.709120       44.314        8187.827            1076.695
IP 4   5368.709120       43.404        8187.540            989.704

When going to the doctor, you'd probably want a second opinion on your health, just to be sure. That's why we used two tools, and now it's clear that our network works fine and is ready for the test.

Adjustment process

SSD RAID parameters:

Creating the virtual disk and target in MS iSCSI Target Server:
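The original post shows this step with screenshots; on the storage server, a rough PowerShell equivalent looks like this (a hedged sketch; the VHDX path, size, target name, and initiator IQN are illustrative, not taken from the original setup):

# Create a VHDX-backed iSCSI virtual disk on the SSD RAID 0 volume (path and size are examples)
New-IscsiVirtualDisk -Path "D:\iSCSIVirtualDisks\LUN1.vhdx" -SizeBytes 1800GB
# Create a target and allow the client's initiator to connect to it (IQN is an example)
New-IscsiServerTarget -TargetName "Target1" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:client1"
# Map the virtual disk to the target
Add-IscsiVirtualDiskTargetMapping -TargetName "Target1" -Path "D:\iSCSIVirtualDisks\LUN1.vhdx"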

Adjustments for configuration with MPIO:

Turning MPIO on, then connecting the iSCSI target through the initiator to one address (testing), two addresses (testing), and four addresses (testing again). A sketch of the PowerShell side follows.
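This is a hedged sketch of how those connections can be made from the client in PowerShell; the IQN is illustrative, and the portal addresses simply reuse the ones from the network test above (the original setup may have used a different subset):

# Let MPIO claim iSCSI-attached disks (the Multipath-IO feature must be installed; a reboot may be required)
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Register each target portal and open one session per address
$iqn = "iqn.1991-05.com.microsoft:server-target1-target"   # example IQN
foreach ($portal in "192.168.0.202", "172.16.10.10", "172.16.20.10", "172.16.30.10") {
    New-IscsiTargetPortal -TargetPortalAddress $portal
    Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress $portal -IsMultipathEnabled $true -IsPersistent $true
}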

Adjustments for configuration with LACP:

Creating a new team, using two adapters at first (on both the first and the second server), testing, then adding two more adapters to the team and testing again (a scripted equivalent is sketched below).

We tried different Teaming mode and Load balancing mode settings during the test; the results were the same.
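For reference, the teaming step can be scripted roughly like this (a hedged sketch; the team and adapter names are illustrative, and TeamingMode / LoadBalancingAlgorithm are the settings we varied):

# Create an LACP team from two adapters first (repeat on both servers; names are examples)
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1", "NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts
# Later, add the remaining two ports to the same team
Add-NetLbfoTeamMember -Name "NIC3" -Team "Team1"
Add-NetLbfoTeamMember -Name "NIC4" -Team "Team1"
# Assign the team IP (172.16.40.10 on the first server, 172.16.40.20 on the second; /24 is assumed)
New-NetIPAddress -InterfaceAlias "Team1" -IPAddress 172.16.40.10 -PrefixLength 24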

A new virtual adapter appears in the system. We set the IP addresses (172.16.40.10 on the first server and 172.16.40.20 on the other), and jumbo frames go through. Checking with IPERF gives about 2 Gb/s, which is a great result.

------------------------------------------------------------
Client connecting to 172.16.40.10, TCP port 921
TCP window size: 256 KByte
------------------------------------------------------------
[ ID] Interval Transfer Bandwidth
[SUM] 0.0-30.0 sec 6.87 GBytes 1.97 Gbits/sec

The tests show that LACP doesn't scale with iSCSI here, because the MS iSCSI Target doesn't support Multiple Connections per Session (MCS).

We may use MPIO as a workaround, creating several sessions to a single IP address, as sketched below.
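A hedged sketch of that workaround: open several iSCSI sessions to the same portal address and let MPIO aggregate them (the IQN is the illustrative one used earlier, and the address is the team IP from the LACP setup):

# Open four sessions to the same portal; MPIO then spreads I/O across them
$iqn = "iqn.1991-05.com.microsoft:server-target1-target"   # example IQN
1..4 | ForEach-Object {
    Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress "172.16.40.10" -IsMultipathEnabled $true
}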

Results

IOMETER: raw disk 1.8 TB, 64 outstanding I/Os, test time: 5 min.

Conclusion

Both LACP and MPIO provide the promised redundancy, offering failover without user involvement. That is a good thing, but when it comes to performance, MPIO clearly wins: the more data paths it uses, the better the throughput.

LACP has a serious drawback, though. While it can work with iSCSI, the Microsoft iSCSI Target still doesn't support Multiple Connections per Session (MCS), so LACP does not scale. Therefore, LACP gives no performance boost here unless you also use MPIO with it, and combining both technologies is simply not reasonable: you can just take MPIO instead. LACP will do fine with a target that supports MCS (we'll check that in one of our upcoming tests). Our conclusion for this particular configuration is that MPIO is the better choice.
