Why WAN Acceleration Is Not Enough for VDI Success

The purpose of this article is to highlight why plain old WAN acceleration (Riverbed, Citrix, Cisco, and others) is not enough to optimize the end user experience at remote sites within a virtual desktop infrastructure implementation. What IT, and more importantly users, need is a way to configure settings that respond dynamically to the user’s choice of applications as their workload changes. For instance, if the user is viewing YouTube or another multimedia site, the QoS tag can be dynamically lowered in priority; when they switch back to an application such as SAP, the tag can be raised again.

Overview

Providing network communication Quality of Service (QoS) guarantees in VDI is a significant problem.  Whether it’s a terminal, thin client, repurposed PC or traditional PC with terminal client software, communication to and from the VM is facilitated through a connection brokering technology or “infrastructure access” package. Common protocols include PC-over-IP, Remote Desktop Protocol (RDP) and ICA.

In traditional physical PC network architectures, QoS guarantees are achieved through a standard called Differentiated Services, or DiffServ. DiffServ is a mechanism for classifying and managing network traffic. The goal is to provide guaranteed service (GS) to critical network traffic such as voice and video while providing “best-effort” delivery to lower-priority, non-critical services such as web traffic or file transfers.

DiffServ operates on a traffic classification principle where individual data packets are placed into a limited number of classes. Each router is configured to differentiate traffic based on its class, and each class can be managed in different ways. This ensures that critical network traffic gets priority over non-critical traffic.

Per-Hop Behavior (PHB) is indicated by encoding a 6-bit value called the Differentiated Services Code Point (DSCP) into the 8-bit Differentiated Services (DS) field of the IP packet header. While a network could employ up to 64 different traffic classes using different DSCP markings, in practice the following four Per-Hop Behaviors are commonly used:

• Default PHB: typically best-effort traffic
• Expedited Forwarding (EF) PHB: dedicated to low-loss, low-latency traffic
• Assured Forwarding (AF) PHB: gives assurance of delivery under prescribed conditions
• Class Selector PHBs: defined to maintain backward compatibility with the IP Precedence field

Default PHB
A default PHB is the only required behavior. In general, traffic that does not meet the requirements of any of the other defined classes is placed in the default PHB. Typically, the default PHB has best-effort forwarding characteristics. The recommended DSCP for the default PHB is ‘000000’ (in binary).

Expedited Forwarding (EF) PHB – DSCP 46 (101110 in binary)
The EF PHB has the characteristics of low delay, low loss and low jitter. These characteristics are suitable for voice, video and other real-time services. EF traffic is often given “strict priority queuing” above all other traffic classes. Because an overload of EF traffic will cause queuing delays and affect the jitter and delay tolerances within the class, EF traffic is often strictly controlled through admission control, policing and other mechanisms. Typical networks limit EF traffic to no more than 30 percent of a link’s capacity, and often much less. For more information, see RFC 3246, where the IETF defines the Expedited Forwarding behavior.
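
As a quick illustration of how the 6-bit code point sits inside the 8-bit DS field, here is a minimal Python sketch (purely illustrative, not part of any product discussed here). The two low-order bits of the DS field are used for ECN, which is why the EF code point 46 shows up on the wire as DS byte 0xB8:

```python
def dscp_to_ds_byte(dscp: int) -> int:
    """Shift a 6-bit DSCP into the upper bits of the 8-bit DS field."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP must fit in 6 bits (0-63)")
    return dscp << 2  # the two low-order bits are reserved for ECN

ef = 0b101110                 # Expedited Forwarding, DSCP 46
print(dscp_to_ds_byte(ef))    # 184, i.e. 0xB8 in the DS byte
```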

Assured Forwarding (AF) PHB
Assured Forwarding gives assurance of delivery as long as the traffic does not exceed a subscribed rate. It defines four traffic classes, each with three levels of drop precedence, for a total of twelve code points (AF11 through AF43). For more information, see RFC 2597, where the IETF defines the Assured Forwarding behavior.
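
The AF code points follow a simple pattern, so rather than memorizing the table in RFC 2597 they can be computed; the short Python sketch below is illustrative only:

```python
def af_dscp(af_class: int, drop_precedence: int) -> int:
    """DSCP for AFxy per RFC 2597: four classes, three drop precedences each."""
    if not (1 <= af_class <= 4 and 1 <= drop_precedence <= 3):
        raise ValueError("AF classes are 1-4, drop precedences are 1-3")
    return 8 * af_class + 2 * drop_precedence

print(af_dscp(1, 1))  # AF11 -> 10
print(af_dscp(4, 3))  # AF43 -> 38
```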

Class Selector PHB
Class Selector PHBs provide legacy support in DiffServ. Before the DiffServ standard, IP networks could use the Precedence field in the Type of Service (TOS) byte of the IP header to mark priority traffic. The TOS byte and IP Precedence were not widely used, and the IETF agreed to reuse the TOS byte as the DS field for DiffServ networks. In order to maintain backward compatibility with network devices that still use the Precedence field, DiffServ defines the Class Selector PHBs.
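
Because the three legacy IP Precedence bits occupy the top of the DSCP, each Class Selector code point is simply the precedence value shifted left by three bits; the sketch below (illustrative only) prints that mapping:

```python
for precedence in range(8):
    dscp = precedence << 3  # CS0..CS7 -> 0, 8, 16, ..., 56
    print(f"IP Precedence {precedence} -> Class Selector DSCP {dscp} ({dscp:06b})")
```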

VDI/QoS Problem

In a traditional physical PC network architecture, applications and their associated data are assigned specific ports on which to communicate. Web traffic is generally port 80, SQL traffic is port 1433, FTP traffic is port 21, and so on. Network routers take advantage of this segregation of traffic by port to prioritize business-critical traffic over non-critical traffic.

The problem with QoS in today’s VDI implementations resides in the underlying display protocol. Communication between the VM and the thin client or terminal is carried over a single protocol on a single port, regardless of the application in use, so all network traffic is treated the same, whether it is video, email, file transfer or some other application. As an example, if you’re using PC-over-IP to connect your thin client to a VM, the data that flows between the two devices travels via the PC-over-IP protocol on port 50002. There is no differentiation between the various applications in use, because all traffic to the VM is on the same port and is therefore treated with the same priority. A person streaming a YouTube video will consume a predominant share of the available bandwidth, impacting users performing their regular business-related computing activity, and the router cannot segregate and prioritize the traffic in this scenario. Further complicating matters, the user frequently changes applications over the same connection over time.
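
To make the contrast concrete, here is a deliberately simplified sketch (the port-to-class table is hypothetical, not a real router configuration): a port-based classifier can tell SQL from FTP on a physical PC network, but every application inside a VDI session arrives on the same display-protocol port and falls into a single class:

```python
PORT_CLASSES = {80: "best-effort web", 1433: "business-critical SQL", 21: "bulk FTP"}
PCOIP_PORT = 50002  # single port carrying all VDI display traffic

def classify(port: int) -> str:
    """Classic port-based classification, as a router ACL might do it."""
    return PORT_CLASSES.get(port, "default")

print(classify(1433))        # "business-critical SQL" - distinguishable by port
print(classify(PCOIP_PORT))  # "default" - YouTube, SAP and email all look identical
```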

So how do you prioritize the traffic so that users running bandwidth-intensive applications or websites do not adversely impact productive users? With Lakeside Software’s SysTrack you can easily configure DiffServ priority based on the application or website the user is actively working with. Priority is then dynamically applied to the Windows TCP/IP stack, resulting in packets being DS-tagged according to the application in use and the priority assigned to it. Unique to this solution, priority is established dynamically based on the applications being executed.
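
SysTrack’s own mechanism is not shown here, but as a rough illustration of what host-side DS tagging means, an application can ask the IP stack to mark its own flow with an EF code point through a standard socket option. Note that Windows generally does not honor IP_TOS unless QoS policy permits it, so this sketch is conceptual only:

```python
import socket

EF_DSCP = 46  # Expedited Forwarding

# Request that outgoing packets on this flow carry DS byte 0xB8 (DSCP 46 << 2).
# On Linux this marks the packets directly; Windows typically requires a QoS
# policy rather than honoring IP_TOS, which is why a management layer that
# applies priority in the stack on the user's behalf is attractive.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)
```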

Let’s use the above example, running various applications over PC-over-IP on port 50002. We simply define specific applications and the priority they should receive. We might decide that SAP traffic and a particular URL associated with a company web application have a higher priority than other traffic. With a simple SysTrack configuration, we assign an Expedited Forwarding (EF) Per-Hop Behavior DSCP value of 46. As a result, the IP packets that carry the high-priority applications are tagged in the DS field and given higher priority at the router, even though all traffic operates over port 50002.
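
Conceptually, the configuration is a priority map that is re-evaluated as the user’s active application changes. The sketch below is hypothetical (the executable name, URL and DSCP choices are illustrative, not a SysTrack export):

```python
PRIORITY_MAP = {
    "saplogon.exe": 46,                     # SAP client -> Expedited Forwarding
    "https://apps.example.com/orders": 34,  # company web app -> AF41
}
DEFAULT_DSCP = 0                            # best effort for YouTube and the rest

def dscp_for(active_app_or_url: str) -> int:
    """Return the DSCP to apply for whatever the user is actively working with."""
    return PRIORITY_MAP.get(active_app_or_url, DEFAULT_DSCP)

print(dscp_for("saplogon.exe"))             # 46 while the user works in SAP
print(dscp_for("https://www.youtube.com"))  # 0 once they switch to YouTube
```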

In the example above, with a 128 Kbps WAN link between the terminal and the VM, even though all of the data is carried via PC-over-IP on port 50002, the router can prioritize the important SAP traffic over the YouTube video data by looking at the DS field tag.

Conclusion
As virtual desktop infrastructure implementations grow beyond proofs of concept and limited production pilots, branch office and WAN considerations will take on a higher priority. WAN acceleration from the prominent vendors in the space is not enough to ensure that the data infrastructure is adaptable and dynamic. The only way to reach that level of adaptability is to have a firm grasp of the workload, and the deep analytic capabilities about that workload, to drive infrastructure performance for the benefit of the users.


5 thoughts on “Why WAN Acceleration Is Not Enough for VDI Success”

  1. You raise some good points about adjusting WAN accelerator settings based on applications. However, in addition to hardware WAN accelerators, there are lower cost solutions for improving remote user experience in VDI. Ericom Blaze is a software based RDP Accelerator which can work standalone and also in conjunction with WAN Accelerators and add a lot of value to the network.

    Ericom Blaze is less expensive than hardware-based WAN accelerators because it is a software-only solution, and does not require specialized hardware.

    To top it off, Ericom Blaze is very easy to deploy and install, and usually does not require any configuration. In most cases, Ericom Blaze can be downloaded, installed and ready for use within minutes.

    Read more about Blaze and download a free evaluation at:
    http://www.ericom.com/ericom_blaze.asp?URL_ID=708

    Adam

  2. I think your comments do apply to all the WAN opt vendors other than Riverbed (and perhaps Citrix). However you have too simplistic a model for how Riverbed’s Application layer module for Citrix works. And of course you’re ignoring the potential compression/dedup benefits the WAN opt vendors will add.

    Is SysTrack changing the VDI connection into multiple streams? If not, I’m puzzled how just changing the DSCP values will help much, as TCP will require the packets to arrive in order to make progress.

    I do think the idea of looking deeper into specific URLs within the session, that’s neat.
    [Full disclosure: Riverbed Employee]

    • What we do is fully compatible with opt vendors; what you (Riverbed) do is great, and is probably essential in many environments. Where we come in is when you just don’t have enough bandwidth to get the job done, even after optimization.

      We don’t decompose into multiple streams. We are not concerned so much with the individual importance of the apps that one user is running, but rather the relative importance of the traffic from one user as compared to others. The idea is that we want to prioritize traffic for users working on the most critical apps, at the expense of those users working on less (or non-) critical apps. So as a user changes the app that they are working with, the priority of the user and how their stream is handled also changes.

      Since we don’t change the stream, but just adjust the priority of the whole stream, there’s nothing to worry about in terms of TCP packet ordering. In fact, this approach works on UDP flows as well as TCP.


