• Category Archives: View
  • Why vGPU is a Requirement for VDI

    … or at least it will be..
     
    I am not saying this is a requirement today for every use case or workload, but I think in some ways it will become standard. A recent Twitter conversation among a few folks I highly respect instigated this thought exercise. Today vGPU isn't even a capability in vSphere (though it is coming); vSphere does have vDGA and vSGA for graphics acceleration. XenServer has had vGPU since 2013, when it was announced as a tech preview with 6.2. But let's take a step back and cover what vGPU is first, and then I will present my irrational thoughts on the matter.

    First off, let's start at the beginning…

    So what is vGPU? From NVIDIA's web page:

    NVIDIA GRID™ vGPU™ brings the full benefit of NVIDIA hardware-accelerated graphics to virtualized solutions. This technology provides exceptional graphics performance for virtual desktops equivalent to local PCs when sharing a GPU among multiple users.

    GRID vGPU is the industry’s most advanced technology for sharing true GPU hardware acceleration between multiple virtual desktops—without compromising the graphics experience. Application features and compatibility are exactly the same as they would be at the desk.

    With GRID vGPU technology, the graphics commands of each virtual machine are passed directly to the GPU, without translation by the hypervisor. This allows the GPU hardware to be time-sliced to deliver the ultimate in shared virtualized graphics performance

    So to break that down…

    NVIDIA came up with some really cool graphics cards whose graphical capability can be split up and shared directly among multiple virtual machines, which greatly improves performance. The NVIDIA GRID K1 and K2 cards are designed for just this purpose.
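    To make the "time-sliced" idea a bit more concrete, here is a deliberately simplified Python sketch of round-robin sharing of one GPU among several VM command queues. Everything in it (the VM names, the commands, the slice length) is made up for illustration; the real scheduling happens in NVIDIA's driver and hardware, not in code you would write.

    ```python
    from collections import deque

    # Toy model only: real GRID vGPU scheduling happens in the NVIDIA driver/firmware.
    # The VM names, queue contents, and the 2 ms slice below are illustrative assumptions.
    TIME_SLICE_MS = 2

    vm_queues = {
        "vdi-desktop-01": deque(["draw_window", "render_video_frame"]),
        "vdi-desktop-02": deque(["composite_desktop"]),
        "vdi-desktop-03": deque(["render_cad_viewport", "rotate_model", "draw_window"]),
    }

    def run_round_robin(queues):
        """Give each VM's command queue a short turn on the (pretend) GPU."""
        while any(queues.values()):
            for vm, queue in queues.items():
                if not queue:
                    continue
                cmd = queue.popleft()  # in vGPU, commands go to the GPU without hypervisor translation
                print(f"[GPU] {vm}: executing '{cmd}' for up to {TIME_SLICE_MS} ms")

    run_round_robin(vm_queues)
    ```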

    Example of what vGPU can do..

    Gunnar Berger (CTO of the Citrix Desktops and Applications Group) did a great video on YouTube, back when he was an analyst with Gartner, comparing vSGA and vGPU. I highly recommend checking out the other videos he has posted on this and other subjects as well.

    So back to the original topic at hand..

    One only needs to sit and reflect on the history and evolution of desktop PCs to see that times are changing. Browsers, Microsoft Office, and other programs all benefit from and are accelerated by GPUs. This is not solely relegated to those working with digital images, AutoCAD, SolidWorks, MATLAB, GIS programs, etc. Sure, vGPU is designed to handle those workloads. One might call these graphics-intensive programs the last mile of desktop virtualization, i.e. workloads that were historically bad fits for VDI. But in my mind this is just the beginning, as almost every program out there begins to take advantage of the almighty GPU.

    As the desktop progresses and adds capability, so must VDI just to keep up. Many people strive for performance equal to or better than the desktop, but even today's cheapest laptops and desktops come with HD graphics chipsets that share the ever-increasing onboard RAM. I just purchased a PC for one of my many children to build him a gaming machine; he is using the onboard graphics for now and running games like Skyrim and Minecraft (which uses more GPU than you think; go look at FPS charts across video cards). Sure, your typical office worker may not be playing games, or maybe they are…

    Software developers are NOT designing their programs to look simple anymore, whether it be a web app or a good old installable application. They are designing them to run fast and look great, using all of the resources at their disposal, including hardware GPUs. They are not designing programs to run only in a virtual desktop.

    How can we deliver performance even equal to the desktop users have today without giving them these capabilities, when even core applications like Microsoft Office and the browser (which many apps are now rendered in) use hardware acceleration via the GPU? Look at products like HP Moonshot, which give each user a dedicated quad-core CPU, 8 GB of RAM, and an integrated Radeon GPU. The writing is on the wall: GPU in VDI is here to stay. We're just at the beginning of the curve.

    So I submit that vGPU is a requirement. Please feel free to share your thoughts on this.


  • Varrow Madness Part 3: The Labs (behind the scenes)

    Varrow Madness had a heavy emphasis on hands-on labs this year. We Varrowites definitely like getting our hands dirty, and we wanted to give you the chance to get your hands on the products as well. We wanted to give Madness attendees a taste of both VMware View and Citrix XenDesktop and to let them see the technical pieces of the provisioning process required to deliver a virtual desktop to their users.

    Our overall vision for how the labs would run at Madness this year was built around our experiences with the hands-on labs at the major vendor conferences, such as VMworld, VMware PEX, and Citrix Synergy. Comparing our labs to those set the bar pretty high, and I think we achieved a very well-developed lab. The Varrow-hosted labs for the EUC practice were not the only labs available at Varrow Madness, either; attendees also had access to the full VMware Hands-on Labs catalog hosted in the cloud, the same revered labs that were available to partners at VMware PEX.

    Varrow Labs Behind the Scenes
     
    Dave Lawrence, Director of End User Computing at Varrow, volunteered me to assist with putting the labs together several weeks before Madness. I was pretty excited about tackling this opportunity, so I jumped at the chance.

    I wanted to provide details on the hardware and software setup we used to deliver twenty-plus isolated environments. Maybe this will inspire a way you could use vCloud Director in your own environment to enable your users: spinning up multiple isolated or fenced vApps for your developers, testing a new product, or standing up a temporary project. The use cases are limited only by what you can imagine 🙂
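    As a rough illustration of the checkout pattern we used, here is a hedged Python sketch that deploys one fenced lab vApp per station from a catalog template. The `VCloudSession` class and its `deploy_from_catalog` method are hypothetical stand-ins for whatever vCloud Director client or REST calls you would actually use; only the overall flow (one catalog template, N isolated copies, one NAT'd entry point per station) reflects what we did.

    ```python
    # Hypothetical sketch: one fenced lab vApp per station, cloned from a catalog template.
    # VCloudSession and its methods stand in for a real vCloud Director client or REST calls.
    from dataclasses import dataclass

    @dataclass
    class LabStation:
        name: str
        vapp_name: str = ""
        entry_ip: str = ""   # NAT'd address the thin client will RDP to

    class VCloudSession:
        """Placeholder for a real vCloud Director API client (not a real library)."""
        def deploy_from_catalog(self, catalog: str, template: str, vapp_name: str, fenced: bool) -> str:
            # In a real environment this would instantiate the catalog template into the
            # org vDC with a fenced (NAT-routed) vApp network and return the external IP.
            print(f"Deploying '{template}' from '{catalog}' as '{vapp_name}' (fenced={fenced})")
            return "203.0.113.10"  # placeholder NAT IP

    def build_lab(session: VCloudSession, stations: list[LabStation], catalog: str, template: str):
        for idx, station in enumerate(stations, start=1):
            station.vapp_name = f"{template}-pod-{idx:02d}"
            station.entry_ip = session.deploy_from_catalog(catalog, template, station.vapp_name, fenced=True)
            print(f"{station.name} -> RDP to {station.entry_ip}")

    if __name__ == "__main__":
        stations = [LabStation(name=f"station-{n:02d}") for n in range(1, 21)]  # 20 lab seats
        build_lab(VCloudSession(), stations, catalog="Madness-EUC", template="XenDesktop-Lab")
    ```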

    Lab Hardware 

    Compute

    • (2) – 6120XP FI
    • (2) – 2104XP IO Modules
    • UCS5108 Chassis
      • Blade 1 – B200 M1
        • (2) X5570 – 2.933GHz
        • 32GB 1333MHz RAM
        • N20-AC0002 – Cisco UCS M81KR Virtual Interface Card – Network adapter – 10 Gigabit LAN, FCoE – 10GBase-KR
      • Blade 2 – B200 M1
        • (2) X5570 – 2.933GHz
        • 32GB 1333MHz RAM
        • N20-AC0002 – Cisco UCS M81KR Virtual Interface Card – Network adapter – 10 Gigabit LAN, FCoE – 10GBase-KR
      • Blade 3 – B200 M2
        • (2) E5649 – 2.533GHz
        • 48GB 1333MHz RAM
        • N20-AC0002 – Cisco UCS M81KR Virtual Interface Card – Network adapter – 10 Gigabit LAN, FCoE – 10GBase-KR
      • Blade 4 – B200 M1
        • (2) X5570 – 2.933GHz
        • 24GB 1333MHz RAM
        • N20-AQ0002 – Cisco UCS M71KR-E Emulex Converged Network Adapter (LIMITED TO TWO 10GB NICS)
    • UCS5108 Chassis
      • Blade 1 – B200 M1
        • (2) X5570 – 2.933GHz
        • 24GB 1333MHz RAM
        • N20-AC0002 – Cisco UCS M81KR Virtual Interface Card – Network adapter – 10 Gigabit LAN, FCoE – 10GBase-KR
      • Blade 2 – B200 M2
        • (2) E5649 – 2.533GHz
        • 48GB 1333MHz RAM
        • N20-AC0002 – Cisco UCS M81KR Virtual Interface Card – Network adapter – 10 Gigabit LAN, FCoE – 10GBase-KR
      • Blade 3 – B200 M3
        • (2) E5-2630 6-core CPUs (12 cores total)
        • 96GB 1600MHz RAM
        • MLOM VIC
      • Blade 4 – B200 M3
        • (2) E5-2630 6-core CPUs (12 cores total)
        • 96GB 1600MHz RAM
        • MLOM VIC

     Storage

    • VNX5300
      • (16) – 2TB 7.2K NL SAS Drives
      • (5) – 100GB SSD – Configured as Fast Cache
      • (25) – 600GB 10K SAS 2.5″ 

    Networking

    • N7K-M148GS-11L – 48 Port Gigabit Ethernet Module (SFP)
    • N7K-M108X2-12L – 8 Port 10 Gigabit Ethernet Module with XL Option
    • N7K-M132XP-12L – 32 Port 10GbE with XL Option, 80G Fabric
    • N7K-M132XP-12L – 32 Port 10GbE with XL Option, 80G Fabric
    • N7K-SUP1 – Supervisor Module
    • N7K-SUP1 – Supervisor Module
    • N7K-M108X2-12L – 8 Port 10 Gigabit Ethernet Module with XL Option
    • SG300-28 – GigE Managed Small Business Switch
    • DS-C9124-K9 – MDS 9124 Fabric Switch
    • DS-C9148-16p-K9 – MDS 9148 Fabric Switch
    • Nexus 5010 w/ N5K-M1008 – 4Gb FC Expansion Module
    • Nexus 2148T Fabric Extender connected to the 5010

    Thin Clients
    Cisco provided the thin clients used in the labs. They were capable of RDP, ICA, and PCoIP connections. Each thin client station connected via Remote Desktop to a NAT IP for its assigned vCenter. The thin clients were Power over Ethernet devices drawing very little power and are representative of what you might deploy if you were planning a thin client rollout.

    vCloud Director
    Each UCS blade had ESXi installed, all managed by a virtualized VMware vCenter 5.1. We used the vCloud Director appliance to build the vApp for each lab and created catalogs for each lab environment that we could check out for each workstation. We also had vCenter Operations Manager deployed so that we could monitor the lab environment.

    vApp Design 
    Each vApp, or pod, in both the XenDesktop and the VMware View environments started with four virtual machines. Each pod was fenced off from the others to give each user their own isolated EUC environment, and the networks on the vSphere Distributed Switch for each vApp were built dynamically as the vApp was powered on.

    VMware View

    • Domain Controller
    • VirtualCenter – SQL, VC 5.1 – users remote to this workstation via a NAT IP to run through the lab
    • View Connection Server – View Composer
    • ESXi Server – hosting desktops – Virtualized ESXi

    Citrix XenDesktop

    • Domain Controller
    • VirtualCenter – SQL, VC 5.1 – users remote to this workstation via a NAT IP to run through the lab
    • Citrix Desktop Delivery Controller, Citrix Provisioning Server
    • ESXi Server – hosting desktops – Virtualized ESXi 

    The general script for both platforms was to:

    1. Connect to vCenter from thin client via RDP
    2. Provision multiple desktops via product specific technology (View Composer for View and Provisioning Server for XenDesktop)
    3. Connect to provisioned Desktops

    And the Madness Begins

    We shipped the server rack containing all the lab equipment, along with the workstations, to the event and set it up the day before Madness. Getting a full rack shipped and powered at a site not built for it has its challenges; luckily we have some really talented and knowledgeable people at Varrow who can navigate those issues. Madness was also the first chance we had to stress test the labs with a full user load: later that evening we called all hands on deck and sent everyone through the labs, with a mix of technical and non-technical Varrowites in the room pounding away. We were seeing workloads in excess of 10,000-15,000 IOPS and things were running pretty smoothly… until I decided to start making changes by checking out the Citrix lab vApp from the vCloud catalog to make some necessary tweaks…

    The graph to the right illustrates what happened between when I kicked off the Add to Cloud task and when it ended: it crushed the storage, kicking latency to ludicrous levels. So the most important lesson for running a lab was to keep Phillip away from the keyboard. You can thank Jason Nash for the screenshot and the attached commentary.

    The labs were a huge team effort. A special shout-out to everyone who helped us before, during, and after the event: Dave Lawrence, Jason Nash, Bill Gurling, Art Harris, Tracy Wetherington, Thomas Brown, Jason Girard, Jeremy Gillow, and everyone else at Varrow who pitched in on the labs.

    A lot of work goes into putting the labs together, and all of them, both the Varrow-hosted EUC labs and the VMware-hosted labs, were well received and well attended at Madness. I hope to be a part of the lab team next year; we are already talking about ways to expand the labs and the products we demonstrate. Can't wait until next year; see you at Varrow Madness 2014.


  • Varrow Madness Part 1: General Thoughts on Awesomeness

    Varrow Madness is an annual conference put together for our customers and the greater community, centered around March Madness and focused on sharing ideas and knowledge, and it's a free event. This was my first Madness; I have now been with Varrow close to nine months and am still loving it. I'm amazed and stunned by the event my compatriots at Varrow put together, along with the help of their partners. A lot of work goes into these conferences, more than I ever imagined as a consumer of such events in the past. It really makes me appreciate the work that goes into the other events out there, big and small, and especially the people who put them together.

    Varrow Madness is quite a show and if you get the chance next year, it’s an absolute must attend.

    I was going to write a short post following Madness but I have decided that Varrow Madness is just way too big and awesome to be contained in a single post so I will break it up into three posts.

    • Varrow Madness Part 1: General Thoughts on Awesomeness
    • Varrow Madness Part 2: Citrix Provisioning Server Implementation + Best Practices
    • Varrow Madness Part 3: Varrow Hosted EUC Labs

    Madness kicked off with Jeremiah Cook, our CEO and co-founder, a cultural leader at Varrow, and our resident rapper. Jeremiah welcomed everyone to Varrow Madness, followed by a great video about BYOB (Bring Your Own Bots) that introduced a performance from iLuminate, the same group that has performed on America's Got Talent and at other venues. I also learned that Jason Nash has Android devices and loves to look up funny cat pictures on the internet.

    Dan Weiss, our Chief Operations Officer and co-founder, joined Jeremiah on stage, talked about the sheer volume of events we had planned for the day, and showed off his rad dance moves. We also announced the winner of the Varrow Innovation Contest; the winner received a free pass to the vendor conference of their choice. There were many submissions, and I am sure it was extremely hard to choose the finalists, much less the actual winner; all of the submissions were fantastic.

    Check out each of the submissions below and congratulations to Alamance Regional Medical Center for winning the contest this year.

    Alamance Regional Medical Center

    Alamance Regional Medical Center participated in the Varrow Innovation Contest and submitted their custom-built single sign-on solution that allows badge tap-and-go access for Citrix XenApp. Alamance Regional was the grand prize winner of the Varrow Innovation Contest at Varrow Madness 2013.

    American National Bank

    Varrow Customer Testimonial – American National Bank
    Finalist in the 2013 Varrow Innovation Contest for their great work running active/active datacenters with EMC RecoverPoint and VMware Site Recovery Manager.

    Northern Hospital Surry County

    Varrow Customer Testimonial – Northern Hospital Surry County
    Finalist in the 2013 Varrow Innovation Contest for their great work running active/active datacenters with EMC VNX, VPLEX, and VMware vSphere.

    Jesse Lipson, Citrix ShareFile VP and GM, did the morning keynote. After the keynote, the day was packed with technical sessions from some of the industry's greatest minds (and I was there too). I believe there were 46 sessions to choose from, broken up across four time slots, two in the morning and two in the afternoon. You can still find the agenda on our Varrow Madness microsite.

    I missed most of the afternoon keynote due to my work on the Varrow labs. The afternoon session kicked off with another performance from iLuminate and a keynote from the President of VCE. I heard it was a great speech, and I plan to go back and watch the recording when I have a free moment.

    Next year I hope we can record all the sessions and provide them online as a learning tool. I wanted to attend many sessions myself and was unable to due to my other duties at the conference. One thing I heard from several folks (and I think it's a good problem to have) is the difficulty of picking which session to attend.

    I spent most of the day in the Varrow-hosted labs. The labs for both VMware View and Citrix XenDesktop were well received and well attended; I will talk more about that in my upcoming lab post.

    In the next few posts, I will talk about my session on Citrix Provisioning Server and the Varrow-hosted labs. As always, I welcome any comments or questions you may have.


  • VMware View 5.1

    VMware today announced the new features that will be in VMware View 5.1. This release has several features worth noting.

    • View Storage Accelerator, already announced for View 5.0 but pulled at the last moment, is built on vSphere's Content Based Read Cache (CBRC) and caches frequently used disk blocks in the VDI host's RAM, avoiding repeated reads of the same data from central storage (see the sketch after this list).
    • View Persona Management is now extended to physical machines, mainly to support VDI and OS migrations.
    • vCenter Operations Manager (vC Ops) for View is a new version, optimized for virtual desktop deployments, that provides end-to-end, real-time monitoring of desktops and users. Already announced at VMworld Europe last year, it now includes a much-requested feature: the ability to monitor PCoIP performance.
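    To give a feel for what an "in-memory cache of commonly read blocks" buys you, here is a toy Python sketch of a content-addressed read cache in the spirit of CBRC: identical blocks, which are very common across linked-clone desktops, hash to the same digest, so one cached copy in host RAM serves many reads. The block size, cache structure, and storage stub are illustrative assumptions, not VMware's implementation.

    ```python
    import hashlib

    BLOCK_SIZE = 4096  # illustrative; not the actual CBRC hash block size

    class ContentBasedReadCache:
        """Toy content-addressed read cache: blocks are keyed by their digest,
        so identical blocks shared across many desktops are held in RAM once."""
        def __init__(self, backing_store):
            self.backing_store = backing_store
            self.block_by_digest = {}   # digest -> block bytes (the RAM cache)
            self.hits = 0
            self.misses = 0

        def read(self, disk: str, block_no: int) -> bytes:
            digest = self.backing_store.digest_of(disk, block_no)
            if digest in self.block_by_digest:
                self.hits += 1          # served from host RAM
                return self.block_by_digest[digest]
            self.misses += 1            # fetched from central storage
            data = self.backing_store.read_block(disk, block_no)
            self.block_by_digest[digest] = data
            return data

    class FakeStorage:
        """Stand-in for the array; two linked-clone desktops share all base blocks."""
        def __init__(self):
            base = [bytes([i % 251]) * BLOCK_SIZE for i in range(8)]
            self.disks = {"desktop-01": list(base), "desktop-02": list(base)}

        def digest_of(self, disk, block_no):
            return hashlib.sha1(self.disks[disk][block_no]).hexdigest()

        def read_block(self, disk, block_no):
            return self.disks[disk][block_no]

    cache = ContentBasedReadCache(FakeStorage())
    for disk in ("desktop-01", "desktop-02"):       # e.g. two desktops booting
        for block in range(8):
            cache.read(disk, block)
    print(f"hits={cache.hits} misses={cache.misses}")  # second desktop is served entirely from cache
    ```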


    Also announced in this release are the following:

    • USB Enhancements – We have reworked the USB redirect feature for the Windows client. The new USB feature no longer requires device drivers to be installed on the client side. A generic USB arbitrator is implemented on the client side, while a proper USB hub is implemented in the agent. This allows VMware View to support a much broader range of USB devices while supporting fine-grained remote device policies (e.g. enable/disable mass storage file copy) even on multi-function USB devices.
    • RADIUS Support – Based on customer feedback, we've extended the security authentication support in VMware View to other two-factor authentication vendors by leveraging a RADIUS client in the View 5.1 Connection Server. This gives you more choice when implementing single sign-on or security tokens with your virtual desktops.
    • Continued PCoIP Enhancements – We also continuously strive to enhance the PCoIP remote protocol, following the significant progress made in version 5.0. We realize that optimal remote protocol performance cannot be achieved with code improvements alone. To help our customers make the right choice in protocol with proper performance tuning, we published a white paper comparing the tuning and test results of all state-of-the-art remote protocols.

    Improvements have also been made in how VMware View 5.1 scales out for larger deployments.

    • Increased scale in NFS attached clusters — now you can scale your VMware vSphere clusters up to 32 ESXi hosts
    • Reduce storage costs with View Storage Accelerator — combine VMware View 5.1 with VMware vSphere 5 and substantially optimize read performance using an in-memory cache of commonly read blocks — totally transparent to the guest OS
    • Standalone View Composer Server — VMware View Composer can now be installed on its own server, opening up several new capabilities

    To begin understanding large scale VMware View designs, you need the basic building blocks found in all successful VMware View implementations. The three key building blocks are the View Pod, View Block, and Management Pod. These are logical objects, but they do have some tangible boundaries.
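    As a back-of-the-napkin illustration of how those building blocks stack up, here is a small Python sketch that sizes blocks and pods from a target desktop count. The limits used (desktops per host, hosts per block, desktops per pod) are example assumptions you would replace with figures from your own design and the published VMware maximums; only the 32-hosts-per-cluster figure comes from the release notes above.

    ```python
    import math

    # Example planning inputs -- replace with your own validated figures and the
    # published VMware View / vSphere maximums; these values are assumptions.
    DESKTOPS_PER_HOST = 80        # assumed consolidation ratio
    HOSTS_PER_BLOCK = 32          # View 5.1 raised NFS cluster size to 32 hosts
    DESKTOPS_PER_POD = 10_000     # assumed pod ceiling for this sketch

    def size_view_deployment(target_desktops: int) -> dict:
        """Rough pod/block/host counts for a target number of desktops."""
        hosts = math.ceil(target_desktops / DESKTOPS_PER_HOST)
        blocks = math.ceil(hosts / HOSTS_PER_BLOCK)
        pods = math.ceil(target_desktops / DESKTOPS_PER_POD)
        return {"desktops": target_desktops, "hosts": hosts, "blocks": blocks, "pods": pods}

    print(size_view_deployment(5_000))  # {'desktops': 5000, 'hosts': 63, 'blocks': 2, 'pods': 1}
    ```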

    To learn more about these components and the scalability improvements, please check out the Demystifying VMware View Large Scale Designs white paper.