Citrix Access via Chrome is Broken

This post explains a Google Chrome change that can negatively impact access to any Citrix environment.

After clicking on a published application or desktop icon in StoreFront using Chrome, nothing happens.


After logging on to StoreFront using Chrome, it never detects that Citrix Receiver is installed and offers the Receiver download before I get to see my icons.


You see a warning to “Unblock the Citrix plug-in.”

There are several workarounds:

1) Re-enable the plugin using CTX137141.  This workaround will end in September 2015 when Google permanently disables NPAPI.
2) Customize StoreFront to remove the prompt to download Receiver with customized code.
3) Customize StoreFront with a link to download Receiver with customized code.
4) Enable a user setting to always open .ica files using CTX136578.
5) Use another browser not affected by the Chrome changes.

Back in November 2014, Google announced it would remove NPAPI support from Chrome.  They are making this change to “improve security, speed, and stability” of the browser.  In April 2015, they will change Chrome’s default settings to disable NPAPI before removing it entirely in September of 2015.

What does this mean for my Citrix users who use Chrome?

Receiver detection.  The NPAPI plugin that Receiver (Windows and Mac) installs allows Receiver for Web (aka StoreFront) to detect whether Citrix Receiver is installed.  Without this plugin, StoreFront assumes you do not have Receiver and will offer it for you to download and install.  As an aside, you may have noticed that Internet Explorer has an ActiveX control that does the same thing.  If your user does not have Receiver, they cannot launch their Citrix application or desktop, so offering the download is a good thing.  If your user is already running Receiver but still gets offered the Receiver download, that is confusing and potentially a bad thing.

Launching applications and desktops.   Let me explain what should happen when you click on the icon for, say, Outlook 2010 in StoreFront (aka Receiver for Web).  StoreFront will talk to a delivery controller to figure out what machine is hosting Outlook 2010 and has the lowest load.  StoreFront will then offer you a .ica file to download.  If you have the plugin, Windows will know that this is a configuration file that should be opened by Receiver.  Receiver will then connect you to your application.  This all happens quickly and seamlessly, making it seem like Outlook 2010 launches immediately.

Without the plugin, you will download an .ica file but Outlook 2010 will not open until you click it.  Chrome does have the option (the arrow on the downloaded file) to “Always open files of this type” as shown in CTX136578.
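To demystify that downloaded file: a .ica file is a plain-text, INI-style configuration file. Here is a minimal Python sketch that parses a made-up sample; the field names mirror common ICA settings, but real files generated by StoreFront contain many more entries plus a one-time logon ticket, so treat this content as illustrative only.

```python
import configparser

# Hypothetical, trimmed-down .ica content; real StoreFront launch
# files carry many more settings plus a one-time session ticket.
sample_ica = """\
[WFClient]
Version=2

[ApplicationServers]
Outlook 2010=

[Outlook 2010]
Address=10.0.0.25:1494
InitialProgram=#Outlook 2010
"""

parser = configparser.ConfigParser()
parser.optionxform = str  # keep key case, e.g. "Address"
parser.read_string(sample_ica)

# Receiver reads entries like these to find the hosting machine.
for app in parser["ApplicationServers"]:
    print(app, "->", parser[app]["Address"])
```

Because the format is just INI text, opening one in Notepad is a quick way to confirm StoreFront handed the browser a valid launch file.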


Brian Olsen @sagelikebrian

vGPU, vSGA, vDGA, software – Why do I care?

I want to take a moment to talk about an oft-mentioned but little-understood new feature of vSphere 6: specifically, NVIDIA’s vGPU technology.

First, we need to know that vGPU is a feature of vSphere Enterprise Plus edition, which means it’s also included in vSphere for Desktops.  But if this sounds like something you need and you’re running Standard or Enterprise, now might be a good time to think about upgrading and taking advantage of the trade-up promotions.

Many folks think “I don’t need 3D for my environment.  We only run Office.”  If that’s you, then please take a good close look at what your physical desktop’s GPU is doing while you run Office 2013, especially PowerPoint.  Nearly every device sold since ~1992 has had some form of hardware-based graphics acceleration.  Not so your VMs.   Software expects this.  Your users will demand it.

With that, let’s talk about what we’ve had for a while as it relates to 3D with vSphere.  Understand that you can take advantage of these features regardless of what form of desktop or application virtualization you choose to deploy, because it’s a feature of the virtual machine.

No 3D Support – I mention this because it is an option.  You can configure a VM where 3D support is disabled.  Here an application that needs 3D either has to provide its own software rendering, or it will simply error out.  If you know your app doesn’t use any 3D rendering at all, this is an option to ensure that no CPU time or memory is taken up trying to provide the support.  No vSphere drivers are required.

Software 3D – Ok, here we recognize that DirectX and OpenGL are part of the stack and that some applications are going to use them.  VMware builds support into their VGA driver (part of VMware Tools) that can render a subset of the APIs (DX9, OpenGL 1.2) in software, using the VM’s CPU.  This works for a set of apps that need a little 3D to run, when we aren’t concerned about the system CPU doing the work.  No hardware ties here as long as you can live with the limited API support and performance.  No vSphere drivers are required.  No particular limits on how many VMs can do this beyond running out of CPU.

vSGA – or Virtual Shared Graphics – In this mode the software implementation above gets a boost by putting a supported hardware graphics accelerator into the vSphere host.  The API support is the same because it’s still the VMware VGA driver in the VM, but it hands off the rendering to an Xorg instance on the host, which in turn does the rendering on the physical card.   This mode does require a supported ESXi .vib driver, provided by the card manufacturer.   That means you can’t just use any card; you have to buy one specifically for your server which has a driver.  NVIDIA and AMD provide these for their server-centric GPU cards.  The upper bound of VMs is determined by the amount of video memory you assign to the VMs and the amount of memory on your card.

vDGA – or Virtual Dedicated Graphics – In this mode we do a PCI pass-through for a GPU card to a given virtual machine.  This means that the driver for the GPU resides inside the virtual machine and VMware’s VGA driver is not used.  This is a double (or triple) edged sword.   Having the native driver in the VM ensures that the VM has the full power and compatibility of the card, including the latest APIs supported by the driver (DX11, OpenGL 4, etc.).  But having the card assigned to a single VM means no other VMs can use it.  It also means that the VM can’t move off its host (no vMotion, no HA, no DRS). This binding between the PCI device and the VM also prevents you from using View Composer or XenDesktop’s MCS, though Citrix’s Provisioning Services (PVS) can be made to work.  So this gives great performance and unmatched compatibility, but at a pretty significant cost.  It also means that we do not want a driver installed for ESXi, since we’re only going to pass through the device.  That means you can use pretty much any GPU you want.  Your limit on how many VMs per host is tied to how many cards you can squeeze into the box.

All of the above are available in vSphere 5.5, with most of it actually working under vSphere 5.1.   I’ve said that if you care about your user experience you want vSGA as a minimum requirement, and should consider vDGA for anyone who’s running apps that clearly “need” 3D support.   Though vDGA’s downsides have had a way of pushing it out of high-volume deployments.

Ok so what’s new?   The answer is NVIDIA vGPU.  The first thing to be aware of is that this is an NVIDIA technology, not VMware.  That means you won’t see vGPU supporting AMD (or anyone else’s) cards any time soon.  Those folks will need to come up with their own version.   NVIDIA also only supports this with their GRID cards (not GeForce or Quadro). So you’ve got to have the right card, in the right server.   Sorry, that’s how it is.  It’s only fair to mention that vGPU first came out for XenServer about two years ago, and came out for vSphere with vSphere 6.0.  So while it’s new to vSphere, it’s not exactly new to the market.

So what makes this different?   vGPU is a combination of an ESXi driver .vib and some additional services that make up the GRID manager.  This allows dynamic partitioning of GPU memory and works in concert with a GRID-enabled driver in the virtual machine.   The end result is that the VM runs a native NVIDIA driver with full API support (DX11, OpenGL 4) and has direct access (no Xorg) to the GPU hardware, but is only allowed to use a defined portion of the GPU’s memory.  Shared access to the GPU’s compute resources is governed by the GRID manager.   Net-Net is that you can get performance nearly identical to vDGA without the PCI pass-through and its accompanying downsides.  vMotion remains a ‘no’ but VMware HA and DRS do work.  Composer does work, and MCS works.  And, if you set your VM to use only 1/16th of the GPU’s memory then you have the potential to share the GPU amongst 16 virtual machines.  Set it to 1/2 or 1/4 and get more performance (more video RAM) but at a lower VM density.
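The density math in that last sentence is just division, but it is worth making explicit because it drives your sizing. A quick sketch (the fractions mirror the 1/16, 1/4, and 1/2 examples above; they are not NVIDIA's actual vGPU profile names):

```python
# Each VM is granted a fixed fraction of the physical GPU's memory;
# that fraction caps how many VMs can share the GPU at once.
def vms_per_gpu(memory_share: float) -> int:
    """Maximum VMs per physical GPU for a given memory fraction."""
    return int(1 / memory_share)

for share in (1 / 16, 1 / 4, 1 / 2):
    print(f"memory share {share:g} -> up to {vms_per_gpu(share)} VMs")
```

The trade-off is linear: halving each VM's framebuffer doubles the VMs a GPU can host, at the cost of per-VM video memory.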

So why does this matter?   It means we get performance and compatibility for graphics applications (and PowerPoint!) and an awesome (as in better than physical) user experience while gaining back much of what drove us to the virtual environment in the first place.  No more choosing between a great experience and management methods, HA, and DR. Now we can have it all.

If you’re using graphics, you want vGPU!  And if you’re running Windows apps, you’re probably using graphics!

LoginVSI “VSISetup has stopped working”

One of the real challenges of testing a Virtual Desktop Infrastructure (VDI) is getting enough users to log on to the system to test it at the same time.  While load testing should be an essential part of validating a new system, scheduling even three or four users to test applications can be challenging.   What if you would like to simulate the load of 25 or 50 users?  Often, due to the challenges of scheduling and the man-hours involved, load testing gets ignored and you end up hoping for the best on go-live day.

LoginVSI is an application that allows you to orchestrate as many users as you would like to test the system, all automated.  Out of the box, LoginVSI will simulate a user logon and then perform a series of very real activities like surfing the web, reading emails, or editing spreadsheets.  This simulated workload progresses for about an hour per user.  This allows you, the administrator, to observe how resources are used and look for issues related to a stressed system.  LoginVSI can be set up to keep having new users log on until the system stops responding or crashes.  It then compiles the performance metrics and tells you the optimal number of users the system can handle before the server starts to have poor response time.  They call this number the VSImax.

This article explains how to resolve the error, “VSISetup has stopped working” while setting up LoginVSI.  During the install process of the LoginVSI management console the setup program crashes shortly after starting it.


Add the .NET Framework 3.5.1 using the Add Features wizard.  Before continuing with the LoginVSI install, run Windows Update and patch .NET.  This can be time-consuming, as there are lots of updates available, and it may require a reboot or two.


While it is documented in the excellent install guide, it is easy to forget that the default install of Windows Server 2008 R2 does not have .NET 3.5.1 installed and it is required.

Brian Olsen @sagelikebrian

Lewan Achieves Cisco Master Collaboration and Master Cloud & Managed Service Designations

In addition to successfully passing the requirements and audit to re-certify as a Cisco Gold Partner, Lewan Technology is honored to announce achievement of two Master Specializations: Collaboration and Cloud & Managed Services.


“These Master level certifications are the absolute highest achievement that a Cisco partner can attain in any technology area. There are only 43 partners in the United States that hold these two certifications,” explained Ray Dean, Lewan’s Director of Networking and Communications. “This honor recognizes the great engineering teams and processes we have in place, as well as our commitment to ongoing customer satisfaction and solution integration.”

Cisco Gold Partner Certification

Gold Certification offers the broadest range of expertise across high growth market opportunities known as architecture plays – Enterprise Networking, Security, Collaboration, Data Center Virtualization and SP Technology. Gold Certified Partners have also integrated the deepest level of Cisco Lifecycle Services expertise into their offerings and demonstrate a measurably high level of customer satisfaction.

Lewan has been a Cisco Gold Certified Partner since 2005.

Cisco Master Collaboration Specialization

The Master Collaboration Specialization demonstrates the highest level of expertise attainable with Cisco collaboration solutions.

Master Collaboration Specialized Partners represent an elite partner community that has met the most rigorous certification requirements and are therefore the best for complex deliveries. Lewan demonstrated the ability to design and deploy solutions that conform to Cisco validated designs. In addition, Lewan showed current examples of successful projects in which we integrated multiple solutions and technologies to support client needs. No other Cisco specialization or certification demands such extensive proof of the partner’s design and implementation capabilities.

Cisco Cloud & Managed Services Master Service Provider

The Cloud and Managed Services Program (CMSP) helps partners respond to their customers’ business needs with innovative and validated Cisco Powered services. The exclusive Master Cloud and Managed Services designation recognizes partners at the highest level of achievement, competency and capabilities.

Lewan is recognized as a partner uniquely positioned to offer best-in-class Cisco Powered services and Cloud Managed services which are validated to ensure security, reliability, and performance.

Basic Network Virtualization Components Explained

I found this great article about different network virtualization concepts that are incorporated into networks today.  I thought I’d share this post from Henk Steneker, which helps explain some of the virtualization technology.

What is virtualization?

With virtualization, a physical device or a pool of physical devices is divided into several virtual or logical devices.

What is a VLAN?

A Virtual Local Area Network (VLAN) is created when a physical LAN is divided into several logical LANs.



The network diagram above shows two switches that are connected with a trunk. Both switches have an access port in VLAN 101 and VLAN 102. Ethernet frames of VLAN 101 that are transmitted to the other switch are given a VLAN 101 tag on the trunk connection. The receiving switch removes the tag and passes the frames on to the access port of VLAN 101.
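The tag itself is just four extra bytes in the Ethernet header, defined by IEEE 802.1Q. Here is a small Python sketch of building the tag a trunk port inserts; the field layout follows the standard, and VLAN 101 matches the diagram above.

```python
import struct

def dot1q_tag(vid: int, pcp: int = 0, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag inserted on a trunk link.

    TPID 0x8100 marks the frame as tagged; the TCI packs the
    priority (3 bits), drop-eligible bit, and VLAN ID (12 bits).
    """
    tci = (pcp << 13) | (dei << 12) | (vid & 0x0FFF)
    return struct.pack("!HH", 0x8100, tci)

# Frames for VLAN 101 carry this tag on the trunk; the receiving
# switch strips it before forwarding out the matching access port.
print(dot1q_tag(101).hex())  # 81000065
```

Because the VLAN ID field is 12 bits, IDs can range from 0 to 4095 (with 0 and 4095 reserved).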

What is Virtual Routing and Forwarding?

With Virtual Routing and Forwarding (VRF), a physical router is divided into several virtual routers.


The VRFs can be completely separated from each other, and the same subnet can be used in several VRFs. VRF routers communicate with each other via an address family that combines a Route Distinguisher (RD) with an IP address.
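To see why the RD matters, here is a rough Python sketch of a Type 0 Route Distinguisher (per RFC 4364) being prepended to an IPv4 prefix to form a VPNv4 address. The ASN and assigned numbers are made up for illustration.

```python
import socket
import struct

def vpnv4_address(asn: int, assigned: int, ipv4: str) -> bytes:
    """Prepend a Type 0 Route Distinguisher (2-byte type, 2-byte ASN,
    4-byte assigned number) to a 4-byte IPv4 address."""
    rd = struct.pack("!HHI", 0, asn, assigned)
    return rd + socket.inet_aton(ipv4)

# The same subnet in two different VRFs yields two distinct
# 12-byte VPNv4 addresses, so the overlapping routes never collide.
vrf_a = vpnv4_address(65000, 1, "10.0.0.0")
vrf_b = vpnv4_address(65000, 2, "10.0.0.0")
```

This is how 10.0.0.0/24 can exist independently in several VRFs on the same router: the RD makes each copy globally unique.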

What is Port Channel?

Port Channel (PC) is the combining of several physical links into one virtual link.


Another name for this is Ether Channel (EC) or Link Aggregation Group (LAG). If one of the connected links fails, the virtual link continues to work. You can apply PC or LAG to ports on routers (Layer 3 PC) or switches (Layer 2 PC). Because the switch sees a PC as one virtual link, a broadcast storm cannot occur.

You can apply Port Channel for redundancy or for load balancing between physical links.
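How traffic is balanced across the member links is vendor-specific, but the common idea is a deterministic hash over frame header fields. A simplified Python sketch of the concept (real switch ASICs use different hash inputs and algorithms; this is illustrative only):

```python
# Pick a member link for a frame by hashing its MAC addresses.
# Determinism matters: one flow always uses the same link, so the
# frames within that flow are never reordered.
def member_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    key = (src_mac + dst_mac).lower().encode()
    return sum(key) % n_links

link = member_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", 4)
```

A consequence of per-flow hashing is that a single flow never exceeds the speed of one member link; only aggregate traffic benefits from the combined bandwidth.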

What is a Virtual Switching System?

With a Virtual Switching System (VSS), two physical switches (for example, a primary and a secondary switch) are combined into one virtual switch.



The virtual switch has one management plane and one control plane. In the example above this is the case with the two distribution switches that are connected with a Virtual Switch Link (VSL). Both access switches see one logical distribution switch. Because there is a Port Channel between the access switch and the distribution switch, the Spanning Tree Protocol is not needed. VSS can be used with the Cisco Catalyst 4500 and 6500 series switches.

What is Multichassis Ether Channel?

Normally, the physical ports of an Ether Channel must terminate on one physical device (or one virtual device) on each side.

But if two physical devices support Multichassis Ether Channel (MEC), the channel can span both of them. The other side of the Ether Channel then sees one virtual device. Another name for this is Virtual Port Channel (vPC) or Multichassis LAG.

vPC can be applied with Cisco Nexus 5000 and 7000 series switches. Both switches keep their own management plane and control plane.

What is a Virtual Device Context?

With a Virtual Device Context (VDC), a physical switch can be divided into several logical switches. For example, you can divide a primary Nexus switch into a primary Core VDC and a primary Aggregation VDC.

What is a Virtual Storage Area Network?

A Virtual Storage Area Network (VSAN) is created by combining several SANs into one pool; that pool can in turn be divided into several VSANs.

What is a Virtual Machine?

A Virtual Machine (VM) is created by combining several physical servers into one virtual server platform; that platform can in turn be divided into several Virtual Machines (VMs).

Original Post can be found here: