Hello, and thanks to all who attended my session. For those who have asked, the session presentation can be downloaded here – download presentation.
Kenneth is Speaking at BriForum Denver
I sat down with one of Lewan’s Solution Architects, Kenneth Fingerlos, to discuss his upcoming speaking engagement at the BriForum conference on July 20th. Our brief conversation covered the details of his session, “vSGA, vDGA, vGPU, and Software – When and Why”, his background in the industry, and what gets him excited in the technology space right now.
Me: Kenneth, can you tell me a little bit about your industry experience?
Kenneth: So, after college I took a left turn in my career path and went into corporate IT for ten years. Various positions: desktop management, server management, data center. Various kinds of things. After ten years of that I decided I didn’t care for IT management and tried to correct the course change and landed in consulting. I’ve been doing IT consulting for about the last ten years around storage, data management, virtualization of various types, and building up my skill sets trying to help customers solve problems.
Me: Great, great. So have you been to BriForum before?
Kenneth: I have not been to BriForum. This will be my first year.
Me: What attracted you to BriForum?
Kenneth: I’m excited. The whole idea of a conference that has some size to it and is established that is not tied to a specific vendor is just exciting, right? You go to a Cisco conference and it is all about what is the latest widget from Cisco. Cisco can do no wrong. You find the same thing if you go to, you know, Dell World. Dell is perfect. Whatever Dell has got going is awesome and whatever everyone else has is garbage. BriForum excites me because it is everybody. It is a marketing company–a media company that puts on the conference as opposed to a product manufacturer.
Me: So what will you be discussing at BriForum?
Kenneth: I’m discussing a topic that is near and dear to my heart which is the idea of virtualized graphics. Taking things we do everyday in the physical world with physical PCs and trying to bring this into this virtualized environment. Things like disaster recovery, security, flexibility. You know, the physical world is pretty restricted. Graphics have always been one of these things that is hard and is difficult. Technology is evolving and has advanced dramatically over the last couple of years in terms of what we can do. But there is also a lot of complexity and a lot of information and I find my customers have a lot of confusion about what they can and can’t do. What works, what doesn’t work. My session is all about trying to bring some clarity to that area.
Me: Ok, so I am going to open this up a little bit and say maybe don’t limit this to just the enterprise world but what is the technology you are most excited about right now?
Kenneth: The technology I am most excited about right now….I think the stuff that is most exciting is really this idea of graphics virtualization. I mean, so many things go into a user experience, right? And all of the traditional things that you think about: servers, storage, memory, CPUs–graphics is part of that. Remoting protocols, right? What’s going on with actually getting that content delivered to a user. Networking, right? 3G, 4G networks and starting to think about what’s next, what’s beyond 4G. These are huge enablers to let people consume and develop content in ways that have never been envisioned before. Letting you take that stuff to the cloud, to the remote data center, and access it from anywhere. I’ve been sitting on top of a mountain in my 4×4 holding a virtual desktop, just because I’m a geek and into this stuff, but yes–I can access that app, whatever it is, from a mountain top in the middle of nowhere. That’s cool stuff. And it’s all about enabling people to work and function in ways they’ve never been able to before. That excites me.
Me: Very cool. Well, looking forward to seeing your session at BriForum! Until next time.
As I wrote about earlier in BriForum Comes to Denver, I am excited to have such a great event in my backyard. If you are going to be at BriForum or just have general questions about Denver, reach out to either @kfingerlos or me (@sagelikebrian) and let’s catch up.
Brian @sagelikebrian
Citrix Default Printer Won’t Retain
The Windows default printer is a magical thing. It is the printer that is selected by default when you print from an application. Depending on your particular printing workflow, it may be the only printer you ever use. Some applications have a quick-print function that sends a print job to the default printer using default settings and no prompts (for example, portrait orientation and a single copy). To make a printer your default, simply right-click it and select “Set as default printer”.
When you use Citrix, a Windows default printer is still a Windows default printer. The difference is that Citrix has administrative policies to help you decide what will be the default.
I recently ran into an issue with a new XenDesktop 7.6 environment where users could select a new default printer using the method above, but the next day when they logged on to their desktop it was set back to Microsoft XPS Document Writer. A quick note on Microsoft XPS Document Writer, which you may have noticed installed on your computer: it is really a print-to-file driver Microsoft created to let you save print output in the Microsoft XML Paper Specification. If you have never used it, do not feel bad; you have more likely used the PDF format, popularized by Adobe before it became an open standard in 2008.
By default, the user’s current printer is used as the default printer for the session. For example, my laptop’s default printer is HP Deskjet 3520 series (Network). When I log on to my Citrix desktop, it redirects the laptop’s printers into the session, including my default printer. That is ideal for a laptop user.
For my next example, I am using a thin client that does not have a default printer because it does not have a full OS; it can only connect to a Citrix desktop. When I log on from the thin client, the session will not see a client default printer, so it makes the first printer on the Citrix desktop the default. Oftentimes this ends up being the Microsoft XPS Document Writer instead of the HP Deskjet 3520 series (Network).
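In rough terms, the selection behavior described above looks like this (my own sketch for illustration, not Citrix’s actual code):

```python
# Sketch of the default-printer selection behavior described above
# (an illustration only, not Citrix's implementation).
def pick_session_default(client_default_printer, session_printers):
    if client_default_printer is not None:
        # Laptop/desktop case: the redirected client default wins.
        return client_default_printer
    # Thin-client case: no client default exists, so the first printer
    # enumerated in the session becomes the default, which is often
    # the Microsoft XPS Document Writer.
    return session_printers[0]

# Laptop user: the redirected default is kept.
print(pick_session_default("HP Deskjet 3520 series (Network)",
                           ["Microsoft XPS Document Writer"]))
# Thin client: the first session printer wins.
print(pick_session_default(None,
                           ["Microsoft XPS Document Writer",
                            "HP Deskjet 3520 series (Network)"]))
```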
At first, the issue seemed to be a Windows user profile problem, since everyone lost their setting from one logon to the next. After verifying that other Windows user settings were being retained (e.g. wallpaper, Office settings, and the printer mappings themselves), I moved on to Citrix print policies. There is a specific policy I found interesting:
Default printer
Looking closer at the policy, it defaults to “Set default printer to the client’s main printer”. Most of the time this results in using the default printer on the user’s endpoint (e.g. laptop or desktop). If that endpoint is a thin client or even an iPad, it will not have a default printer to redirect, so you will end up with the first printer in the session.
I made a new policy, set it to “Do not adjust the user’s default printer”, gave it a higher priority than the others, and assigned it to my test user account.
I then ran a gpupdate on each test worker to verify it had the new policy. To test, I logged on with the test user and changed my default printer to a network printer. I then logged off and put that test server in maintenance mode, ensuring my next logon would go to the other test server. Success: my new default printer was retained. To be extra sure nothing was cached locally, I rebooted both non-persistent workers and logged in again. Success. The final steps were to apply the policy to more users and have them test before rolling it out to everyone on both the test and production workers.
Printing is rarely thought of as complicated, but it always is. If you are running into a similar issue, this policy change could be your answer.
Brian Olsen @sagelikebrian
Microsoft Excel Not Enough Memory or Disk Space
During a recent deployment of XenApp 7.6 on Windows Server 2012 R2, users kept getting a “not enough memory or disk space” error whenever they ran an application that exported data to Excel.
Checking the XenApp session host server, which was sized at 2 vCPU and 8 GB of RAM, there was plenty of memory available, as only one user was logged into the server. Launching Excel and then opening a workbook worked fine and did not produce the error, and even after patching Office 2010 to the latest level the error persisted. After investigating, there was no obvious reason why this error should appear.
It would appear that this is a bug affecting Excel 2010 and Excel 2013 on Windows Server 2012 R2 when AppData\Local is excluded from Citrix Profile Management (a common exclusion used to reduce profile size). With that exclusion configured, the user’s Cache folder, which is defined under the User Shell Folders key in the profile, ends up without enough allocated space.
The solution: redirect the user’s Cache directory to C:\Windows\Temp, without having to load the hive and hack the default profile’s NTUSER.DAT.
First, assign the Users group Modify rights to C:\Windows\Temp; otherwise users will not have access and this will not work.
Create a GPO Preferences Registry collection named something descriptive, such as Excel Cache Directory.
Create a new Registry Item inside it pointing to: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders
The value name should be Cache
The data should be C:\WINDOWS\TEMP
The type should be REG_EXPAND_SZ
Allow the GPO to replicate, run GPUPDATE /FORCE, and test; you should no longer see the error.
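If you would rather script the same change per user (say, from a logon script) instead of using Group Policy Preferences, a minimal sketch using Python’s standard winreg module might look like this; it writes the same value name, data, and type as the GPO item above:

```python
import winreg

# Redirect the per-user Cache shell folder to C:\WINDOWS\TEMP,
# mirroring the GPO Preferences registry item described above.
KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    # REG_EXPAND_SZ matches the type used by the GPO item.
    winreg.SetValueEx(key, "Cache", 0, winreg.REG_EXPAND_SZ,
                      r"C:\WINDOWS\TEMP")
```

The user still needs Modify rights on C:\Windows\Temp, exactly as in the GPO approach.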
The next time you encounter this issue, give this a try. For more information, or if you run into questions, please leave a comment.
Johnny Ma @mrjohnnyma
BriForum Comes to Denver
IT conferences are a great way to catch up on what is new, take classes, and network with peers in the industry. I have been lucky enough to attend great shows like Citrix Summit and Synergy as well as VMware VMworld over the years. The conference that always fell just out of reach for me was BriForum. This year that is all going to change. I am more than a little excited that one of the world’s premier IT conferences has chosen Denver, Colorado for this year’s US location. BriForum is an independent conference that provides a vendor-neutral perspective on current and emerging technologies and services.
Check out this year’s list of sessions:
http://www.brianmadden.com/blogs/gabeknuth/archive/2015/03/09/check-out-the-list-of-sessions-for-briforum-denver-2015-july-20-22.aspx
If you have a keen eye, you may have noticed a third of the way down the list a special session, “vSGA, vDGA, vGPU, and Software – When and Why”, being presented by Lewan’s very own expert speaker Kenneth Fingerlos (@kfingerlos).
Kenneth will be talking about the new graphics-intensive workloads that are possible in VDI thanks to high-end GPUs from NVIDIA. He will specifically dig into the different methods you can use to virtualize the GPU, and when and why you would want to choose each method. I promise you this will be a deep technical dive, preparing you for your next graphics-intensive virtual desktop project.
Check out the Lewan IT Solutions Technical Blog for more great technical information from Kenneth.
Come join Lewan at BriForum 2015 if you would like to learn more about solutions from Citrix, VMware, Microsoft and much more.
Brian Olsen (@sagelikebrian)
Cisco to Secure the IoE (Internet of Everything) by Building Security Across Its Products
Cisco says it is adding more sensors to network devices to increase visibility, more control points to strengthen enforcement, and pervasive threat protection to reduce time-to-detection and time-to-response. The plan includes:
- Endpoints: Customers using the Cisco AnyConnect 4.1 VPN client can now deploy threat protection to VPN-enabled endpoints to guard against advanced malware
- Campus and Branch: FirePOWER Services solutions for Cisco Integrated Services Routers (ISR) provide a centrally managed intrusion prevention system and advanced malware protection at branch offices where dedicated security appliances may not be feasible
- Network as a Sensor and Enforcer: Cisco says it has embedded multiple security technologies into the network infrastructure to provide threat visibility to identify users and devices associated with anomalies, threats and misuse of networks and applications. New capabilities include broader integration between Cisco’s Identity Services Engine (ISE) and Lancope StealthWatch to allow enterprises to identify threat vectors based on ISE’s context of who, what, where, when and how users and devices are connected and access network resources.
StealthWatch can also now block suspicious network devices by initiating segmentation changes in response to identified malicious activity. ISE can then modify access policies for Cisco routers, switches, and wireless LAN controllers embedded with Cisco’s TrustSec role-based technology.
Cisco has also added NetFlow monitoring to its UCS servers to give customers greater visibility into network traffic flow patterns and threat intelligence information in the data center.
Other aspects of the plan include Hosted Identity Services, which is designed to provide a cloud-delivered service for the Cisco Identity Services Engine security policy platform. The new hosted service provides role-based, context-aware identity enforcement of users and devices permitted on the network, Cisco says.
The strategy also includes a pxGrid ecosystem of 11 new partners that plan to develop products for cloud security and network/application performance management for Cisco’s pxGrid security context information exchange fabric. The fabric enables security platforms to share information to better detect and mitigate threats.
The company is also investing heavily in integrating its ASA firewalls with its Application Centric Infrastructure SDN.
More information can be found at http://www.networkworld.com/article/2932547/security0/cisco-plans-to-embed-security-everywhere.html
Marlins Score Big with Citrix
It seems like every other week there is an IT security breach that makes the news. Many of these hacks score credit card information that can immediately be used or sold. Recently there have been allegations that members of the St. Louis Cardinals hacked into the Houston Astros’ system to gather information on players.
New York Times – Cardinals Investigated for Hacking Into Astros’ Database
Kansas City Star – Astros GM Luhnow disputes details related to Cardinals hacking probe
At face value, it seems shocking to hear about hacking in Major League Baseball. There was a time when America’s favorite pastime was not considered high tech. It was the boys of summer playing a great game and the best team won. In this Moneyball era of baseball statistics, numbers and data win big.
You don’t have to believe me, just ask Brad Pitt.
As soon as I heard the news, it made me think of what the Marlins are doing with technology from Citrix.
The Marlins are scoring two big wins with Citrix. First, they are doing things that have never before been possible and making a better experience for their customers. Second, they have a focus on security that has kept their IT department out of national headlines while protecting their team and intellectual property. It is hard to put a price on the total package.
We should not give all the credit to the Marlins’ IT foresight. After all, the Simpsons predicted this way back in 1999.
Brian Olsen @sagelikebrian
Lewan Technology Named to CRN Solution Provider 500
Lewan has been recognized on CRN’s 2015 Solution Provider 500 list as one of North America’s largest technology integrators, managed service providers, and IT consultants.
“We are proud to be selected as a top provider again. We strive to exceed our customers’ expectations with our solutions and professional and managed services, delivered by our exemplary sales and customer service teams,” said Scott Pelletier, CTO at Lewan Technology.
From CRN:
This annual list, spanning eight categories, from hardware and software sales to managed IT services, recognizes the top revenue-generating technology integrators, MSPs and IT consultants in North America. Solution providers are ranked based on revenue, determined by product and services sales during 2014.
“The companies represented here are truly dedicated to the needs of customers today. With an evolving IT landscape, this prestigious list serves as a valuable industry resource to help vendors navigate the solution provider community and identify the best partners for their business,” said Robert Faletra, CEO, The Channel Company. “We congratulate the featured solution providers for their forward-thinking approach to solutions sales and look ahead to their continued success.”
About The Channel Company
The Channel Company, with established brands including CRN®, XChange® Events, IPED® and SharedVue®, is the channel community’s trusted authority for growth and innovation. For more than three decades, we have leveraged our proven and leading-edge platforms to deliver prescriptive sales and marketing solutions for the technology channel. The Channel Company provides Communication, Recruitment, Engagement, Enablement, Demand Generation and Intelligence services to drive technology partnerships. Learn more at http://www.thechannelcompany.com.
Cisco Enhances SDN Strategy and Offerings Across the Entire Nexus Portfolio with New VTS Automation Solution
Interest in Software Defined Networking (SDN) continues to grow thanks to its ability to make networks more programmable, flexible, and agile. This is accomplished by accelerating application deployment and management, simplifying and automating network operations, and creating a more responsive IT model.
Cisco is extending its leadership in SDN and data center automation solutions with today’s announcement of the Cisco Virtual Topology System (VTS), which improves IT automation and optimizes cloud networks across the entire Nexus switching portfolio. Cisco VTS focuses on the management and automation of VXLAN-based overlay networks, a critical foundation for both enterprise private clouds and service providers. The announcement of the VTS overlay management system follows Cisco’s announcement earlier this year of support for the EVPN VXLAN standard, which underlies the VTS solution.
Cisco VTS extends the Cisco SDN strategy and portfolio, which includes Cisco Application Centric Infrastructure (ACI) as well as Cisco’s programmable NX-OS platforms, to a broader market and additional use cases. These include our massive installed base of Nexus 2000-7000 products, and customers whose primary SDN challenge is the automation, management, and ongoing optimization of their virtual overlay infrastructure. With support for the EVPN VXLAN standard, VTS furthers Cisco’s commitment to open SDN standards and increases interoperability in heterogeneous switching environments, with third-party controllers, and with cloud automation tools that sit on top of the open northbound APIs of the VTS controller.
Cisco is committed to delivering this degree of interoperability and integration with multi-vendor ecosystems for all of its SDN architectures, as we have previously exhibited with ACI, with the contributions we have made on Group Based Policies (GBP) to open source communities, and with our own Open SDN Controller based on OpenDaylight. With VTS, we now offer the broadest range of SDN approaches across the broadest range of platforms and the broadest ecosystem of partners in the industry.
Programmability | Automation | Policy
Programmable Networks: With Nexus and NX-OS Programmability across the entire portfolio, we deliver value to customers deploying a DevOps model for automating network configuration and management. These customers are able to leverage the same toolsets (such as existing Linux utilities) to manage their compute and networks in a consistent operational model. We continue to modernize the Nexus operating system and enhance the existing NX-APIs by adding secure SDK with native Linux packaging support, additional OpenFlow support and delivering an object driven programming model. This enables speed and efficiency when programming the network while also securely deploying 3rd party applications for enhanced monitoring and visibility such as Splunk, Nagios and tcollector natively on the network.
Programmable Fabrics: Overlay networks provide the foundation for scalable multi-tenant cloud networks. VXLAN, developed by Cisco along with other virtualization platform vendors, has emerged as the most widely adopted multi-vendor overlay technology. To advance this technology further, a scalable and standards-based control plane mechanism such as BGP EVPN is required. Using BGP EVPN as a control-plane protocol for VXLAN optimizes forwarding and eliminates the need for inefficient flood-and-learn approaches while improving scale. It also facilitates large-scale deployments of overlay networks by removing complexity, fosters higher interoperability through open standard control plane solutions, and provides access to a wider range of cloud management platforms.
Application Centric Policy: Cisco will be able to offer the most complete solution on the Nexus 9000 series whether it is ACI policy-based automation or BGP EVPN-based overlay management. Customers will now have a choice for running an EVPN VXLAN controller in a traditional Nexus 9000 “standalone” mode, or to leverage ACI and the APIC controller with the full ACI application policy model, and integrated overlay and physical network visibility, telemetry and health scores. VTS will support EVPN VXLAN technology across a range of topologies (spine-leaf, three-tier aggregation, full mesh) with the full Nexus portfolio, as well as interoperate with a wide range of Top of Rack (ToR) switches and WAN equipment.
VTS Design and Architecture
The Cisco Virtual Topology System (VTS) is a cloud/overlay SDN solution that provides Layer 2 and Layer 3 connectivity to tenant, router, and service VMs. Cisco VTS is designed to address the multi-tenant connectivity requirements of virtualized hosts as well as bare-metal servers. VTS consists of the Virtual Topology Controller (VTC), the centralized management and control system, and the Virtual Topology Forwarder (VTF), the host-side virtual networking component and VXLAN tunnel endpoint. Together they implement the controller and forwarding functionality in an SDN context.
The Cisco VTS solution is designed to be hypervisor agnostic. Cisco VTS supports both the VMware ESXi hypervisor and KVM on Red Hat Linux. VTS will support integration with OpenStack and VMware vCenter, tying into other data center and cloud infrastructure automation. VTS also integrates with Cisco Prime Data Center Network Manager (DCNM) for underlay management. The Cisco VTC, the VTS controller component, will provide a REST-based northbound API for integration into other systems.
Cisco VTS will be available in August 2015.
Source: blog post by Gary Kinghorn at http://blogs.cisco.com/datacenter/vts
What the heck is an IOP (and why do I care)? Disk math, and does it matter?
I’ll start by answering the title question first. IOP is an acronym standing for Input Output Operation. It does seem like it should be IOO, but that’s just not the way it worked out.
A related bit of trivia: we generally talk either about total IOPs for a given task, or about a rate, typically IOPs per second, noted as IOPS.
With that the Wikipedia portion of today’s discussion is complete. Let’s move on to why we care about IOPs.
Most frequently the topic comes up in terms of either measuring a disk system’s performance, or attempting to size a disk system for a specific workload or loads. We want to know not how much throughput a given system needs, but how many discrete reads and writes it’s going to generate in a given unit of time.
The reason we want to know is that a given storage system has a discrete number of IOPS it can deliver. You can read my article on Disk Physics to get a better understanding of why.
In the old days this was mostly a math problem. We knew that a 7.2K drive would deliver 60-80 IOPS, a 10K drive would deliver 100-120, and a 15K drive would give us 120-150 IOPS. We also knew that we had to deal with RAID penalties associated with write operations to storage arrays. Typical values were 1 extra IO per write for RAID1 and RAID10, and 4 extra IOs per write for RAID5 and RAID50.
The idea here was fairly simple. If I needed a disk subsystem that would give me 1500 IOPS read, then I needed ten 15K drives to do that (1500/150 = 10). If I needed 1500 IOPS write in a RAID10 config, then I needed twenty 15K drives ((1500 + (1500 * 1))/150 = 20). The same 1500 IOPS write in a RAID5 config took more spindles because of the RAID penalty, but it was also easily calculated as 50 drives ((1500 + (1500 * 4))/150 = 50).
That, by the way, is why database vendors have always asked that their logs be placed on RAID1 or RAID10 storage. When writing to RAID5 storage it’s necessary to read the existing data and parity, recalculate the parity, and write both back. Hence the write penalty.
The math got a bit more complicated when we had a mix of reads and writes. What we have to do there is calculate the read and write portions separately and then add the results together. Suppose we had a workload of 3000 IOPS, where 50% was read and 50% was write. We’d have 1500 IOPS read and 1500 IOPS write. On a RAID10 system we’d need 10 drives to satisfy the reads and 20 drives to satisfy the writes, so a total of 30 drives is needed to satisfy the whole 3000 IOPS workload.
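To make the arithmetic concrete, here is a small sketch of that old-school spindle math; the per-drive IOPS figures and write penalties are the rule-of-thumb numbers quoted above, and the function name is just for illustration:

```python
import math

# Rule-of-thumb per-spindle IOPS and RAID write penalties from the text.
DRIVE_IOPS = {"7.2K": 80, "10K": 120, "15K": 150}
RAID_WRITE_PENALTY = {"RAID10": 1, "RAID5": 4}  # extra IOs per write

def spindles_needed(read_iops, write_iops, drive="15K", raid="RAID10"):
    """Old-school spindle count: reads cost 1 IO, writes cost 1 + penalty."""
    penalty = RAID_WRITE_PENALTY[raid]
    backend_iops = read_iops + write_iops * (1 + penalty)
    return math.ceil(backend_iops / DRIVE_IOPS[drive])  # round up to whole drives

print(spindles_needed(1500, 0))                # 10 drives for 1500 read IOPS
print(spindles_needed(0, 1500))                # 20 drives for 1500 write IOPS, RAID10
print(spindles_needed(0, 1500, raid="RAID5"))  # 50 drives for the same writes, RAID5
print(spindles_needed(1500, 1500))             # 30 drives for the 50/50 3000 IOPS load
```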
Those were the old days when we could pretty easily look at a disk subsystem and calculate how much performance it should deliver. Modern disks however have changed the rules some.
How did they change the rules? Well, basically they have a way of making IOPs disappear.
Consider for a moment NetApp’s WAFL filesystem. WAFL works by caching write operations to NVRAM on the controller and telling the application that the IO is complete, even though no physical IO operation has actually taken place. Thus far this sounds like a write-back cache, but here’s the difference: WAFL doesn’t just perform a “lazy write” of the cached data. It waits until it has a series of writes which need to be written to the physical disks, then looks for a place on disk where it can write all of those blocks down at once in sequence, thereby taking perhaps 4 or 10 (or more) physical IOPs and combining them into one. WAFL actually takes this a step further by looking for places on disk where it doesn’t have to read the stripe before writing it, in an attempt to also avoid paying the RAID write penalties. This last is the reason WAFL performance degrades as the disk array becomes very full; it becomes harder to find unused space.
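As a toy illustration of that coalescing idea (my own sketch, not NetApp’s implementation; the class and flush threshold are invented for the example), buffered writes are acknowledged immediately and flushed as a single sequential operation:

```python
# Toy write-coalescing cache: acknowledge writes from the buffer, then
# flush the whole batch as one sequential physical IO instead of one
# physical IO per logical write.
class CoalescingWriteCache:
    def __init__(self, flush_threshold=10):
        self.pending = []                  # buffered (block_id, data) writes
        self.flush_threshold = flush_threshold
        self.physical_ios = 0              # IOs actually sent to disk

    def write(self, block_id, data):
        self.pending.append((block_id, data))  # "NVRAM" buffer; ack the app now
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.pending:
            # Write the whole batch to one contiguous free region:
            # many logical writes become a single physical IO.
            self.physical_ios += 1
            self.pending.clear()

cache = CoalescingWriteCache()
for block in range(100):
    cache.write(block, b"data")
cache.flush()
print(cache.physical_ios)  # 10 physical IOs for 100 logical writes
```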
Another example of vanishing IOPs is Nimble’s CASL filesystem, which expands on what WAFL does in two additional ways. First, it compresses all the data as it comes into the array, which further reduces the number of IOPs necessary to write the data. Second, CASL is built around the idea of very large flash-based caches, so that physical IOPs to spinning disk can be avoided for reads. The net of this is that write IOPs are reduced and read IOPs are nearly eliminated completely. In testing done by Dan Brinkman while he was at Lewan, a Nimble array with 12 7.2K disks was clocked at over 18,000 IOPS. We know the physical disks were capable of no more than 960 IOPS (80 * 12 = 960), which is a testament to how effective CASL is at reducing physical IOPs.
A third example of IO reduction is what Atlantis Computing does in their ILIO and USX products when dealing with persistent data (in-memory volumes are a topic for another day). Atlantis takes the idea of caching and compression further still by adding inline data deduplication, wherein data is evaluated before being written to determine whether an identical block has already been written. If it is an identical block, no physical write is actually performed; the filesystem pointer for that block is merely updated to reflect an additional reference. Atlantis also caches the data (reads and writes) in RAM or on flash to further reduce physical IO operations.
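A minimal sketch of that inline-deduplication idea (again an illustration, not Atlantis’s code; the DedupStore class is invented for the example): hash each incoming block and only perform a physical write the first time a given block is seen.

```python
import hashlib

# Minimal inline-deduplication illustration: identical blocks are
# written once; later copies just update a reference count.
class DedupStore:
    def __init__(self):
        self.blocks = {}        # content hash -> stored block data
        self.refcount = {}      # content hash -> number of logical references
        self.physical_writes = 0

    def write_block(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:
            self.blocks[digest] = data   # first copy: a real physical write
            self.refcount[digest] = 0
            self.physical_writes += 1
        self.refcount[digest] += 1       # duplicate: pointer update only
        return digest

store = DedupStore()
for _ in range(50):                              # e.g. 50 VDI clones writing
    store.write_block(b"same OS image block")    # the same OS image block
print(store.physical_writes)  # 1 physical write for 50 logical writes
```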
The extreme case of this is the all-flash storage array (or subsystem), which is available from many vendors these days (Compellent, NetApp, Cisco, Atlantis, and VMware vSAN all offer all-flash options, and there are many more besides). All-flash arrays eliminate physical disk IO by eliminating the physical disks. They’ve made the flash cache tier so large that there is no longer any need to store the data on a spinning drive. There is still an upper bound for these arrays, but it’s tied to controllers and bandwidth rather than the physics of the storage medium.
So what’s the net of all this?
The first part is that storage has gotten smarter and more efficient by making better use of CPUs and memory, letting it deliver higher performance and better data density with fewer spinning drives.
The second part is that the old-school disk math around how many IOPS you need and how many spindles (spinning disks) will be required is largely obsolete. Unless you’re building an old-school storage array or using internal disks in your server, the storage is probably doing something to reduce and/or eliminate physical disk IOPs on your behalf. That makes the idea that you can judge the performance of storage by the number and type of drives it uses pretty much false; a case of not being able to judge a book by its cover.
You’ll need to discuss your workload with your storage vendor, determine how the array is going to handle your data, and then rely on the vendor to size their solution properly for your needs.