Citrix Default Printer Won’t Retain

The Windows default printer is a magical thing. This is the printer that is selected by default when you print from an application. Depending on your particular printing workflow, this may be the only printer you ever use. Some applications have a quick-print function that sends a job to the default printer using default settings and no prompts (for example, portrait orientation and a single copy). To make a printer your default, simply right-click it and select Set as default printer.

default_printer

When you use Citrix, a Windows default printer is still a Windows default printer. The difference is that Citrix has administrative policies to help you decide what the default will be.

I recently ran into an issue in a new XenDesktop 7.6 environment where users could select a new default printer using the method above, but the next day when they logged on to their desktops it was set back to Microsoft XPS Document Writer. A quick note on Microsoft XPS Document Writer, as you may have noticed it installed on your computer: it is really a print-to-file driver Microsoft created to let you save print output in the Microsoft XML Paper Specification format. If you have never used it, do not feel bad; it is more likely you have used the immensely popular PDF format, created by Adobe before becoming an open standard in 2008.

By default, the user’s current printer is used as the default printer for the session. For example, my laptop’s default printer is HP Deskjet 3520 series (Network). When I log on to my Citrix desktop, it redirects the laptop’s printers into the session, including my default printer. That is ideal for a laptop user.

redirected_printer

For my next example, I am using a thin client that does not have a default printer because it does not have a full OS; it can only connect to a Citrix desktop. When I log on from the thin client, the session will not see a client default printer, so it will make the first printer on the Citrix desktop the default. Oftentimes this ends up being the Microsoft XPS Document Writer instead of the HP Deskjet 3520 series (Network).

At first, the issue seemed like a Windows user profile problem, since everyone lost the setting from one logon to the next. After verifying that other Windows user settings were being retained (e.g. wallpaper, Office settings, and the printer mappings themselves), I moved on to Citrix print policies. There is a specific policy I found interesting:

Default printer

citrix_default_printer_policy

Looking closer at the policy, it defaults to “Set default printer to the client’s main printer”. Most of the time this will result in using the default printer on the user’s endpoint (e.g. laptop or desktop). If that endpoint is a thin client or even an iPad, it will not have a default printer to redirect, so you will end up with the first printer in the session.

I made a new policy, set it to “Do not adjust the user’s default printer”, gave it a higher priority than the others, and assigned it to my test user account.

citrix_default_printer_policy_details

I then ran gpupdate on each test worker to verify it had the new policy. To test, I logged on with the test user and changed my default printer to a network printer. I then logged off and put that test server in maintenance mode, ensuring my next logon would go to the other test server. Success: my new default printer was retained. To be extra sure nothing was cached locally, I rebooted both non-persistent workers and logged in again. Success. The final steps were to apply the policy to more users and have them test before rolling it out to everyone on both the test and production workers.

Printing is rarely thought of as complicated, but it always is. If you are running into a similar issue, this policy change could be your answer.

Brian Olsen @sagelikebrian

Microsoft Excel Not Enough Memory or Disk Space

During a recent deployment of XenApp 7.6 on Windows Server 2012 R2, users running an application that exported data to Excel kept getting this error.

excel

Checking the XenApp session host server, which was sized at 2 vCPU and 8 GB of RAM, there was plenty of memory available, as only one user was logged into the server. Launching Excel and opening a workbook worked fine and did not produce the error, and after patching Office 2010 to the latest level the error still persisted. After investigating, there was no obvious reason why this error would appear.

This appears to be a bug in Excel 2010 and Excel 2013 running on Windows Server 2012 R2 when AppData\Local is excluded with Citrix Profile Management, an exclusion commonly used to reduce profile size. With this configured, the Cache folder, which is one of the User Shell Folders in the user's profile, ends up without enough space allocated.

cache

The solution: redirect the user's Cache directory to C:\Windows\Temp, but do so without loading the hive and hacking the default profile's NTUSER.DAT.

First, assign the Users group Modify rights to C:\Windows\Temp; otherwise users will not have access and this will not work.

temp

Create a GPO Preferences registry collection named something descriptive, such as Excel Cache Directory.

cachegpo

Create a new Registry Item pointing to: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders
The value name should be Cache
The data should be C:\WINDOWS\TEMP
The type should be REG_EXPAND_SZ

cachegposetting
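For a quick one-off test on a single machine, the same value can also be set directly; below is a Windows-only sketch using Python's winreg module. The GPO Preference described above remains the right way to deploy this at scale.

```python
# Windows-only sketch: point the current user's Cache shell folder at
# C:\WINDOWS\TEMP. The GPO registry item above is the supported way to
# apply this at scale; this is just a one-machine test.
import winreg

KEY = r"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY, 0, winreg.KEY_SET_VALUE) as k:
    # REG_EXPAND_SZ, matching the type used in the GPO registry item
    winreg.SetValueEx(k, "Cache", 0, winreg.REG_EXPAND_SZ, r"C:\WINDOWS\TEMP")
```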

Allow the GPO to replicate, run GPUPDATE /FORCE, and test; you should no longer see the error.

The next time you encounter this issue, give this a try. If you have questions or more information, please leave a comment.

Johnny Ma @mrjohnnyma

BriForum Comes to Denver

IT conferences are a great way to catch up on what is new, take classes, and network with peers in the industry. I have been lucky enough to attend great shows like Citrix Summit and Synergy as well as VMware VMworld over the years. The conference that always fell just out of reach for me was BriForum. This year that is all going to change. I am more than a little excited that one of the world’s premier IT conferences has chosen Denver, Colorado for this year’s US location. BriForum is an independent conference that provides a vendor-neutral perspective on current and emerging technologies and services.

redrocks

Check out this year’s list of sessions:
http://www.brianmadden.com/blogs/gabeknuth/archive/2015/03/09/check-out-the-list-of-sessions-for-briforum-denver-2015-july-20-22.aspx

If you have a keen eye, you may have noticed a third of the way down the list a special session, “vSGA, vDGA, vGPU, and Software – When and Why”, being presented by Lewan’s very own expert speaker Kenneth Fingerlos (@kfingerlos).

kenneth_sm

Kenneth will be talking about the new graphics-intensive workloads that are possible in VDI thanks to high-end GPUs from NVIDIA. He will specifically dig into the different methods you can use to virtualize the GPU and when and why you would choose each method. I promise you this will be a deep technical dive preparing you for your next graphics-intensive virtual desktop project.

Check out the Lewan IT Solutions Technical Blog for more great technical information from Kenneth.

Come join Lewan at BriForum 2015 if you would like to learn more about solutions from Citrix, VMware, Microsoft and much more.

Brian Olsen (@sagelikebrian)

Cisco to Secure the IoE (Internet of Everything) by Building Security Across Its Products

Cisco says it is adding more sensors to network devices to increase visibility, more control points to strengthen enforcement, and pervasive threat protection to reduce time-to-detection and time-to-response. The plan includes:

  • Endpoints: Customers using the Cisco AnyConnect 4.1 VPN client now can deploy threat protection to VPN-enabled endpoints to guard against advanced malware
  • Campus and Branch: FirePOWER Services solutions for Cisco Integrated Services Routers (ISR) provide a centrally managed intrusion prevention system and advanced malware protection at branch offices where dedicated security appliances may not be feasible
  • Network as a Sensor and Enforcer: Cisco says it has embedded multiple security technologies into the network infrastructure to provide threat visibility to identify users and devices associated with anomalies, threats and misuse of networks and applications. New capabilities include broader integration between Cisco’s Identity Services Engine (ISE) and Lancope StealthWatch to allow enterprises to identify threat vectors based on ISE’s context of who, what, where, when and how users and devices are connected and access network resources.

StealthWatch can also now block suspicious network devices by initiating segmentation changes in response to identified malicious activity. ISE can then modify access policies for Cisco routers, switches, and wireless LAN controllers embedded with Cisco’s TrustSec role-based technology.

Cisco has also added NetFlow monitoring to its UCS servers to give customers greater visibility into network traffic flow patterns and threat intelligence information in the data center.

Other aspects of the plan include Hosted Identity Services, which is designed to provide a cloud-delivered service for the Cisco Identity Services Engine security policy platform. The new hosted service provides role-based, context-aware identity enforcement of users and devices permitted on the network, Cisco says.

The strategy also includes a pxGrid ecosystem of 11 new partners that plan to develop products for cloud security and network/application performance management for Cisco’s pxGrid security context information exchange fabric. The fabric enables security platforms to share information to better detect and mitigate threats.

The company is also investing heavily in integrating its ASA firewalls with its Application Centric Infrastructure SDN.

More information can be found at http://www.networkworld.com/article/2932547/security0/cisco-plans-to-embed-security-everywhere.html


Marlins Score Big with Citrix

It seems like every other week there is an IT security breach that makes the news.  Many of these hacks score credit card information that can immediately be used or sold.  Recently there have been allegations that members of the St. Louis Cardinals hacked into the Houston Astros’ system to gather information on players.

New York Times – Cardinals Investigated for Hacking Into Astros’ Database
Kansas City Star – Astros GM Luhnow disputes details related to Cardinals hacking probe

At face value, it seems shocking to hear about hacking in Major League Baseball.  There was a time when America’s favorite pastime was not considered high tech.  It was the boys of summer playing a great game and the best team won.  In this Moneyball era of baseball statistics, numbers and data win big.


You don’t have to believe me, just ask Brad Pitt.

As soon as I heard the news it made me think of what the Marlins are doing with technology from Citrix.


The Marlins are scoring two big wins with Citrix.  First, they are doing things that have never before been possible and making a better experience for their customers.  Second, they have a focus on security that has kept their IT department out of national headlines while protecting their team and intellectual property.  It is hard to put a price on the total package.

We should not give all the credit to the Marlins’ IT foresight.  After all, the Simpsons predicted this way back in 1999.

Brian Olsen @sagelikebrian

Lewan Technology Named to CRN Solution Provider 500

Lewan has been recognized on CRN’s 2015 Solution Provider 500 list as one of North America’s largest technology integrators, managed service providers, and IT consultants.

“We are proud to be selected as a top provider again. We strive to exceed our customers’ expectations with our solutions and professional and managed services, delivered by our exemplary sales and customer service teams,” said Scott Pelletier, CTO at Lewan Technology.

SP 500 highlight

From CRN:

This annual list, spanning eight categories, from hardware and software sales to managed IT services, recognizes the top revenue-generating technology integrators, MSPs and IT consultants in North America. Solution providers are ranked based on revenue, determined by product and services sales during 2014.

“The companies represented here are truly dedicated to the needs of customers today. With an evolving IT landscape, this prestigious list serves as a valuable industry resource to help vendors navigate the solution provider community and identify the best partners for their business,” said Robert Faletra, CEO, The Channel Company. “We congratulate the featured solution providers for their forward-thinking approach to solutions sales and look ahead to their continued success.”

About The Channel Company
The Channel Company, with established brands including CRN®, XChange® Events, IPED® and SharedVue®, is the channel community’s trusted authority for growth and innovation. For more than three decades, we have leveraged our proven and leading-edge platforms to deliver prescriptive sales and marketing solutions for the technology channel. The Channel Company provides Communication, Recruitment, Engagement, Enablement, Demand Generation and Intelligence services to drive technology partnerships. Learn more at http://www.thechannelcompany.com.

Cisco Enhances SDN Strategy and Offerings Across the Entire Nexus Portfolio with new VTS Automation Solution

Interest in Software Defined Networking (SDN) continues to grow thanks to its ability to make networks more programmable, flexible, and agile. This is accomplished by accelerating application deployment and management, simplifying and automating network operations, and creating a more responsive IT model.

Cisco is extending its leadership in SDN and Data Center Automation solutions with the announcement today of Cisco Virtual Topology System (VTS), which improves IT automation and optimizes cloud networks across the entire Nexus switching portfolio. Cisco VTS focuses on the management and automation of VXLAN-based overlay networks, a critical foundation for both enterprise private clouds and service providers. The announcement of the VTS overlay management system follows Cisco’s announcement earlier this year supporting the EVPN VXLAN standard, which underlies the VTS solution.

Cisco VTS extends the Cisco SDN strategy and portfolio, which includes Cisco Application Centric Infrastructure (ACI) as well as Cisco’s programmable NX-OS platforms, to a broader market and additional use cases, including Cisco’s massive installed base of Nexus 2000-7000 products and customers whose primary SDN challenge is the automation, management, and ongoing optimization of their virtual overlay infrastructure. With support for the EVPN VXLAN standard, VTS furthers Cisco’s commitment to open SDN standards and increases interoperability in heterogeneous switching environments, with third-party controllers, and with cloud automation tools that sit on top of the open northbound APIs of the VTS controller.

Blog graphic

Cisco is committed to delivering this degree of interoperability and integration with multi-vendor ecosystems for all of its SDN architectures, as we have previously exhibited with ACI, with the contributions we have made on Group Based Policies (GBP) to open source communities, and with our own Open SDN Controller based on OpenDaylight. With VTS, we now offer the broadest range of SDN approaches across the broadest range of platforms and the broadest ecosystem of partners in the industry.

Programmability | Automation | Policy

Programmable Networks: With Nexus and NX-OS programmability across the entire portfolio, we deliver value to customers deploying a DevOps model for automating network configuration and management. These customers are able to leverage the same toolsets (such as existing Linux utilities) to manage their compute and networks in a consistent operational model. We continue to modernize the Nexus operating system and enhance the existing NX-APIs by adding a secure SDK with native Linux packaging support, additional OpenFlow support, and an object-driven programming model. This enables speed and efficiency when programming the network while also securely deploying third-party applications for enhanced monitoring and visibility, such as Splunk, Nagios, and tcollector, natively on the network.

Programmable Fabrics: Overlay networks provide the foundation for scalable multi-tenant cloud networks. VXLAN, developed by Cisco along with other virtualization platform vendors, has emerged as the most widely adopted multi-vendor overlay technology. In order to advance this technology further, a scalable and standards-based control plane mechanism such as BGP EVPN is required. Using BGP EVPN as a control-plane protocol for VXLAN optimizes forwarding and eliminates the need for inefficient flood-and-learn approaches while improving scale. It also facilitates large-scale deployments of overlay networks by removing complexity, fosters higher interoperability through open, standards-based control plane solutions, and enables access to a wider range of cloud management platforms.

Application Centric Policy: Cisco will be able to offer the most complete solution on the Nexus 9000 series whether it is ACI policy-based automation or BGP EVPN-based overlay management.  Customers will now have a choice for running an EVPN VXLAN controller in a traditional Nexus 9000 “standalone” mode, or to leverage ACI and the APIC controller with the full ACI application policy model, and integrated overlay and physical network visibility, telemetry and health scores. VTS will support EVPN VXLAN technology across a range of topologies (spine-leaf, three-tier aggregation, full mesh) with the full Nexus portfolio, as well as interoperate with a wide range of Top of Rack (ToR) switches and WAN equipment.

VTS Design and Architecture

The Cisco Virtual Topology System (VTS) is a cloud/overlay SDN solution that provides Layer 2 and Layer 3 connectivity to tenant, router, and service VMs. Cisco VTS is designed to address the multi-tenant connectivity requirements of virtualized hosts as well as bare-metal servers. VTS comprises the Virtual Topology Controller (VTC), the centralized management and control system, and the Virtual Topology Forwarder (VTF), the host-side virtual networking component and VXLAN tunnel endpoint. Together they implement the controller and forwarding functionality in an SDN context.

The Cisco VTS solution is designed to be hypervisor agnostic; it supports both the VMware ESXi hypervisor and KVM on Red Hat Linux. VTS will support integration with OpenStack and VMware vCenter for integration with other data center and cloud infrastructure automation. VTS also integrates with Cisco Prime Data Center Network Manager (DCNM) for underlay management. The Cisco VTC, the VTS controller component, will provide a REST-based northbound API for integration into other systems.

Cisco VTS will be available in August 2015.

Source: http://blogs.cisco.com/datacenter/vts

What the heck is an IOP (and why do I care)? Disk math, and does it matter?

I’ll start by answering the title question first.  IOP is an acronym standing for Input/Output Operation.  It does seem like it should be IOO, but that’s just not the way it worked out.

A related bit of trivia: we generally talk either about total IOPs for a given task or about a rate, typically IOPs per second, noted as IOPS.

With that the Wikipedia portion of today’s discussion is complete.   Let’s move on to why we care about IOPs.

Most frequently the topic comes up in terms of either measuring a disk system’s performance, or attempting to size a disk system for a specific workload or loads.  We want to know not how much throughput a given system needs, but how many discrete reads and writes it’s going to generate in a given unit of time.

The reason we want to know is that a given storage system has a discrete number of IOPS it can deliver.  You can read my article on Disk Physics to get a better understanding of why.

In the old days this was mostly a math problem.  We knew that a 7.2K drive would deliver 60-80 IOPS, a 10K drive would deliver 100-120, and a 15K drive would give us 120-150 IOPS.  We also knew that we had to deal with RAID penalties associated with write operations to storage arrays.  Typical values were a penalty of 1 extra IO per write for RAID1 and RAID10, and 4 for RAID5 and RAID50.

The idea here was fairly simple.  If I needed a disk subsystem that would give me 1500 IOPS read, then I needed 10 15K drives to do that (1500/150 = 10).  If I needed 1500 IOPS write in a RAID10 config, then I needed 20 15K drives ((1500 + (1500 * 1))/150 = 20).  The same 1500 IOPS write in a RAID5 config took more spindles because of the RAID penalty, but it was also easily calculated as 50 drives ((1500 + (1500 * 4))/150 = 50).

That, by the way, is why database vendors have always asked that their logs be placed on RAID1 or RAID10 storage.  When writing to RAID5 storage it is necessary to read the entire RAID stripe, recalculate it, and re-write it, hence the penalty of 4.

The math got a bit more complicated when we had a mix of reads and writes.  What we have to do there is calculate the read and write portions separately and then add the results together.  Suppose we had a workload of 3000 IOPS that was 50% read and 50% write; we’d have 1500 IOPS read and 1500 IOPS write.  On a RAID10 system we’d need 10 drives to satisfy the reads and 20 drives to satisfy the writes, so a total of 30 drives is needed to satisfy the whole 3000 IOPS workload.
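The old-school arithmetic above can be sketched in a few lines, using the article's rule-of-thumb drive speeds and write penalties:

```python
# Spindle-count sketch using the article's rule-of-thumb numbers:
# a 15K drive delivers ~150 IOPS, and each write incurs extra
# back-end IOs (1 extra for RAID1/10, 4 extra for RAID5/50).
import math

DRIVE_IOPS = {"7.2K": 80, "10K": 120, "15K": 150}
WRITE_PENALTY = {"RAID10": 1, "RAID5": 4}  # extra back-end IOs per write

def spindles(read_iops, write_iops, raid, drive="15K"):
    """Drives needed to satisfy a front-end read/write workload."""
    backend = read_iops + write_iops * (1 + WRITE_PENALTY[raid])
    return math.ceil(backend / DRIVE_IOPS[drive])

print(spindles(1500, 0, "RAID10"))     # 1500 IOPS read -> 10 drives
print(spindles(0, 1500, "RAID10"))     # 1500 IOPS write -> 20 drives
print(spindles(0, 1500, "RAID5"))      # 1500 IOPS write -> 50 drives
print(spindles(1500, 1500, "RAID10"))  # 50/50 mix of 3000 IOPS -> 30 drives
```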

Those were the old days when we could pretty easily look at a disk subsystem and calculate how much performance it should deliver.  Modern disks however have changed the rules some.

How did they change the rules?   Well, basically they have a way of making IOPs disappear.

Consider for a moment NetApp’s WAFL filesystem.  WAFL works by caching write operations in NVRAM on the controller and telling the application that the IO is complete; no physical IO operation has actually taken place.  So far this sounds like a write-back cache, but here’s the difference: WAFL doesn’t just perform a “lazy write” of the cached data.  It waits until it has a series of writes that need to be written to the physical disks, then looks for a place on disk where it can write all of those blocks down at once in sequence, thereby taking perhaps 4 or 10 (or more) physical IOPs and combining them into one.  WAFL actually takes this a step further by looking for places on disk where it doesn’t have to read the stripe before writing it, in an attempt to also avoid paying the RAID write penalties.  This last is the reason WAFL performance degrades as the disk array becomes very full: it becomes harder to find unused space.
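As a toy illustration of the coalescing idea (illustrative only, not NetApp's actual algorithm), imagine buffering acknowledged logical writes and flushing each full batch as one sequential physical IO:

```python
# Toy model of write coalescing: buffer logical writes (acked
# immediately, NVRAM-style), then flush each full batch as a single
# sequential physical IO. Not WAFL itself, just the general idea.
class CoalescingWriter:
    def __init__(self, batch_size=8):
        self.batch_size = batch_size
        self.buffer = []
        self.logical_ios = 0    # writes the application thinks it did
        self.physical_ios = 0   # sequential writes actually issued

    def write(self, block):
        self.logical_ios += 1
        self.buffer.append(block)      # acknowledged before any disk IO
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.physical_ios += 1     # one sequential stripe write
            self.buffer.clear()

w = CoalescingWriter(batch_size=8)
for i in range(64):
    w.write(i)
w.flush()
print(w.logical_ios, w.physical_ios)   # 64 logical -> 8 physical
```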

Another example of vanishing IOPs is Nimble’s CASL filesystem, which expands on what WAFL does in two ways.  First, it compresses all data as it comes into the array, which further reduces the number of IOPs necessary to write it.  Second, CASL is built around very large flash-based caches, so that physical IOPs to spinning disk can be avoided for reads.  The net of this is that write IOPs are reduced and read IOPs are nearly eliminated completely.  In testing done by Dan Brinkman while he was at Lewan, a Nimble array with 12 7.2K disks was clocked at over 18,000 IOPS.  We know that the physical disks were capable of no more than 960 IOPS (80 * 12 = 960), which is a testament to how effective CASL is at reducing physical IOPs.

A third example of IO reduction is what Atlantis Computing does in its ILIO and USX products when dealing with persistent data (in-memory volumes are a topic for another day).  Atlantis takes the ideas of caching and compression further still by adding inline data deduplication, wherein data is evaluated before being written to determine whether an identical block has already been written.  If it is an identical block, no physical write is actually performed; the filesystem pointer for that block is merely updated to reflect an additional reference.  Atlantis also caches data (reads and writes) in RAM or on flash to further reduce physical IO operations.
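A minimal sketch of the inline-dedup idea, assuming simple content-hash matching (illustrative only, not Atlantis's implementation):

```python
# Toy inline dedup: identical blocks are stored once and refcounted;
# a duplicate write is just a pointer/refcount update, not a disk IO.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}          # digest -> (data, refcount)
        self.physical_writes = 0

    def write(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest in self.blocks:
            stored, refs = self.blocks[digest]
            self.blocks[digest] = (stored, refs + 1)  # pointer update only
        else:
            self.blocks[digest] = (data, 1)
            self.physical_writes += 1                 # real IO happens here
        return digest

s = DedupStore()
for block in [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]:
    s.write(block)
print(s.physical_writes)   # 2 physical writes for 4 logical writes
```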

The extreme case of this is the all-flash storage array (or subsystem), which is available from many vendors these days (Compellent, NetApp, Cisco, Atlantis, and VMware vSAN all offer all-flash options, and there are many more).  All-flash arrays eliminate physical disk IO by eliminating the physical disks: the flash tier is so large that there is no longer any need to store the data on a spinning drive.  There is still an upper bound for these arrays, but it is tied to controllers and bandwidth rather than the physics of the storage medium.

So what’s the net of all this?

The first part is that storage has gotten smarter and more efficient by making better use of CPUs and memory, letting arrays deliver higher performance and better data density with fewer spinning drives.

The second part of the answer is that the old-school disk math around how many IOPS you need and how many spindles (spinning disks) will be required is largely obsolete.  Unless you’re building an old-school storage array or using internal disks in your server, the storage is probably doing something to reduce and/or eliminate physical disk IOPs on your behalf, making the idea that you can judge the performance of storage by the number and type of drives it uses pretty much false.  A case of not being able to judge a book by its cover.

You’ll need to discuss your workload with your storage vendor, determine how the array is going to handle your data, and then rely on the vendor to size their solution properly for your needs.

How to Fix Java Issues with the Citrix NetScaler GUI

We have all encountered the dreaded Java error when trying to connect to the Citrix NetScaler GUI.  In this post I would like to walk through the steps for resolving those Java error messages.  There are a few technical articles that try to walk you through troubleshooting this issue, but I have found the method below to be the most successful.  For me this is one of the most frustrating error messages, as I am constantly working in different versions of Java, NetScaler firmware, and browsers.

Auth

For starters, let’s go ahead and uninstall any version of Java you currently have installed.  Most versions of NetScaler 10.1 and above will support the most recent version of Java.  You can download the most recent version here.  For this exercise, we are going to assume you are using Chrome, Firefox, or IE.  In my experience, I have had the most success with the NetScaler GUI in the Chrome browser.

After you have successfully installed Java and gone through the confirmation process, go ahead and browse to your Java configuration applet, or go to Control Panel > Java (32-bit).

Once the Java Control Panel pops up, click on the Settings button.

Auth

You will now be redirected to the Temporary Internet Files dialog.  First, click on the “Delete Files” button.

Auth

Once the “Delete Files and Applications” box appears, UNCHECK all of the checkboxes and click OK.

Auth

Before clicking out of the Temporary Internet Files dialog, make sure to uncheck “Keep temporary files on my computer” and click OK.  Keeping all of these temporary files is one of the main causes of applet corruption.

Auth

That last set of steps clears out all the previously downloaded temporary applets, cookies, and certificates in your configuration.  If you are launching Java for the first time after a fresh install this might be a moot point, but I do it anyway 🙂

Now, stay in the Java Control Panel and at the top, click on the “Security” Tab.  Inside of that tab, click on “Edit Site List” at the bottom.

Auth

Once you have clicked on Edit Site List, click on Add.  Here you can add the NetScaler access gateway FQDN as an exception.  Only add websites whose certificates you know you can trust.

Auth

After you click Add, you will notice a text box appear in the same window.  Go ahead and enter your NetScaler FQDN into that field and click OK, for example: https://yournetscaler.yourdomain.com

Auth

After clicking OK, you will notice your NetScaler FQDN is now in the exceptions list.  Click OK to exit the Java Control Panel and relaunch your browser to test.

Auth


This article applies to NetScaler versions 9.3, 10.0, and 10.1.

Let me know how it goes.  Add your comments below!


Kevin B. Ottomeyer @OttoKnowsBest


Configuring Citrix Storefront Domain Pass-through with Receiver for Windows

I would like to discuss the procedure for configuring and implementing domain pass-through with Citrix StoreFront and Citrix Receiver.

First things first, let’s get Receiver installed on a test machine.

Note: this machine, and all subsequent machines, must be a member of the domain your StoreFront server is attached to in order for pass-through to work.

Download Citrix Receiver here

Once downloaded, find the path of your download location.  Now we will need to install the receiver with the single sign-on switch, as follows:

User-added image

This will install the receiver and enable and start the single sign-on service on that machine.  After the installation is complete and the machine is rebooted, log back in to your workstation and double-check that the ssonsvr.exe service was installed and is currently running under Services.

User-added image

Once you have confirmed, let’s move over to your StoreFront server.

Launch the StoreFront administration console on the StoreFront server and, on the left side of the console, click on Authentication.

Auth

Once Authentication is selected, move over to the right side of the console and, under Actions > Authentication, click on Add/Remove Methods.

Auth

After clicking on Add/Remove Methods, a dialog box should appear with options for the methods you would like to enable in StoreFront.  The second option from the top is “Domain pass-through”; click the checkbox next to that option and click OK.  This will enable StoreFront to take the credentials from the ssonsvr service on your workstation, pass them through StoreFront, and enumerate the app list without authenticating twice.

Auth

Depending on your Citrix infrastructure, you might need to propagate the changes to the other StoreFront servers in your server group.  If you have more than one StoreFront server and you do not propagate changes, you might see mixed results in your testing.

To do this, click on “Server Group” on the right side of the console and then, on the left side under Actions, click on “Propagate Changes”.  This action will replicate all the changes you just made to your authentication policies to the other StoreFront servers in your server group.

Now that you have all the configuration pieces in place, reboot the workstation you installed the receiver on and log back in.  Once logged in, you should be able to right-click on the receiver and click Open.  Receiver will prompt you for your StoreFront FQDN, or email address if you have email-based discovery enabled.  At this point your application list should enumerate without prompting for credentials.  The same goes for the web portal; test both to make sure they are passing credentials through appropriately.

If your credentials still do not pass through, below are a few troubleshooting steps you can take.  Of course, this all depends on how your environment is set up and what access you have to modify certain components in your Windows infrastructure.

Modifying local Policy to enable pass-through on the workstation

Apply the icaclient.adm template located in C:\Program Files\Citrix\ICA Client\Configuration to the client device through Local or Domain Group Policy.

Once the ADM template is imported, navigate to Computer Configuration\Administrative Templates\Classic Administrative Templates\Citrix Components\Citrix Receiver\User authentication, then double-click the “Local user name and password” setting.

User-added image

In the box that appears, make sure to select both “Enable pass-through authentication” and “Allow pass-through authentication for all ICA connections”.

User-added image

Adding Trusted Sites in your browser

On the same workstation where you are testing pass-through, open IE and navigate to Tools > Internet Options.  Click on Trusted Sites and add your StoreFront FQDN (the same address you entered into the receiver when you set it up).

Auth

Also, it wouldn’t hurt to configure pass-through in IE.  In the Internet Options Security tab, with Trusted Sites selected, choose Custom level.  Scroll to the bottom of the list and select “Automatic logon with current user name and password”.

User-added image

Configure the network provider order

On the workstation where you installed the receiver, launch Control Panel and click on Network Connections, then choose Advanced > Advanced Settings > Provider Order tab and move the Citrix Single Sign-on entry to the top of the Network Providers list.

User-added image

If you are still having problems with the receiver not passing the credentials, leave a comment with your specific issue.

References:

https://www.citrix.com/downloads/citrix-receiver.html

http://support.citrix.com/article/CTX200157


Kevin B. Ottomeyer @OttoKnowsBest