Benefits of Cisco ACI (SDN) architecture

Cisco ACI, Cisco’s software-defined networking (SDN) architecture, enhances business agility, reduces TCO, automates IT tasks, and accelerates data center application deployments.

Why Today’s Solutions Are Insufficient:

Today’s solutions lack an application-centric approach. The use of virtual overlays on top of physical layers has increased complexity by adding policies, services, and devices.

Traditional SDN solutions are network centric and based on constructs that replicate networking functions that already exist.

ACI Key Benefits:

Centralized Policy-Defined Automation Management

  • Holistic application-based solution that delivers flexibility and automation for agile IT
  • Automatic fabric deployment and configuration with single point of management
  • Automation of repetitive tasks, reducing configuration errors

Real-Time Visibility and Application Health Score

  • Centralized real-time health monitoring of physical and virtual networks
  • Instant visibility into application performance combined with intelligent placement decisions
  • Faster troubleshooting for day-2 operations

Open and Comprehensive End-to-End Security

  • Open APIs, open standards, and open source elements that enable software flexibility for DevOps teams, and firewall and application delivery controller (ADC) ecosystem partner integration
  • Automatic capture of all configuration changes integrated with existing audit and compliance tracking solutions
  • Detailed role-based access control (RBAC) with fine-grained fabric segmentation

Application Agility

  • Management of application lifecycle from development, to deployment, to decommissioning—in minutes
  • Automatic application deployment and faster provisioning based on predefined profiles
  • Continuous and rapid delivery of virtualized and distributed applications

ACI Technology Benefits

The main purpose of a data center fabric is to move traffic from physical and virtualized servers to its destination in the best possible way, and to apply meaningful services along the path, such as:

  • Traffic optimization that improves application performance
  • Telemetry services that go beyond classic port counters
  • Overall health monitoring for what constitutes an application
  • Applying security rules embedded with forwarding

The main benefits of using a Cisco ACI fabric are the following:

  • Single point of provisioning, either via GUI or via REST API (a brief example follows this list)
  • Connectivity for physical and virtual workloads with complete visibility on virtual machine traffic
  • Hypervisor compatibility and integration without the need to add software to the hypervisor
  • Ease (and speed) of deployment
  • Simplicity of automation
  • Multitenancy (network slicing)
  • Capability to create portable configuration templates
  • Hardware-based security
  • Elimination of flooding from the fabric
  • Ease of mapping application architectures into the networking configuration
  • Capability to insert and automate firewalls, load balancers, and other L4-7 services
  • Intuitive and easy configuration process
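
For example, the same tenant object you would create in the GUI can be pushed to the APIC controller through the REST API. Below is a minimal Python sketch, not a full provisioning workflow: the hostname, credentials, and tenant name are placeholders, and while aaaLogin and fvTenant are standard APIC REST object names, confirm the calls against the API documentation for your release.

    # Minimal sketch: push a tenant to the APIC via its REST API instead of the GUI.
    # Hostname, credentials, and tenant name are placeholders.
    import requests

    APIC = "https://apic.example.com"

    session = requests.Session()
    session.verify = False  # lab only; use a trusted certificate in production

    # Authenticate; the APIC returns a session cookie used by later calls.
    login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
    session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

    # Create (or update) a tenant with a single policy object push.
    tenant = {"fvTenant": {"attributes": {"name": "ExampleTenant"}}}
    response = session.post(f"{APIC}/api/mo/uni.json", json=tenant)
    response.raise_for_status()
    print(response.json())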

More information can be found at www.cisco.com/go/aci

Why is a smaller number of virtual CPUs better?

Note: This article is designed to serve as a high-level introduction to the topic and as such uses a very basic explanation. Papers are available elsewhere for those who wish to dive into more technical detail.

In a virtual environment such as VMware or Hyper-V, multiple virtual machines (VMs) operate on the same physical hardware. To make this work, a small piece of software called a hypervisor schedules the virtual resources onto the physical hardware. When a virtual machine needs CPU resources, it is placed into a CPU ready state until enough physical CPUs are available to match its number of virtual CPUs.

The hypervisor will schedule VMs to available physical resources until all resources that can be scheduled are used.

Each VM will run on the physical CPUs until either it needs to wait for an I/O operation or the VM uses up its time slice. At that point the VM will either be placed into the I/O wait state until the I/O completes or be placed back in the ready queue, waiting for available physical resources.

As physical resources become available, the hypervisor will schedule VMs to run on those resources. In some cases, not all physical resources will be in use, due to the number of virtual CPUs required by the VMs in the ready state.

The process continues as VMs either wait for I/O or use their time slice on the physical CPUs.

In some cases there are no VMs in the ready state, at which point the scheduled VM will not time out until another VM requires the resources.

Often a VM with fewer virtual CPUs will be able to be scheduled before one with more virtual CPUs due to resource availability.

In some cases a VM will complete an I/O operation and immediately be scheduled on available physical resources.

Algorithms are in place to ensure that no VM completely starves for CPU resources, but the VMs with more virtual CPUs will be scheduled less frequently and will also impact the amount of time the smaller VMs can utilize the physical resources.

A VM with high CPU utilization and little I/O will move between the ready queue and running on the CPUs more frequently. In this case, the operating system will report high CPU utilization, even though the VM may not be running for a majority of the real time involved.

In these situations, operating system tools that run within the VM may indicate that more CPUs are required when, in reality, the opposite is actually the case. A combination of metrics at the hypervisor and at the operating system level is usually required to truly understand the underlying issues.
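
To make the scheduling behavior concrete, here is a deliberately simplified simulation. It is not how ESXi or Hyper-V schedulers actually work (VMware, for example, uses relaxed co-scheduling), and the VM names, slice lengths, and aging threshold are invented for illustration; it only shows why a VM with many virtual CPUs accumulates more ready (waiting) time than narrow VMs sharing the same physical CPUs.

    # Toy co-scheduling simulation: a VM is dispatched only when the number of
    # idle physical CPUs is at least its vCPU count. A crude aging rule keeps
    # the wide VM from starving completely. Illustrative only.
    from collections import deque

    PCPUS = 4
    TICKS = 10_000
    VCPUS  = {"vm-a": 1, "vm-b": 1, "vm-c": 1, "vm-d": 1, "wide": 4}  # vCPU width
    SLICES = {"vm-a": 3, "vm-b": 4, "vm-c": 5, "vm-d": 6, "wide": 5}  # slice length
    STARVATION_LIMIT = 25  # stop backfilling past a VM that has waited this long

    ready = deque(VCPUS)                  # all VMs start in the ready queue
    running = {}                          # name -> ticks left in its time slice
    waited = {name: 0 for name in VCPUS}  # consecutive ticks spent waiting
    stats = {name: {"run": 0, "ready": 0} for name in VCPUS}

    for _ in range(TICKS):
        # Dispatch ready VMs whose full vCPU count fits on the idle physical CPUs.
        idle = PCPUS - sum(VCPUS[n] for n in running)
        for name in list(ready):
            if VCPUS[name] <= idle:
                ready.remove(name)
                running[name] = SLICES[name]
                waited[name] = 0
                idle -= VCPUS[name]
            elif waited[name] > STARVATION_LIMIT:
                break  # let CPUs drain so the starving wide VM can co-schedule

        # Account one tick of run time or ready time for every VM.
        for name in VCPUS:
            if name in running:
                stats[name]["run"] += 1
            else:
                stats[name]["ready"] += 1
                waited[name] += 1

        # VMs that used up their time slice go back to the ready queue.
        for name in [n for n, left in running.items() if left == 1]:
            del running[name]
            ready.append(name)
        for name in running:
            running[name] -= 1

    for name in VCPUS:
        s = stats[name]
        print(f"{name} ({VCPUS[name]} vCPU): ran {s['run']} ticks, waited {s['ready']} ticks")

Running this, the 4-vCPU VM spends most of its ticks in the ready state while the 1-vCPU VMs run for the large majority of ticks, which is exactly the pattern that makes the in-guest CPU view look busier than the host-level picture.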

Lewan Achieves Cisco Master Collaboration and Master Cloud & Managed Service Designations

In addition to successfully passing the requirements and audit to re-certify as a Cisco Gold Partner, Lewan Technology is honored to announce achievement of two Master Specializations: Collaboration and Cloud & Managed Services.


“These Master level certifications are the absolute highest achievement that a Cisco partner can attain in any technology area. There are only 43 partners in the United States that hold these two certifications,” explained Ray Dean, Lewan’s Director of Networking and Communications. “This honor recognizes the great engineering teams and processes we have in place, as well as our commitment to ongoing customer satisfaction and solution integration.”

Cisco Gold Partner Certification

Gold Certification offers the broadest range of expertise across high growth market opportunities known as architecture plays – Enterprise Networking, Security, Collaboration, Data Center Virtualization and SP Technology. Gold Certified Partners have also integrated the deepest level of Cisco Lifecycle Services expertise into their offerings and demonstrate a measurably high level of customer satisfaction.

Lewan has been a Cisco Gold Certified Partner since 2005.

Cisco Master Collaboration Specialization

The Master Collaboration Specialization demonstrates the highest level of expertise attainable with Cisco collaboration solutions.

Master Collaboration Specialized Partners represent an elite partner community that has met the most rigorous certification requirements and are therefore the best for complex deliveries. Lewan demonstrated the ability to design and deploy solutions that conform to Cisco validated designs. In addition, Lewan showed current examples of successful projects in which we integrated multiple solutions and technologies to support client needs. No other Cisco specialization or certification demands such extensive proof of the partner’s design and implementation capabilities.

Cisco Cloud & Managed Services Master Service Provider

The Cloud and Managed Services Program (CMSP) helps partners respond to their customers’ business needs with innovative and validated Cisco Powered services. The exclusive Master Cloud and Managed Services designation recognizes partners at the highest level of achievement, competency and capabilities.

Lewan is recognized as a partner uniquely positioned to offer best-in-class Cisco Powered services and Cloud Managed services, which are validated to ensure security, reliability, and performance.

Lewan Awarded for Customer Satisfaction Excellence from Cisco

Lewan is honored to announce achievement of Cisco’s Customer Satisfaction Excellence award. Customer Satisfaction Excellence is the highest distinction a partner can achieve within the Cisco Channel Partner Program.

“Congratulations to the entire Lewan team on this recognition of your great work over the past year and continued delivery of a world class customer experience to our customers,” said Fred Cannataro, Lewan’s CEO and President.

And from Cisco, “Customer Satisfaction Excellence is a core value we both share and a key driver of our current and future success. Thank you for your commitment to the success of your customers.”

Lewan will be recognized for Customer Satisfaction Excellence in the Cisco Partner Locator (www.cisco.com/go/partnerlocator) with a special star indicator representing our achievement. Customers, Cisco personnel, and partners will be able to identify Lewan as having achieved outstanding customer satisfaction as part of Cisco’s worldwide assessment process.

Channel Customer Satisfaction Excellence assessment is based upon the customer satisfaction results captured in the Cisco Partner Access Online tool.

About our Partnership with Cisco

Lewan has been a Cisco Gold Certified Partner since 2005. Gold status is Cisco’s highest partner designation.

Our team of engineers holds 46 individual Cisco certifications including CCIE certifications in Routing & Switching, Voice and Security. The Cisco Certified Internetwork Expert (CCIE) certification is accepted worldwide as the most prestigious networking certification in the industry. Combined with our sister companies, we offer the breadth and expertise of over 132 individual Cisco certifications nationally.

Our Cisco Certified engineers are able to offer assessments, performance evaluations, design workshops, and full installation and training around each of the areas that we hold Cisco certifications. Lewan’s certified areas of expertise are unified communications and business video, traditional route & switch environments, wireless, data center (including servers, storage, & virtualization), security & mobile device management.

Thanks to all who attended the IT Risk Event today…presentation can be seen here

What a great audience and thanks to the panel for keeping the content interesting and engaging.  The client engagement was excellent and hearing all the stories and ideas was awesome for all involved.

Thanks to Woodruff-Sawyer & Co, FORTRUST and Agility Recovery for helping Lewan Technology put on such a great event.

Click here to download the PowerPoint presentation:

it-risk-overview-customer-preso

Unable to get vCenter server certification chain error during vcOPs 5.8.1 install

During the deployment of the vcOPs vApp for a customer I ran into a new error – well, new for me. While the vApp (v5.8.1) deployed and booted fine, as I was registering it with the vCenter (v 5.1) as part of the initial configuration I got the following error: Unable to get vCenter server certification chain. Off we go to Google… Here’s a quick summary of things to check:

  • Confirm name resolution is working, username/passwords are right, etc.
    • Assuming Windows, RDP to vCenter with the user you’re attempting to use, or try accessing it via your favorite vSphere management tool
    • Hop on the console of the UI VM and ping the vCenter by IP and DNS name (username: root initial password: vmware)
  • Check that your vCenter certificate hasn’t expired. It’s the rui.crt file in c:\ProgramData\VMware\VMware VirtualCenter\SSL. This article has good info on locating and renewing your certificate, should that be your problem (a quick scripted check also follows this list).
  • In the end, my fix came by importing the certificate file to the UI VM manually as outlined in this VMware KB article.
    • Full disclosure, the symptoms in the above article didn’t match my problem exactly and I don’t like just trying random fixes. However, when I found this Blog Post, in Spanish, with my exact error recommending a similar .cert import process I threw caution to the wind. The exact steps from the Spanish blog didn’t quite work, which could be a result of my inability to read Spanish and/or Google Translate not being perfect, but the VMware KB article was spot on.
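
If you want to quickly confirm which certificate the vCenter is actually serving and whether it has expired, the short Python sketch below prints the validity dates. This is my own quick check, not a VMware tool; the hostname is a placeholder and it assumes the third-party cryptography package is installed.

    # Fetch the certificate vCenter presents on 443 and print its validity window.
    # Hostname is a placeholder; requires "pip install cryptography".
    import ssl
    from datetime import datetime
    from cryptography import x509

    VCENTER = "vcenter.example.com"  # placeholder

    pem = ssl.get_server_certificate((VCENTER, 443))  # fetch without validating
    cert = x509.load_pem_x509_certificate(pem.encode())

    print("Subject:   ", cert.subject.rfc4514_string())
    print("Not before:", cert.not_valid_before)
    print("Not after: ", cert.not_valid_after)
    if cert.not_valid_after < datetime.utcnow():
        print("Certificate has EXPIRED - renew it before registering vcOPs.")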

After importing the certificate manually and restarting services, all was well and I was able to complete the configuration of vcOPs. By the way, did you know that since vSphere 5.1, all licensed versions of vSphere now include the Foundation edition of vcOPs? More than 5 hosts in your environment and you’ve got enough scale to warrant leveraging this tool. For a limited time, VMware is letting Lewan perform a free vSphere Optimization Check including a 60 day trial of the Standard Edition, complete with the capacity management features, dynamic thresholds, and root cause analysis. Give us a call today to test drive Operations Management!

Thinking about a VDI initiative? Watch this.

Lewan Solutions Architect, Kenneth Fingerlos, wowed the crowd last month at the GPU Technology Conference (GTC) 2014 with his presentation on VDI, “Virtual is Better than Physical: Delivering a Delightful User Experience from a Virtual Desktop”.

GTC is the world’s biggest and most important GPU developer conference. Taking place in Silicon Valley, GTC offers unmatched opportunities to learn how to harness the latest GPU technology, along with face-to-face interaction with industry luminaries and NVIDIA experts.

Leveraging his industry leading expertise, Kenneth “delivered in spades,” as described in a review of his presentation for The Register:

The VDI talk was the kind of GTC session I love. It’s where a real-world expert talks about how a difficult task is actually accomplished. Not the theory, not how it should work on paper, but what it takes to actually move a project from Point “A” to Point “We’re done with this”.
Ken Fingerlos from Lewan Technology delivered in spades with his “Virtual is Better than Physical: Delivering a Delightful User Experience from a Virtual Desktop” GTC14 session. Delightful? Hmm…In my past lives, I’ve had to use some virtual PCs and my experience ranged from “absolutely unusable” to “omg I hate this”.
It’s easy to see that Fingerlos has been around the block when it comes to VDI. He has all the right credentials, ranging from VMware to Citrix to Microsoft. But more importantly, he’s been there and done it.

Read the complete review from theregister.co.uk

Kenneth’s GTC Presenter’s Bio

View the complete session and slide deck:

Export SPCollect Logs from VNX Unisphere GUI

When working with EMC support, it may become necessary to export and upload SPCollect logs from your array and send to the EMC support team. Below is an easy way to obtain the requested SPCollect information.

  • Login to the GUI as an Administrator > Navigate to System
  • On the Panel (left or right hand side) click > “Generate Diagnostic Files – SPA” and “Generate Diagnostic Files – SPB”


  • You will immediately see a pop-up message with “Success”


  • Wait about 5 minutes
  • Click “Get Diagnostic Files – SPA”
  • You will see a file named [SystemSerialNumber]_SPA_[Date]_[randombits]_data.zip
  • The file should be around 15-20 MB
  • If your file is smaller, you haven’t waited long enough for the correct file to be generated
  • Highlight the file and click “Transfer” to a destination you choose in the Transfer Manager window


  • Repeat the steps for SP B

Due to file size, most email systems will not allow the .zip files to be sent. Login to the EMC support site and attach the files to your specific case.

Kaspersky PURE 3.0 Stuck Updating

If Kaspersky PURE 3.0 is stuck updating (at around 22%) from version 13.0.2.558b to 13.0.2.558c and the log shows the download hung on “stpass.exe.ap” (16 MB), you are more than likely on a slow internet connection and your download is timing out.

To fix this issue, change your Update Source from the http default to ftp://ftp.kaspersky.com

This change could possibly work for other Kaspersky clients as well, since a variety of client updates are listed at ftp://ftp.kaspersky.com

Media Agent Networking

I get a lot of questions about the best way to configure networking for backup media agents or media servers in order to get the best throughput.    I thought a discussion of how the networking (and link aggregation) works would help shed some light.

Client to Media Agent:
In general we consider the media agents to be the ‘sink’ for data flows during backup from clients.  This data flow originates (typically) from many clients destined for a single media agent.   Environments with multiple media agents can be thought of as multiple single-agent configs.

The nature of this is that we have many flows from many sources destined for a single sink. If we want to utilize multiple network interfaces on the sink (media agent), it is important that the switch to which it is attached be able to distribute the data across those interfaces. By definition, then, we must be in a switch-assisted link aggregation scenario, meaning that the switch must be configured to use LACP or a similar protocol, and the server must be configured to use the same method of teaming.

Why can’t we use adaptive load balancing (ALB) or other non-switch-assisted methods? The issue is that the decision of which member of a link aggregation group a packet is transmitted over is made by the device transmitting the packet. In the scenario above, the bulk of the data is being transmitted from the switch to the media agent, so the switch must be configured to spread the traffic across multiple physical ports. ALB and other non-switch-assisted aggregation methods will not allow the switch to do this and will therefore result in the switch using only one member of the aggregation group to send data. The net result is that total throughput is restricted to that of a single link.

So, if you want to bond multiple 1GbE interfaces to support traffic from your clients to the media agent, the use of LACP or similar switch-assisted link aggregation is critical.

Media Agent to IP Storage:
Now, from the media agent to storage, we consider that most traffic will originate from the media agent and be destined for the storage. There is not much in the way of many-to-one or one-to-many relationships here; it’s all one-to-one. The first question is always, “will LACP or ALB help?” The answer is probably no. Why is that?

First, understand that the media agent is typically connected to a switch, and the storage is typically attached to the same or another switch. Therefore we have two hops to address: MA to switch and switch to storage.

ALB does a very nice job of spreading transmitted packets from the MA to the switch across multiple physical ports. Unfortunately, all of these packets are destined for the same IP and MAC address (the storage). So while the packets are received by the switch on multiple physical ports, they are all going to the same destination and thus leave the switch on the same port. If the MA is attached via 1GbE and the storage via 10GbE, this may be fine. If it’s 1GbE down to the storage, then the bandwidth will be limited to that.

But didn’t I just say in the client section that LACP (switch-assisted aggregation) would address this? Yes and no. LACP can spread traffic across multiple links even if it has the same destination, but only if it comes from multiple sources. The reason is that LACP uses an IP- or MAC-based hash algorithm to decide which member of an aggregation group a packet should be transmitted on. That means all packets originating from MAC address X and going to MAC address Y will always go down the same group member. The same is true for source IP X and destination IP Y. This means that while LACP may help balance traffic from multiple hosts going to the same storage, it can’t solve the problem of a single host going to a single storage target.

By the way, this is a big part of the reason we don’t see many iSCSI storage vendors using a single IP for their arrays. By giving the arrays multiple IPs, it becomes possible to spread the network traffic across multiple physical switch ports and network ports on the array. Combine that with multiple IPs on the media agent host and multipath I/O (MPIO) software, and now the host can talk to the array across all combinations of source and destination IPs (and thus physical ports) and fully utilize all the available bandwidth.
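
To make the hashing behavior concrete, here is a toy Python sketch. The XOR-of-octets hash is purely illustrative and is not any switch vendor's actual algorithm (real hashes use MACs, IPs, and sometimes L4 ports), but it shows why a single source/destination pair is pinned to one member link while multiple source IPs, as with MPIO over several host IPs, spread across the members.

    # Illustrative only: a toy IP-based link-aggregation hash.

    def addr_hash(ip: str) -> int:
        """XOR the four octets of a dotted-quad IPv4 address together."""
        value = 0
        for octet in ip.split("."):
            value ^= int(octet)
        return value

    def choose_member(src_ip: str, dst_ip: str, num_links: int) -> int:
        """Pick which member of the aggregation group carries this flow."""
        return (addr_hash(src_ip) ^ addr_hash(dst_ip)) % num_links

    LINKS = 2
    STORAGE = "10.0.0.50"

    # Single media agent IP to a single storage IP: every packet hashes to the
    # same member, so throughput is capped at one link.
    print("media agent ->", choose_member("10.0.0.10", STORAGE, LINKS))

    # Multiple source IPs (many clients, or MPIO sessions bound to several host
    # IPs) to the same storage: the flows land on different members.
    for src in ("10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14"):
        print(src, "->", choose_member(src, STORAGE, LINKS))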

MPIO works great for iSCSI block storage. What about CIFS (or NFS) based storage? Unfortunately, MPIO sits low in the storage stack and isn’t part of the network file (requester) stack used by CIFS and NFS, which means that MPIO can’t help. Worse, with the NFS and CIFS protocols the target storage is always defined by an IP address or DNS name, so having multiple IPs on the array in and of itself doesn’t help either.

So what can we do for CIFS (or NFS)? Well, if you create multiple share points (shares) on the storage and bind each to a separate IP address, you can create a situation where each share has isolated bandwidth. By accessing the shares in parallel, you can aggregate that bandwidth (between the switch and the storage). To aggregate between the host and switch, you must force traffic to originate from specific IPs or use LACP to spread the traffic across multiple host interfaces. You could simulate MPIO-type behavior by using routing tables to map a host IP to an array IP one-to-one. It can be done, but there is no ‘easy’ button.

So as we wrap this up, what do I recommend for media agent networking and IP storage?
On the front end – aggregate interfaces with LACP.
On the back end – use iSCSI and MPIO rather than CIFS/NFS, or use 10GbE if you want/need CIFS/NFS.