iSCSI Boot with ESXi 5.0 & UCS Blades

December 21, 2011

UPDATE: The issue was the NIC/HBA placement policy.  The customer had set a policy to have the HBAs first, then the iSCSI overlay NIC, then the remaining NICs.  When we moved the iSCSI NIC to the bottom of the list, the ESXi 5.0 installer worked just fine.  I’m not 100% sure why this fix works, but either way it works.
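
For reference, here is roughly what that change looks like in the UCS Manager CLI.  The service profile and vNIC names below are placeholders, and the exact command syntax can differ between UCSM versions, so treat this as a sketch rather than gospel:

    ! scope to the service profile that owns the vNICs (names are made up)
    UCS-A# scope org /
    UCS-A /org # scope service-profile ESX-Blade-1
    ! list the vNICs and their current desired order
    UCS-A /org/service-profile # show vnic
    ! push the iSCSI overlay vNIC to the end of the placement order
    UCS-A /org/service-profile # scope vnic iSCSI-Overlay
    UCS-A /org/service-profile/vnic # set order 6
    UCS-A /org/service-profile/vnic # commit-buffer

The same change can be made in the GUI under the service profile’s vNIC/vHBA Placement.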

So at a recent customer’s site I was trying to configure iSCSI booting of ESXi 5.0 on a UCS blade, a B230 M2.  To make a long story short, it doesn’t fully work and isn’t officially supported by Cisco.  In fact, NO blade models are supported by Cisco for ESXi 5.0 with iSCSI boot.  They claim a fix is on the way, and I will post an update when there is one.

Here is the exact issue, and my original thoughts, in case it helps anybody:

We got an error installing ESXi 5 to a NetApp LUN: “Expecting 2 bootbanks, found 0” at about 90% through the install. The blade is a B230 M2.

The LUN is seen in the BIOS as well as by the ESXi 5 installer.  I even verified the “Details” option, and all the information is correct.

Doing an Alt-F12 during the install and watching the logs more closely today, at ~90% it appears to unload a module that, judging by its name, is some sort of VMware Tools-type package.  As SOON as it does that, the installer claims that there is no IP address on the iSCSI NIC and begins to look for DHCP.  The issue is that during the configuration of the Service Profile and the iSCSI NIC, at no time did we choose DHCP; we chose static. (We even tried Pooled.)  Since there is no DHCP server in that subnet, it doesn’t pick up an address and thus loses connectivity to the LUN.
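
If anybody else wants to watch this happen, Alt-F1 drops you into a shell during the install (Alt-F12 just shows the logs).  These are roughly the commands I was poking at, from memory, so double-check them on your build:

    # list the VMkernel NICs and their IP configuration
    esxcfg-vmknic -l
    # confirm the iSCSI boot adapter is still present
    esxcli iscsi adapter list
    # follow the installer (weasel) log
    tail -f /var/log/weasel.log

Right around that module unload is where the static address drops off the iSCSI vmk interface.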

So we rebooted the blade after the error, and ESXi 5 actually loads with no errors.  The odd thing is that the root password we specified isn’t set; it’s blank, like ESXi 4.x used to be.
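
A couple of quick sanity checks from the ESXi shell after that “clean” boot (again from memory, so verify against your build):

    # check that both bootbanks actually got populated
    ls /bootbank /altbootbank
    # confirm which build actually booted
    esxcli system version get
    # set the root password the installer never applied
    passwd root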

So an interesting question is: what’s happening during that last 10% of the ESXi 5 installation?  Since it boots cleanly, it almost seems like the installer does a sort of “sysprep” of the OS at the end, i.e. applies all the configuration details.  If that’s the only thing being skipped, then it might technically be OK.  However, I don’t get the “warm and fuzzies”.  My concern is that, maybe not today but down the road, some module that wasn’t loaded correctly will come back to bite the client.

Also, what is happening in that last 10% that’s different from ESXi 4.x?  We were able to load 4.1 just fine with no errors.

Again, we called Cisco TAC and were told that ESXi 5 iSCSI booting isn’t supported on any blade.  They do support 4.1, as well as Windows and a variety of Linux distros.

Good questions asked during UCS Design Workshop

June 1, 2011

So I’ve recently started working for a large technology company on the Datacenter Services team in their Professional Services org.  It’s been quite an experience so far, and I’m doing my first solo Cisco UCS Design Workshop, coupled with an installation as well as some basic teaching.

I was asked some good questions and figured that others may be asked the same things, or may have the same questions themselves, so I can share and maybe help somebody else.  I will try to keep this page updated with some of the more interesting questions that aren’t easily answered by Cisco’s documentation.

Q1. According to Cisco’s documents, when you’re using VM-FEX or Pass-Through Switching (PTS) there is a limit of 54 VMs per server when those hosts have 2 HBAs.  What is the real reason for the low limit?  With today’s high-powered servers, 54 VMs isn’t an unreachable number.

A1. The 54-VM limit is based on VN-Tag address space limitations in the UCS 6100 ASICs.  Future UCS hardware will support more.  PTS may not be the right fit for high-density virtual deployments, especially VDI.  Here is a link to a great blog post on it: http://vblog.wwtlab.com/2011/03/01/cisco-ucs-pts-vs-1000v/

Q2. What is the minimum number of power supplies needed for a UCS Chassis?

A2. The answer is 2, even for a fully populated chassis.  In that case you are running in a non-redundant mode: if one of the power supplies fails, the UCS system will continue to power the fans and the IO Modules, but it will begin to power off the blades in reverse numerical order until it reaches a supportable power load.
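
Related to this, the chassis redundancy mode is a global power policy set per org.  Roughly, in the UCSM CLI (syntax from memory; verify against your UCSM version):

    ! set the global power/PSU redundancy policy for the org
    UCS-A# scope org /
    UCS-A /org # scope psu-policy
    ! the options are along the lines of non-redund, n-plus-1, and grid
    UCS-A /org/psu-policy # set redundancy n-plus-1
    UCS-A /org/psu-policy # commit-buffer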

Q3. Can you change the number of uplinks from the IO Modules to the Fabric Interconnects once the system is online?

A3. Changing the number of cables from the FI to the chassis requires a re-pinning of the server links across the new number of uplinks.  The pinning is a hard-coded static mapping based on the number of links in use.  This re-pinning is temporarily disruptive, first to the A-fabric path and then to the B-fabric path on the chassis.  NIC teaming / SAN multipathing will handle the failover/failback if they’re in place.
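
For reference, the link count lives in the chassis discovery policy, and the chassis has to be re-acknowledged for the re-pinning to actually happen.  Something like this in the UCSM CLI (a sketch from memory; chassis 1 is just an example):

    ! bump the chassis discovery policy to the new link count
    UCS-A# scope org /
    UCS-A /org # scope chassis-disc-policy
    UCS-A /org/chassis-disc-policy # set action 4-link
    UCS-A /org/chassis-disc-policy # commit-buffer
    ! re-acknowledge the chassis so the server links get re-pinned
    UCS-A# acknowledge chassis 1
    UCS-A# commit-buffer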

Q4. If the uplinks from the Fabric Interconnects are connected to Nexus switches and we don’t use vPC on them, do we lose the full bandwidth because the switches end up in an active/passive mode?  Can you get the full bandwidth using VMware and no vPC?

A4. Even without vPC, the UCS Fabric Interconnects will utilize the bandwidth of all of the uplinks; there is no active/passive.  However, I would still recommend configuring VMware for active/active use, but ensure you are using MAC-based or virtual-port-based pinning rather than source/destination IP hash.  (IP hash assumes an EtherChannel from the host to the upstream switch, which a UCS blade doesn’t have.)
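
On the ESXi 5 side, that teaming setting can be made from the shell with esxcli; here’s the idea for a standard vSwitch (vSwitch0 is just an example):

    # set virtual-port-ID based load balancing on the vSwitch
    esxcli network vswitch standard policy failover set \
        --vswitch-name=vSwitch0 --load-balancing=portid
    # confirm the active policy
    esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0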

Q5. So are there any advantages to doing vPC other than the simplified management?

A5. Two: faster failover, and the potential for a single server to utilize more than 10 Gbps of uplink bandwidth thanks to port-channel load balancing.
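
For anyone who hasn’t set it up before, the Nexus side of a vPC toward the Fabric Interconnects boils down to something like this (the domain ID, keepalive addresses, and interface numbers below are made up for illustration):

    ! enable the features and define the vPC domain (on each Nexus peer)
    feature vpc
    feature lacp
    vpc domain 10
      peer-keepalive destination 10.0.0.2 source 10.0.0.1
    ! the peer-link between the two Nexus switches
    interface port-channel1
      switchport mode trunk
      vpc peer-link
    ! the port-channel facing a Fabric Interconnect
    interface port-channel101
      switchport mode trunk
      vpc 101
    ! member link(s) to the FI
    interface Ethernet1/1
      switchport mode trunk
      channel-group 101 mode active

On the UCS side you’d bundle the matching uplinks into a port-channel under LAN > LAN Cloud in UCSM so both ends agree.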