UCS Quick Installation Guide
- Base Implementation UCS
- Initial configuration (on site)
- GUI (remote) connectivity
- Setting Up KVM Management IPs for Blades
- Port configuration
- Chassis discovery
- Installing Servers
- Creating VLANs
- Create Organizations
- Create policies and pools in organization
This document gives a short overview of the UCS implementation at Telkom SA Bellville. It is not a runbook, but rather a reference to the parameters that were used for the quickstart implementation. In conjunction with the official UCS documentation, it should be possible to reproduce the installation, which will be helpful when the UCS goes into production.
For physical cabling, please refer to the Logical Site Prep document that will be distributed with this document.
When the UCS is powered on for the first time, it must be configured directly over a serial connection to provide the initial parameters. This is done via the typical Cisco console port (found on the fabric interconnects (FI)), which requires a special cable (supplied with the UCS). Use any serial terminal program to connect using the following parameters:
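The original parameter table is not reproduced here; the values below are the standard Cisco console settings, which also apply to the fabric interconnects:

```
Speed        : 9600 baud
Data bits    : 8
Parity       : none
Stop bits    : 1
Flow control : none
```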
UCS configuration can be done on any of the fabric interconnects first, which will then automatically become fabric interconnect A (FI A). The following configuration gives an overview of the parameters needed for the first interconnect:
| Parameter | Value |
|---|---|
| IP for FI 1 | 10.145.3.8/25 |
| IP for FI 2 | 10.145.3.9/25 |
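As a reference, a first-boot setup dialog for FI A might look like the sketch below. The system name and cluster (virtual) IP shown here are hypothetical placeholders, and the netmask and gateway are assumed from the /25 subnet and gateway used elsewhere in this document; only the Mgmt0 address 10.145.3.8 comes from the table above:

```
Enter the configuration method. (console/gui) ? console
Enter the setup mode; setup newly or restore from backup. (setup/restore) ? setup
You have chosen to setup a new Fabric interconnect. Continue? (y/n): y
Enter the password for "admin":
Confirm the password for "admin":
Is this Fabric interconnect part of a cluster (select 'no' for standalone)? (yes/no) [n]: yes
Enter the switch fabric (A/B) []: A
Enter the system name: TELKOM-UCS
Physical Switch Mgmt0 IPv4 address : 10.145.3.8
Physical Switch Mgmt0 IPv4 netmask : 255.255.255.128
IPv4 address of the default gateway : 10.145.3.1
Cluster IPv4 address : 10.145.3.10
Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes
```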
Connect to the other node as with the first. The second fabric interconnect (FI B) requires less information, as it connects to FI A and synchronizes most settings.
Now that the base configuration has been done, it is possible to connect over the management network. This is done via a browser (Internet Explorer or Firefox). Please ensure that the newest version of Java is installed and that the browser's security settings allow the application to run.
Type in the Virtual IP configured in the previous step into the browser:
Click the Launch button to start the UCS Manager Java application. The first time this is done for a specific firmware level, the application is downloaded onto the local management machine, which can take up to 2 minutes. A login request will appear. For the purposes of this document, the admin user will be used for full rights; please note that in a production environment you should designate users with specific rights.
The UCS requires a management address for each blade to allow KVM sessions and virtual media connectivity. This is done by assigning a pool of IP addresses that the UCS assigns to each blade.
| Chassis Nr | Start IP/Subnet | End IP (8 per chassis) | Gateway |
|---|---|---|---|
| 1 and 2 | 10.145.3.11/25 | 10.145.3.26 | 10.145.3.1 |
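As a sanity check on the pool sizing above, the following Python sketch enumerates a contiguous KVM management pool at 8 addresses per chassis (the function name and layout are illustrative, not part of UCS Manager):

```python
import ipaddress

def kvm_pool(start_ip: str, chassis_count: int, per_chassis: int = 8):
    """Enumerate a contiguous KVM management IP pool,
    8 addresses per chassis (one per blade slot)."""
    start = ipaddress.ip_address(start_ip)
    return [str(start + i) for i in range(chassis_count * per_chassis)]

# Two chassis starting at 10.145.3.11 -> 16 addresses ending at 10.145.3.26
pool = kvm_pool("10.145.3.11", chassis_count=2)
print(pool[0], pool[-1], len(pool))  # 10.145.3.11 10.145.3.26 16
```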
The pool of admin addresses is set up in the UCS manager GUI:
A freshly installed UCS will not find its chassis automatically. It is necessary to define which ports on the FIs are connected to the chassis and which are used to connect to the network.
This is a simple process that requires the following steps:
The UCS comes with fixed ports on the FI and expansion ports for the GEM slot. Only the fixed ports can be used to connect Server Ports.
Initially, all ports will be listed as “Unconfigured Ports”; simply drag and drop the correct ports into “Server Ports”.
From the Logical site prep guide, it can be seen that the following ports are connected to the chassis:
Note: Keep in mind that all configuration done on FI A must also be done on FI B. Though not mandatory, it is highly recommended that identical configurations are used for both FIs.
In a similar fashion drag and drop the ports connected to the upstream network by dragging them from “Unconfigured ports” to “Uplink Ethernet Ports”.
| Port Channel Name | Fabric Interconnect/UCS Ports | Northbound Switch Name/Port |
|---|---|---|
| Port Channel 39 | FI A/1/1 | Switch1/?? |
| Port Channel 39 | FI A/1/2 | Switch2/?? |
| Port Channel 40 | FI B/1/1 | Switch1/?? |
| Port Channel 40 | FI B/1/2 | Switch2/?? |
In this case, ports 3 and 4 on each FI have already been configured for future expansion, even though no physical cabling exists.
For this implementation the UCS is connected via VPC to two Nexus 7K in the following fashion:
From the UCS perspective, the ports must be configured in a simple port channel as seen in the table above:
Keep in mind that this must be done on both sides with a different port channel (in this case, 39 for FI A and 40 for FI B).
If any configuration changes are made to a port channel, it may be necessary to re-enable the port channel from UCS Manager:
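For reference, the matching upstream configuration on each Nexus 7000 might look like the sketch below. The physical interface numbers are hypothetical (the port table leaves the northbound ports as “??”); UCS uplink port channels use LACP, hence “mode active”:

```
! Sketch for one Nexus 7000 of the vPC pair -- interface numbers are assumptions
interface port-channel39
  description vPC to UCS FI A
  switchport mode trunk
  vpc 39

interface Ethernet1/10
  description Uplink from FI A port 1/1
  switchport mode trunk
  channel-group 39 mode active
```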
The SAN is configured just as easily. The FC ports can be found on the expansion module. If VSAN technology is used, it is necessary to first create a VSAN that can be assigned to each FC port. Though it is possible to create different VSANs for different ports, it is best practice to configure the same VSAN for all ports on one specific UCS (different VSANs per port should only be used with “FC pinning”, which splits FC traffic for high-security environments).
Click on each FC port and select the correct VSAN:
Don't forget to repeat the same steps for all FC ports. When new ports are connected to the SAN switch, it may be necessary to re-enable the FC ports that have been added.
As soon as the server ports are configured, the UCS automatically discovers the connected chassis. It will, however, only configure the first server link per IOM (there are two IO modules in each chassis), limiting the bandwidth to the minimum configuration of 20 Gbit per chassis. To make use of the additional connections (in this case there are two connections per IOM, i.e. four connections in total for 40 Gbit per chassis), it is necessary to “acknowledge” each chassis separately.
NOTE: Please keep in mind that “acknowledging” a chassis can be a disruptive process for all blades in that specific chassis.
Repeat the process for every chassis separately. Check that all connections have been acknowledged by navigating to the IOM modules of each chassis:
All VLANs that pass through the FIs must be explicitly allowed on each interface. To do this, VLANs must be defined in the LAN tab and given a name. Note that when assigning VLANs to interfaces, only the given name appears in the selection, so it is a good idea to include the VLAN number in the name. Once a VLAN is defined, it is NOT possible to rename it; it must be deleted and recreated.
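Since only the name is shown when assigning VLANs, a naming convention that embeds the VLAN ID avoids guesswork later. A minimal sketch (the format is a suggestion, not a UCS requirement):

```python
def vlan_name(vlan_id: int, label: str) -> str:
    """Embed the VLAN ID in the name, since UCS Manager lists
    only names (not IDs) when assigning VLANs to interfaces."""
    return f"VLAN{vlan_id:04d}_{label}"

print(vlan_name(145, "MGMT"))  # VLAN0145_MGMT
```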
The UCS has the ability to split up resources into different tenancies. For this to work efficiently, it is necessary to create sub-organizations. By default, the base level is always the root level, on which there is full administrative access. Though it is possible to work on this level only, it is best practice to create sub-organizations in order to restrict access at the organizational level.
For each organization it is possible to create separate policies that can only be used within that organization or sub organizations within. In this document we will look at the most important ones used in this implementation. It may however be well worth looking at the other ones, as these generally can save quite a bit of repetitive configuration.
Boot policies determine in which way a service profile accesses its boot devices. This is particularly important in a SAN boot environment, where it is necessary to specify SAN boot targets. Though this configuration could be done for each service profile individually, it is highly recommended that it be put in a policy with the correct settings.
The boot policy requires a unique name and description. Add local boot devices if needed (CD-ROM). After that, add the SAN boot HBAs. In this configuration each blade has 2 HBAs, and it is possible to specify a primary and a secondary boot HBA.
Each HBA can have two boot targets, which must be zoned accordingly to the correct LUN.
Repeat the process above for secondary boot target and do the same for the second HBA (vhba1). If everything is set up correctly the result should look like this:
Each object in a virtualized environment makes use of a unique identifier called the UUID. As the name states, this value must be unique within the domain of servers. The UCS Manager can draw such UUIDs from predefined pools and will ensure uniqueness within the UCS domain.
It is possible to create such a pool globally for the whole UCS domain, or specifically for each suborganization.
UUIDs are split up into a prefix and a suffix. While the prefix remains the same, it is possible to create multiple suffix blocks within one pool.
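The prefix/suffix split can be illustrated with a short sketch. The prefix value below is a hypothetical example; UCS Manager displays each suffix as a 4+12 hex-digit block appended to the fixed prefix:

```python
def uuid_suffix_block(prefix: str, first: int, size: int):
    """Expand a UCS-style UUID suffix block: a fixed prefix plus
    sequential 16-hex-digit suffixes shown as XXXX-XXXXXXXXXXXX."""
    out = []
    for i in range(size):
        v = first + i
        out.append(f"{prefix}-{(v >> 48) & 0xFFFF:04X}-{v & 0xFFFFFFFFFFFF:012X}")
    return out

# Hypothetical prefix; a block of 4 suffixes starting at 1
block = uuid_suffix_block("1B4E28BA-2FA1-11D2", 1, 4)
print(block[0])  # 1B4E28BA-2FA1-11D2-0000-000000000001
```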
With the service profile technology, the physical blade used to run a specific profile can be selected manually or grouped together in a pool. The latter allows the service profile to select available resources as needed, irrespective of the physical slots currently populated in the chassis, allowing for greater flexibility and transparency. Since a service profile automatically selects the next available blade in a pool, it is best practice to pool blades with similar hardware configurations. As with UUID pools, server pools can be created globally for the whole UCS or specifically for a sub-organization.
For this implementation a server pool with 24 GB ram and Qlogic CNAs is formed across all blades in the chassis.
Simply add all blades to the pool that may be used by the service profiles assigned to this specific pool.
There are other pools and policies that can be used to setup default parameters.
A service profile is a collection of data used to set the unique identifiers of a server and the operational parameters used to run it. For the UCS, a service profile is synonymous with a traditional server, making use of the resources, pools, and policies defined earlier in this document. The UCS has two different wizards to create a service profile; the expert wizard offers more options and allows for advanced features such as multiple VLANs. In this example only the expert wizard will be used.
Each Service Profile needs a unique identifier, which is also the title of the actual profile. Using the hostname of the actual server makes it easier to administer Service profiles later.
Additionally, a UUID needs to be set for this specific profile. Selecting a pool will automate this process, though it is possible to assign a UUID manually.
Even if booting from SAN, if physical local disks are present, it is necessary to configure the local storage. This can be done with a policy or simply for each Service Profile.
Additionally if HBAs are used a WWNN must be set for the adapter.
The Menlo CNA has the ability to create up to two HBAs. Click the “Add” button to add an HBA to the list. Each of these HBAs needs a WWPN and a few specific settings. The following table specifies the settings for both HBAs.
Similarly, up to two network adapters are supported on the Menlo CNA. In order to use multiple VLANs via a trunk, it is necessary to use the “expert” LAN configuration.
Click the Add button to add a vNIC.
Each vNIC needs specific settings and a MAC address. As with the HBAs, it is possible to assign these MACs via a pool or manually for each adapter.
| vNIC | Fabric | Failover | VLAN Trunking | Adapter Policy |
|---|---|---|---|---|
Additionally all VLANs that will be used on this interface (even via virtual machines) must be allowed specifically for each adapter. Below is a configuration of vnic0:
For this dialog, just leave the default setting “Let System Perform Placement”.
Each Service Profile needs a boot order. This was already configured in the Boot policy section of this document, and therefore only needs to be selected.
This dialog allows you to manually specify which physical blade is associated with this service profile, or to utilize a pool.
Click the “Finish” button after this dialog. Once the service profile has been created, click on it and navigate to the General tab as shown below.
Initially, the overall status will be “config”, meaning that the service profile is being applied to the physical blade. If everything was configured correctly, the overall status should be “OK”. If the overall status is “config-error”, something was configured incorrectly, or a resource was assigned that does not physically exist (for example, assigning 3 network cards to a Menlo CNA).
Once the Service Profile is up and running it is necessary to connect to it via the KVM applet found on the General Tab of the Service Profile.
The KVM applet allows you to connect local drives of the management station and even locally stored ISO files. Select Tools and Launch Virtual Media.
To mount an ISO file, select Add Image and browse for the correct file locally.
Finally, enable the drive by ticking the Mapped checkbox next to the correct image.
Once mounted, boot and install the server like any other, depending on the boot order specified in the boot policy.
Note: Do NOT close this window, as the virtual media session will close and the connection will break off.
To install the Virtual Ethernet Module (VEM) of the Nexus 1000V onto the ESX host, copy the newest file to any location on the ESX host and execute the following command in the directory to which the VEM file was copied. Though a reboot is not necessary, it is highly recommended.
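On a classic ESX 4.x host, the installation might look like the sketch below. The exact VIB filename depends on the downloaded VEM release, so the name here is a placeholder:

```
# Copy the VEM VIB to the host first, e.g. to /tmp, then from that directory:
cd /tmp
esxupdate -b ./cross_cisco-vem-v100-<version>.vib update
```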