VxRail 3.5 Deployment to VLAN without DHCP and changing the Initial IP for VxRail Setup

VxRail on a specific VLAN:
If you have to use a tagged management VLAN for your VxRail cluster, you must customize the management VLAN directly on each appliance, via the ESXi command line interface, before VxRail is initially configured. Changes are required for two different portgroups on all ESXi hosts: the ESXi “Management Network” portgroup and the initial VxRail management portgroup, called “VM Network”. During configuration, the “Management Network” is renamed to “MARVIN Management” and a new “Management Network” is created using the details provided in setup through manual entry or via a JSON file; the second portgroup is renamed “vCenter Server Network”.

Log in to each of the four nodes via the console interface (DCUI):
Press <F2> and log in with root and Passw0rd!
Go to “Troubleshooting Options” and press <Enter>
Go to “Enable ESXi Shell” and press <Enter> to enable
Press <ESC> to save
Use <ALT-F1> to access the Shell
Log in with root and Passw0rd!
Use the following ESXi CLI commands:

esxcli network vswitch standard portgroup set -p "Management Network" -v <Your_VLAN_ID>
esxcli network vswitch standard portgroup set -p "VM Network" -v <Your_VLAN_ID>
/etc/init.d/loudmouth restart   # loudmouth is the Zeroconf discovery service; restarting it re-advertises the node on the new VLAN

Verify the VLAN ID was set with this command:

esxcli network vswitch standard portgroup list
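
The list output covers every portgroup on the vSwitch; to eyeball just the two you changed, you can filter it (grep is available in the ESXi shell, and both portgroup names contain “Network”):

esxcli network vswitch standard portgroup list | grep -i "network"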

Type ‘exit’ to log out as root and press <ALT-F2> to get back to the DCUI

If you do not have DHCP on the VLAN, you can go into the Management Network settings and set static IPs for each node… I’m pretty sure we’ve all done this a few times before.
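
If you’d rather handle that from the shell while you’re still in it, here’s a minimal sketch, assuming vmk0 is the management VMkernel interface (the DCUI menus accomplish exactly the same thing):

esxcli network ip interface ipv4 set -i vmk0 -t static -I <node_IP> -N <netmask>
esxcli network ip route ipv4 add -n default -g <gateway_IP>
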
Change initial IP for VxRail Setup:
From a vSphere Thick Client connected to Node1, expand the host to show the VMs and open the console of VxRail Manager.
Log in with root and Passw0rd!
Open Xterm and use the following commands:

systemctl status vmware-marvin   # check the VxRail Manager (marvin) service
systemctl stop vmware-marvin
ip addr add <new_IP>/<prefix> brd + dev eth0   # add the new address; "brd +" derives the broadcast
ip addr del 192.168.10.200/24 brd + dev eth0   # remove the factory default address
ip route add default via <your_gw_ip>
ip link set eth0 down
ip link set eth0 up
ip a   # verify the new address is active
/opt/vmware/share/vami/vami_set_network eth0 STATICV4 <new_IP> <new_netmask> <new_gateway>
systemctl restart vmware-marvin
systemctl restart vmware-loudmouth
systemctl restart network   (or wicked, the network management daemon that backs the network service on this appliance)

I initially tried just running the “/opt/vmware/share/vami/…” command, as that was part of the original VMware EVO:RAIL Setup Guide; however, I was receiving errors from the wicked service that the IP could not be changed. The steps above did the trick for me.
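
For repeat builds, the same working sequence can be collected into a small script. This is just a sketch with placeholder values (NEW_IP, NEW_PREFIX, NEW_MASK, and NEW_GW are stand-ins for your environment), not an official VxRail tool:

#!/bin/sh
# Sketch: re-IP the VxRail Manager VM; replace the values below for your environment
NEW_IP=10.10.10.200       # new VxRail Manager address
NEW_PREFIX=24             # prefix length for the new subnet
NEW_MASK=255.255.255.0    # same mask in dotted form, for vami_set_network
NEW_GW=10.10.10.1         # new default gateway

systemctl stop vmware-marvin
ip addr add ${NEW_IP}/${NEW_PREFIX} brd + dev eth0
ip addr del 192.168.10.200/24 brd + dev eth0
ip route add default via ${NEW_GW}
ip link set eth0 down
ip link set eth0 up
/opt/vmware/share/vami/vami_set_network eth0 STATICV4 ${NEW_IP} ${NEW_MASK} ${NEW_GW}
systemctl restart vmware-marvin vmware-loudmouth
systemctl restart network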

Related links:

  • Using ESXi Shell
  • Configuring ESXi Management Network
  • EVO:RAIL Setup Guide

VxRail and Data Domain Virtual Edition

VxRail has taken off since being announced, and a great new addition to the Marketplace will be Data Domain Virtual Edition, or DDVE; as of this post I haven’t heard of an availability date for the Marketplace. DDVE is, however, available for download from EMC. The download is a .zip that extracts to a folder containing an .ova for installation.

The addition of DDVE rounds out the package into a full operating solution for the ROBO and small-business space by complementing the included VMware Data Protection solution, which is powered by EMC Avamar. DDVE includes:

  • DD Boost, which can increase backup speed by up to 50%
  • Inline encryption for data at rest
  • Data Domain Replicator with up to 99% bandwidth reduction, for those replicating backups to another location
  • Data Domain Management Center for a single management interface for DDVE and DD systems

VxRail stems from VMware’s EVO:RAIL and utilizes VMware vSphere 6 and Virtual SAN 6.1 in a 2U appliance that houses four nodes and their associated drives. Various models and specs can be found on the VxRail page at VCE.

Testing was done with a single VxRail 120:

  • Processor Cores per node: 12
  • Processors per node: 2 Intel® Xeon® Processor E5-2620 v3 2.4 GHz
  • RAM per node: 192 GB (12 x 16 GB)
  • Caching SSD per node: 400 GB
  • Storage (Raw) per node: 3.6 TB
  • Network Connection per node: 2 x 10 GbE SFP+

VxRail utilizes several interfaces to accomplish tasks. VxRail Manager gives you a Dashboard overview and allows deployment of VMs from ISO in three sizes: Small, Medium, and Large. VxRail Manager Extension allows you to view the physical platform, dump logs, view the most recent Community Forum posts, set up Support, and access the Marketplace to add additional components like CloudArray and, eventually, DDVE. In order to deploy an OVA/OVF you’ll have to access vCenter via the IP assigned during setup, and that is where we’ll deploy DDVE from.

Reading through the ‘Installation and Administration Guide’ and the ‘DDVE — Best Practices’ guide will get you acquainted with requirements and help plan your deployment. Until your license is applied you’ll be limited to 500 GB, regardless of the size of disks you deploy, and that should be adequate for most testing purposes. One of the recommended deployment settings from both guides is to use thick provisioning; ‘Thick Provision Lazy Zeroed’ is recommended for faster deployment. As VxRail uses VMware Virtual SAN for storage, we are not given the option of thick during an OVA/OVF deployment through vSphere; however, there is a way to get the same result. We can create a new Storage Policy that equates to Thick Provision Lazy Zeroed, and here’s how:

  • From the vSphere Web Client Home page, select ‘VM Storage Policies’
  • Select the ‘Virtual SAN Default Storage Policy’ and then click the ‘Clone a VM Storage Policy’ button
  • We’ll make one change from the Default policy, setting the Object Space Reservation to 100%. The default is 0%, effectively Thin Provisioning, so changing this to 100% will give us a Thick Disk
  • Save the Policy under your preferred name and that’s it.

Now we can deploy the OVA for DDVE. The deployment will provision the default two disks as thick, with Disk 1 at 250 GB (the OS disk) and Disk 2 at 10 GB (the cache disk). You can leave the Storage Policy at the default for this part, or select the new Thick Policy that we created.

Once deployed, and before starting the VM, we need to add the Storage Disk(s). This is where we want to make sure the Thick Policy is applied, per the recommendation from the ‘Installation and Administration Guide’ as well as the ‘DDVE — Best Practices’ guide.

Select the DDVE VM in the vSphere Web Client and edit the settings to add the Storage Disk. Here I’ve added a 1 TB disk and chosen the Thick Policy that we created.

After adding the disk, our VM summary should show the new 1 TB disk with the Thick Policy applied.

Now that we have our DDVE configured, we can start the VM and go through changing the default password, setting up networking, and adding our license. Here’s an example of the DAT test, run before creating the file system to allow for Read and Write tests, showing VxRail would support up to a 16 TiB DDVE config!

[Screenshot: DDVE DAT test results, April 18, 2016]

Hope you enjoyed this bit of info and it wasn’t TL;DR.

For a follow-up I plan to cover upsizing the DDVE to an 8 TB config, so stay tuned!

Here are a few links for more info on Data Domain Virtual Edition and VxRail:

*Saw this via Twitter today:

virtuallyGhetto, run by William Lam of VMware (@lamw), posted that the vSphere Web Client deploys all OVA/OVFs as ‘thick’.

Thanks to EMC E2E Validation Lab Team, VxRail Engineering, DDVE Team, and Jase McCarty (@jasemccarty) for answering questions, helping with setup, and putting up with me in general!