
Deploy and configure HCX for VMware Cloud on AWS - Part 2

September 4, 2018

In Part 1 we deployed HCX both in the cloud and on-premises and paired the two sites. In Part 2 we will deploy and configure the interconnect components. I already introduced these components in Part 1, but let's do a quick recap here as well.

 

1. HCX Interconnect Service: Provides resilient access over the Internet and private lines to the target site while providing strong encryption, traffic engineering, and datacenter extension. This service simplifies secure pairing of sites and management of HCX components.

 

2. WAN Optimization Service (optional): Improves performance over private lines or Internet paths by applying WAN optimization techniques such as data de-duplication and line conditioning, bringing performance closer to that of a LAN environment.
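As a minimal illustration of the de-duplication idea (a general technique, not HCX's actual implementation), here is a sketch that replaces repeated chunks in a transfer stream with short references, so each unique chunk crosses the wire only once:

```python
import hashlib

def dedupe(chunks):
    """Send each unique chunk once; repeats become short references."""
    seen = set()
    wire = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in seen:
            wire.append(("ref", digest))   # far side already has it: tiny reference
        else:
            seen.add(digest)
            wire.append(("data", chunk))   # first occurrence: full chunk
    return wire

# A migration stream with heavy repetition (e.g. identical OS blocks across VMs)
stream = [b"os-block"] * 3 + [b"app-data"]
wire = dedupe(stream)
sent = sum(len(payload) for kind, payload in wire if kind == "data")
print(sent, "bytes of chunk data instead of", sum(len(c) for c in stream))
```

The more identical blocks the VMs share, the bigger the saving, which is why this helps most with replication-heavy bulk migrations.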

 

3. Network Extension Service: A high-throughput network extension service with integrated Proximity Routing that unlocks seamless mobility and simple disaster recovery plans across sites.

 

We will also gain a better understanding of these components once we deploy them, so let's do it:

 

1) To deploy these services, log in to the on-premises vSphere Web Client, open the HCX plugin, go to the Interconnect option, and select HCX Components.

 

2) Click Install HCX Components.

 

3) Select the components to be installed.

 

The HCX Interconnect service is required for both Bulk Migration (replication-based migration) and vMotion (live migration).

 

WAN Optimization deploys an extra appliance that enables the compression and de-duplication capabilities.

 

The Network Extension service is used for the following:

 

  • Use extension with migration to keep virtual machine IP and MAC addresses during migration.

  • Extend VLANs from VMware’s vSphere Distributed Switch.

  • Extend VXLANs (Requires NSX integration in the HCX Appliance Management interface).

  • Extend Cisco’s Nexus 1000v networks.

 

Click Next, which opens a form to be filled in for the IX (Interconnect) appliance:

 

  1. Network: Select a distributed port group. The interface connected to the selected network is used for management of the appliance, for HCX internal communications, and for the migration protocols. Selecting the ESX management network is preferred.

  2. Cluster/Host: Select a compute resource on which to deploy the service VM. Ensure that the appliance is not resource-constrained, for maximum migration performance.

  3. Datastore: Use a flash/high-performance tier datastore for maximum migration performance. HCX Interconnect disks are 1.5 GB.

  4. IP Address/Prefix Length: Provide an available IP address and prefix length for the selected network.

  5. Default Gateway: The network gateway IP address for the specified network.

  6. DNS: Provide the local DNS server IP address.

  7. vMotion Network: Select the vMotion distributed port group. If the management network defined in step 1 is also used for vMotion, leave this blank.

  8. vMotion IP Address/Prefix Length: Provide an available IP address and prefix length in the range of the selected vMotion network. Skip this if step 7 was left blank.
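Before filling in steps 4 and 5, it is worth sanity-checking that the appliance IP and the gateway actually sit in the same subnet. This quick check with Python's standard ipaddress module is illustrative only (it is not part of HCX), and the sample addresses are placeholders:

```python
import ipaddress

def check_ix_network(ip_with_prefix, gateway):
    """Confirm the appliance IP (step 4) and gateway (step 5) share one subnet."""
    iface = ipaddress.ip_interface(ip_with_prefix)  # e.g. "192.168.10.50/24"
    gw = ipaddress.ip_address(gateway)
    if gw == iface.ip:
        raise ValueError("appliance IP must differ from the gateway")
    if gw not in iface.network:
        raise ValueError(f"gateway {gw} is outside {iface.network}")
    return iface.network

net = check_ix_network("192.168.10.50/24", "192.168.10.1")
print(net)  # 192.168.10.0/24
```

Catching a typo here is much cheaper than redeploying the appliance after the tunnel fails to come up.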

Click Next.

 

Configure this setting only if you need to restrict the bandwidth available to HCX.

 

Click Next.

 

 

Deploying the Network Extension Service appliance allows networks to be extended in the vSphere Web Client. The remote HCX-NET-EXT appliance is created automatically whenever a local appliance is deployed; the HCX-NET-EXT service appliance is always deployed as a pair.

 

  1. Compute: Select a resource on which to deploy the service VM. Consider the deployment-option implications when selecting the resource; see the preceding Deployment Options section for details.

  2. Datastore: Select a datastore for deploying the extension appliance. Network Extension appliance disks are about 1.5 GB total.

  3. Network: Select a distributed port group. The interface connected to the selected network is used for management of the appliance and for HCX internal communications with the HCX Manager.

  4. VM Hostname : Specify a friendly name for the HCX Network Extension VM.

  5. IP Address/Prefix Length: Provide an available IP address and prefix length (e.g. 255.255.255.128 = PL 25) for the selected network.

  6. Default Gateway : The network gateway IP address for the specified network.

  7. Passwords: Set the admin and root passwords. Click Next to configure the other selected services, or to proceed to the Ready to Configure screen.
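The netmask-to-prefix-length conversion mentioned in step 5 (255.255.255.128 = PL 25) can be verified with Python's standard ipaddress module, handy if you only have the dotted netmask from your network team:

```python
import ipaddress

def netmask_to_prefix(netmask):
    """Convert a dotted netmask to its prefix length, e.g. 255.255.255.128 -> 25."""
    return ipaddress.ip_network(f"0.0.0.0/{netmask}").prefixlen

print(netmask_to_prefix("255.255.255.128"))  # 25
print(netmask_to_prefix("255.255.255.0"))    # 24
```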

 

 

Click Finish, and we can monitor the tasks on both the cloud side and the on-prem side. You will notice a few peer appliances being deployed on both sides.

 

It will take some time for these components to show up in the HCX Components section of the Interconnect tab.

 

Once everything is up and running, the tunnel status will show as up for the Migration service and the Network Extension service individually.

 

 

 

 

Cool! Now let's try some migrations :)

 

Click the Migration tab and select the option Migrate Virtual Machine.

 

This opens the Migrate Virtual Machine to Remote Site window. By default this is an on-prem-to-cloud migration; to perform a cloud-to-on-prem migration, select the first checkbox, "Reverse Migration".

 

We can use the default option to apply the same destination settings to all the VMs, or select them on an individual VM basis.

 

 

Usually on the cloud side we don't have much access to the resources, so select the folder "Workloads", the resource pool "Compute-Resource-pool", and the datastore "WorkloadDatastore".

 

Remember to set "Select Migration Type" to vMotion or Bulk Migration. vMotion performs a cold or hot migration of the VM, whereas Bulk Migration:

 

- Creates a copy of the VM at the destination

- Runs replication between the source and destination VMs

- Powers off the source VM

- Renames the source VM

- Powers on the destination VM
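The bulk-migration sequence above can be modeled as a simple state transition. The sketch below is purely illustrative; the function, the field names, and the "-migrated" rename suffix are my own placeholders, not the HCX API or its actual naming scheme:

```python
def bulk_migrate(vm_name):
    """Illustrative model of the bulk-migration steps (not the HCX API)."""
    source = {"name": vm_name, "power": "on"}
    destination = {"name": vm_name, "power": "off"}  # 1. copy created at destination
    # 2. replication of source disks to the destination copy happens here (elided)
    source["power"] = "off"                          # 3. power off the source VM
    source["name"] = vm_name + "-migrated"           # 4. rename the source VM
    destination["power"] = "on"                      # 5. power on the destination VM
    return source, destination

src, dst = bulk_migrate("app01")
print(src)  # {'name': 'app01-migrated', 'power': 'off'}
print(dst)  # {'name': 'app01', 'power': 'on'}
```

The key point the model captures: the source VM is kept (powered off and renamed) rather than deleted, which is exactly what you see highlighted in the screenshot later in this post.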

 

Once all the options look good, click Next for validation and then Finish to start the migration.

 

In my lab I performed a vMotion of a VM from on-prem to the cloud and then bulk-migrated it back to on-prem.

 

vMotion completed and Bulk Migration replication in progress:

 

 

 

 

Bulk Migration and vMotion both completed successfully:

 

 

 

 

Highlighted is the renamed and powered-off VM on the source side after bulk migration:

 

 

Note: everything we have discussed applies only to a first-time deployment of HCX. If you need to connect a second datacenter to VMware Cloud on AWS, some extra steps are required. In the Network section of the SDDC, scroll down to "Public IPs" and request additional public IPs. Once those IPs are available, get in touch with VMware Cloud on AWS support to have them added to the fleet pool on the backend, and then proceed with the deployment from step 1.

 

Deploy and configure HCX for VMware Cloud on AWS - Part 1

 

It is amazing how simple it is to migrate VMs from on-prem to VMware Cloud on AWS and vice versa. In this two-part blog post we have successfully deployed and tested HCX migration, and I hope it helps others in setting up and understanding the product.


 

 

 

 

 

 

 

 

 
