Tuesday, 6 August 2013

vCD without vCNS


One of the most interesting solutions I have been working on recently is a vCD project where the customer has decided to use an alternative to vCNS (vCloud Networking and Security).  I cannot name the product they want to use as an alternative, but for the purposes of this post I think the principles apply to most third-party products.

The first thing I asked the customer was: why do you not want to use vCNS? The answer was quite simply that their security department doesn't see the vCNS firewall as secure. When asking for a bit more detail we were not provided with anything credible, in my view.  One of the main problems they communicated as an issue is that the appliances are deployed on demand and are all deployed with the same default password (this can be changed after the appliance is deployed).  Anyhow, the aim of this blog is not to detail my customer's problems; the upshot is that we were asked to use a virtual firewall that they already use.

Now the first thing I needed to communicate to the customer was that we HAVE to install vCNS Manager; this is a requirement of vCD, and all network-related instructions are sent to vCNS. This includes port group creation for vCD networks.  I also needed to point out to the customer that we would be using VXLAN, and vCNS plays a part in securing multitenancy within VXLAN.

So once we had cleared up all the above, and it was agreed that there was no way we could push the customer back to using vCNS, we looked at the use cases and the limitations that would be imposed.

First, let's break down what not using vCNS removes from vCD.
  1. It limits the use of Org vDC networks: we can't create a routed or an isolated Org vDC network with specified IP addressing ranges. When a network is created with an IP range, a vCNS appliance is attached to the network's corresponding dvPortGroup to provide DHCP.
  2. We can't create any vApp networks with routed connections.
  3. Network and network services configuration is no longer visible to vCD. This restricts what we can do for network service creation when using vCD.
  4. No vCO plugin for any third party software defined networking service. 
  5. No Chargeback monitoring for any network elements: bandwidth usage, IP addresses, and network services such as firewall rules and NAT rules.
So how did I design around this?

  1. The customer was given two options for managing IP addressing inside the vCD networks: a) use their third-party firewall appliance; b) use a vCO workflow to populate the IP addresses of the VMs in the vApp from their IPAM system.
  2. We had to place an extra VM in the vApp.  This extra VM is the firewall VM, and because vCD thinks this is just another VM, it is created in the user's Org vDC and consumes the resources provisioned to the user/Organization.  If this were vCNS, the appliances would be provisioned in the SYSTEM resource pool and would not impact the user's resource allocation.
  3. The IPAM system and the firewall VM have no integration with vCD, and this particular firewall vendor's API is not very good, so using vCO to talk to it is a challenge.  This gives the customer more systems to monitor and manage, as we can't pull any information from either of them.
  4. We could try to develop a vCO plugin; most of the plugins are developed by VMware, but the limitations of this firewall vendor's API present a big challenge, and my customer is not going to pay for plugin development.
  5. We can add a fixed cost for the VM in Chargeback Manager and charge a fixed amount each month.  We can monitor bandwidth, but not in the same way we can with a vCNS appliance.
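To illustrate option 1b in the list above, here is a rough Python sketch of the IP allocation logic such a vCO workflow would carry out. This is only my illustration of the general idea: the function name and parameters are my own invention, and a real workflow would query the IPAM system's API for the allocated addresses rather than take them as a parameter before pushing the results into the vApp's VMs.

```python
import ipaddress

def allocate_ips(vm_names, subnet, allocated):
    """Assign the next free addresses in `subnet` to each VM, skipping
    the network/broadcast addresses and anything already held in IPAM.
    Returns a VM-name -> IP-address plan for the workflow to apply."""
    net = ipaddress.ip_network(subnet)
    taken = {ipaddress.ip_address(a) for a in allocated}
    free = (ip for ip in net.hosts() if ip not in taken)
    plan = {}
    for vm in vm_names:
        try:
            plan[vm] = str(next(free))
        except StopIteration:
            raise RuntimeError("IPAM pool exhausted for %s" % subnet)
    return plan
```

For example, with `192.168.10.1` already allocated in a `/29`, two VMs would be planned onto `.2` and `.3`; the workflow would then set those as the guest IPs in vCD.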
This is becoming more and more common, and the above are just some small use cases for my customer.  Other customers may have more risks, and may also have more constraints and requirements around vCNS and/or a third-party firewall appliance.

With the release of NSX in the not so distant future I hope people start to see VMware network virtualization as a more credible technology.

So if you get asked this question I hope this post can give you some food for thought on the impact and the design considerations/constraints it will impose.


Monday, 5 August 2013

vCloud Director - Moving vApps Between Clusters In the Same PvDC



I had a request from a customer of mine to move their vApps and VMs between vSphere clusters in the same vCD Provider Virtual Datacenter.  The customer's first thought was to drop down into vCenter and just do a normal vMotion; however, they got the standard message detailing that this object is managed by vCD.


This should have made the customer think that maybe doing it this way was not the best idea, but they went ahead and migrated the VM anyway.  Doing so broke the relationship with vCD, and the VM became unmanageable in vCD.

Now, if this were a simple case of migrating a VM/vApp between clusters using the same storage, we could go to System > Manage & Monitor > Resource Pools, select the VM in the source cluster, and then select "Migrate" to move the VM to the destination cluster.



Now, the customer I was working with has been using VBLOCKs in their datacenter to build their vCloud environment.  This is brilliant, as the VBLOCK is a very impressive bit of kit, and after working with them for a number of months I am very happy to recommend them to customers.  But using a VBLOCK did present us with a challenge.

The customer was using a single Provider Virtual Datacenter in vCD.  This was backed by several clusters, each corresponding to a single VBLOCK.  The clusters were 24-32 hosts each.  Each VBLOCK has its own VNX SAN, and its storage is presented only to the hosts in the same VBLOCK.  There is no storage shared between the VBLOCKs, and thus none between the clusters.

So if we have no shared storage between the clusters backing the Provider Virtual Datacenter, how do we move VMs between the clusters?  The answer is Storage vMotion!  Hang on, Storage vMotion is not a selectable option in vCD.

After thinking a little more, I decided we could change the VM's storage profile.  This should instruct vSphere to conduct a combined "Change Host and Change Datastore" migration.

In the example below I have my vApp built on a cluster called "Site1".  This cluster has a corresponding storage profile, and the storage for this profile is only mounted on the hosts in the cluster "Site1".

Now, the vApp and the VM must be completely shut down.  If the vApp is showing a status of "Partially Running" then this operation will fail.  The same applies if the VM is not shut down; you will see an error message of "Invalid Parameter".

Select a storage profile that is linked to another cluster in the Provider Virtual Datacenter. In my lab this is "Site2"; then select OK.

Now the VM will show as "Busy" while the Storage vMotion is conducted in the background.  If you drop into the vSphere Client you will see more information: the vSphere Client will show the VM as being relocated.
 
Once the Storage vMotion has completed, the VM will show as normal in vCD, and examining the vSphere Client will show that the VM has been moved to another cluster.

This resolved my customer's problem, and we integrated it into a rather cool vCO workflow that could be kicked off from the customer's Cisco-based cloud portal.  I may detail the workflow in another post, but it basically looked at the VMs inside a vApp and then changed their storage profile to relocate them based on the input from the user.
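For illustration, the decision logic of such a workflow (check the vApp is fully shut down, then compute the storage-profile changes to submit) might look something like the Python sketch below. The function name, the dict shape describing each VM, and the status strings are my own assumptions for the example, not the vCD API or the actual vCO code.

```python
def plan_vapp_relocation(vapp_vms, dest_profile):
    """Given dicts of {"name", "status", "storage_profile"} describing the
    VMs in a vApp, verify everything is powered off (a running or partially
    running vApp fails with "Invalid Parameter") and return the list of
    storage-profile changes to submit, one per VM that needs moving."""
    running = [vm["name"] for vm in vapp_vms if vm["status"] != "POWERED_OFF"]
    if running:
        raise RuntimeError(
            "vApp not fully shut down, vCD would return 'Invalid Parameter': "
            + ", ".join(running))
    return [
        {"vm": vm["name"], "from": vm["storage_profile"], "to": dest_profile}
        for vm in vapp_vms
        if vm["storage_profile"] != dest_profile
    ]
```

The real workflow would then apply each planned change through the vCD API, which triggers the combined host-and-datastore migration described above; VMs already on the destination profile are skipped.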

My customer then requested that this movement between clusters be conducted with no downtime.  The ONLY way this is possible is to use a SWING datastore: a datastore that is presented to all the hosts in the Provider.  This breaks a number of VCE design constraints and would require a datastore equal in size to the largest VM, which in my customer's case is 8TB.  So this 8TB would be sitting there doing nothing most of the time, as a VCE design constraint is not to share storage between VBLOCKs, let alone run workloads on it.

I am working to resolve this at the moment, but I am not sure I will find a resolution.  We are constrained by the CBT technology used in Storage vMotion and the design constraints of VCE.