
Now that we have covered the various layers of vCloud Director, let me shed some light on what Network Pools are and how they back Organization and vApp Networks:

Network Pools

We learned from the previous post that Organization and vApp Networks are backed by Network Pools. Now let me explain what these Network Pools are, what types are available, and what their functionality is.

Network Pools are collections of undifferentiated, isolated Layer 2 networks that are used to create Organization and vApp Networks on demand. They are available to both Providers and Consumers, but must be created beforehand by the Providers.

Currently, there are three types of network pools from which VMware vCloud Director creates Organization and vApp networks:

  1. vSphere Port group-backed
  2. VLAN-backed
  3. vCD Network Isolation-backed (vCD-NI)

All three types can be used within the same instance of VMware vCloud Director; however, their requirements and use cases differ. Now, let us dive into each of them.

vSphere Port group-backed:

With a vSphere Port group-backed pool, the Provider is responsible for pre-provisioning port groups on vNetwork Standard, vNetwork Distributed or Cisco Nexus 1000v Virtual Switches in vSphere; these can be created either manually or through orchestration. The Provider then maps the port groups to the Network Pool so that the Organizations can use it to create vApp and Organization Networks whenever needed. vSphere Port group-backed network pools provide network isolation using IEEE 802.1Q VLANs with the standard frame format.
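To make the pre-provisioning step concrete, here is a minimal pyVmomi (vSphere Python SDK) sketch that creates the same standard-vSwitch port group on every host. The vCenter address, credentials, port group name, VLAN ID and vSwitch name are illustrative assumptions only; adjust them for your environment:

```python
# Minimal sketch: pre-provision one standard-vSwitch port group on every host,
# so it can later be mapped into a vSphere Port group-backed Network Pool.
# All names, credentials and IDs below are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                 # lab only: skip cert checks
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    spec = vim.host.PortGroup.Specification()
    spec.name = "vCD-PGPool-100"                       # hypothetical port group name
    spec.vlanId = 100                                  # hypothetical VLAN ID
    spec.vswitchName = "vSwitch1"                      # existing standard vSwitch
    spec.policy = vim.host.NetworkPolicy()             # inherit vSwitch policies
    host.configManager.networkSystem.AddPortGroup(portgrp=spec)
    print(f"created {spec.name} on {host.name}")

view.Destroy()
Disconnect(si)
```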

To create this network pool, you pre-provision the port groups in vSphere, specify the vCenter Server on which the pre-provisioned port groups exist, add the port groups that this network pool will use, and give the pool a name.

This is the only Network Pool type that supports all three kinds of vSphere networking: vNetwork Standard, vNetwork Distributed and Cisco Nexus 1000v port groups. The vSphere Port group-backed Network Pool should be used in scenarios where Cisco Nexus 1000v Virtual Switches are required, or when Enterprise Plus licensing is not available and you are forced to use vNetwork Standard Switches.

Now, let us look at some of the pros and cons of this network pool:

  • Pros
    • It allows the use of existing features such as QoS (Quality of Service), ACLs (Access Control Lists) and security.
      • Note: QoS and ACLs apply to the Cisco Nexus 1000v (N1K).
    • It provides better control over visibility and monitoring of the port groups.
    • When Enterprise Plus licensing is not available and you would like to use vNetwork Standard Switches, this is the only option available.
    • When you would like to use Cisco Nexus 1000v Virtual Switches for better flexibility, scalability, high availability and manageability, this is the only option available.
    • No changes to the default network MTU of 1500 bytes are required.
  • Cons
    • All port groups must be created manually or through orchestration before they can be mapped to the network pool.
    • Scripting or host profiles should be used to make sure the port groups are created consistently on all hosts, especially when using vNetwork Standard Switches; otherwise, vApps may fail to be placed on hosts that are missing the port group (see the verification sketch after this list).
    • Though Cisco Nexus 1000v Virtual Switches offer many benefits and features, they come with a price tag.
    • Port group isolation relies on VLAN-based Layer 2 isolation.
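Building on the point about consistency above, here is a small pyVmomi sketch (again with assumed names and credentials) that checks whether every host actually carries the pre-provisioned port group before the pool is put to use:

```python
# Minimal consistency check: report any host that is missing the pre-provisioned
# standard-vSwitch port group. Names and credentials are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

REQUIRED = "vCD-PGPool-100"                            # hypothetical port group name

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    names = [pg.spec.name for pg in host.config.network.portgroup]
    print(f"{host.name}: {'ok' if REQUIRED in names else 'MISSING ' + REQUIRED}")

view.Destroy()
Disconnect(si)
```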

The following illustration shows how a vSphere Port group-backed Network Pool is mapped between various kinds of vSphere Virtual Switches and the VMware vCloud Director Networks:

VLAN-backed:

With a VLAN-backed pool, the Provider is responsible for configuring the physical network with a range of VLANs and trunking them to all ESX/ESXi hosts. The Provider then maps a vNetwork Distributed Switch, along with the range of VLANs, to the VLAN-backed Network Pool so that the Organizations can use it to create vApp and Organization Networks whenever needed.

Each time a vApp or Organization Network is created, the Network Pool creates a dvportgroup on the vNetwork Distributed Switch and assigns one of the VLANs from the specified range. Each time a vApp or Organization Network is destroyed, the VLAN ID is returned to the Network Pool so that it becomes available for others. Just like the vSphere Port group-backed network pool, the VLAN-backed network pool provides network isolation using IEEE 802.1Q VLANs with the standard frame format.
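To make this allocate-and-return behaviour concrete, the following is a toy Python model of that bookkeeping; it is not vCloud Director code, and the VLAN range and network names are made up:

```python
# Toy model (not vCD code) of how a VLAN-backed pool hands out a VLAN when a
# network is created and reclaims it when the network is destroyed.
class VlanBackedPool:
    def __init__(self, vlan_range):
        self._free = sorted(vlan_range)        # e.g. VLANs trunked to all hosts
        self._in_use = {}                      # network name -> VLAN ID

    def create_network(self, name):
        if not self._free:
            raise RuntimeError("network pool has run out of VLANs")
        vlan = self._free.pop(0)               # vCD also creates a dvportgroup here
        self._in_use[name] = vlan
        return vlan

    def destroy_network(self, name):
        vlan = self._in_use.pop(name)          # VLAN ID goes back to the pool
        self._free.append(vlan)
        self._free.sort()

pool = VlanBackedPool(range(200, 210))         # hypothetical VLAN range 200-209
print(pool.create_network("OrgNet-A"))         # -> 200
pool.destroy_network("OrgNet-A")               # 200 is available again
```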

To create this network pool, you provide a range of VLAN IDs, the vNetwork Distributed Switch whose uplink ports are trunked with that VLAN range, and a name for the pool.

The vNetwork Distributed Switch is the only vSphere networking supported by the VLAN-backed Network Pool at this time. The VLAN-backed Network Pool should be used in scenarios that require the most secure isolation, or to satisfy optional VPN/MPLS or other special Consumer requirements, provided this does not consume too many VLANs.

Now, let us look at some of the pros and cons of this network pool:

  • Pros
    • It provides the most securely isolated networks.
    • It provides better network performance, as there is no encapsulation overhead.
    • All port groups are created on the fly, so no manual intervention is required for Consumers to create vApp and Organization Networks, unless the network pool runs out of VLANs.
    • No changes to the default network MTU of 1500 bytes are required.
  • Cons
    • It requires VLANs to be configured and maintained on the physical switches, and the ports facing the ESX/ESXi hosts to be trunked.
    • It requires a wide range of VLANs, depending on the number of vApp and Organization Networks needed for the environment, and such a wide range of VLANs is often not available.
    • Port group isolation relies on VLAN-based Layer 2 isolation.

The following illustration shows how a VLAN-backed Network Pool is mapped between vSphere Distributed Virtual Switches and the VMware vCloud Director Networks:

vCD Network Isolation-backed:

With a vCD NI-backed pool, the Provider is responsible for mapping a vNetwork Distributed Switch, along with a range of vCD NI-backed network IDs, to the vCD NI-backed Network Pool so that the Organizations can use it to create vApp and Organization Networks whenever needed. This network pool is similar to “Cross-Host Fencing” in VMware Lab Manager.

A vCD NI-backed Network Pool adds 24 bytes of encapsulation to each Ethernet frame, bringing the frame size up to 1524 bytes; this is how each vCD NI-backed network is kept isolated. The encapsulation contains the source and destination MAC addresses of the ESX Servers where the VM endpoints reside, as well as the vCD NI-backed network ID. The receiving ESX Server strips the vCD-NI encapsulation to expose the original frame, with the VM source and destination MAC addresses, which is then delivered to the destination VM. When both the Guest Operating Systems and the underlying physical network infrastructure are configured with the standard MTU of 1500 bytes, the vCD NI-backed protocol will fragment frames, which results in a performance penalty. Hence, to avoid fragmented frames, it is recommended to increase the MTU by 24 bytes on the physical network infrastructure and on the vCD NI-backed Network Pool, while leaving the Guest Operating Systems that obtain networks from this pool untouched.
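The arithmetic behind the recommendation is simple; this illustrative snippet just spells it out:

```python
# Why the MTU must grow to 1524 bytes when vCD-NI is in play (illustrative only).
GUEST_MTU      = 1500          # guest OS frames stay at the standard MTU
VCDNI_OVERHEAD = 24            # MAC-in-MAC encapsulation added per frame

on_the_wire  = GUEST_MTU + VCDNI_OVERHEAD      # 1524 bytes after encapsulation
physical_mtu = 1500                            # unmodified physical network

if on_the_wire > physical_mtu:
    print(f"{on_the_wire}-byte frames exceed the {physical_mtu}-byte MTU: fragmentation")
# Raising the physical and Network Pool MTU to 1524 (guest MTU unchanged) avoids this.
```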

Each time a vApp or Organization Network is created, the Network Pool creates a dvportgroup on the vNetwork Distributed Switch and assigns one of the vCD isolated network IDs from the specified range. Each time a vApp or Organization Network is destroyed, the network ID is returned to the Network Pool so that it becomes available for others.

To create this network pool, you provide a range of vCD isolated network IDs, the VLAN ID that will serve as the transport VLAN carrying all the encapsulated traffic, the respective vNetwork Distributed Switch, and a name for the pool. After the network pool is created, change the Network Pool MTU value to 1524.
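The dvSwitch side of that MTU change can also be scripted. The following pyVmomi sketch assumes a hypothetical switch name and lab credentials, and only reconfigures the vNetwork Distributed Switch; the physical switches and the Network Pool MTU in vCloud Director still have to be updated separately:

```python
# Minimal sketch: raise the vNetwork Distributed Switch MTU to 1524 so vCD-NI
# encapsulated frames are not fragmented. Names and credentials are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvSwitch-vCD")   # hypothetical name

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion   # required by ReconfigureDvs_Task
spec.maxMtu = 1524                              # 1500 + 24-byte vCD-NI overhead
dvs.ReconfigureDvs_Task(spec)

view.Destroy()
Disconnect(si)
```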

The vNetwork Distributed Switch is the only vSphere networking supported by the vCD NI-backed Network Pool at this time. The vCD NI-backed Network Pool should be used in scenarios where routed networks are not required, when only a limited number of VLANs are available, or when the management of VLANs is problematic and highly secure isolation of vApp and Organization Networks is not critical.

Now, let us look at some of the pros and cons of this network pool:

  • Pros
    • It doesn’t require any VLANs for creating vApp and Organization Networks; you only have to specify the number of networks needed.
    • All port groups are created on the fly, so no manual intervention is required for Consumers to create vApp and Organization Networks, unless the network pool runs out of vCD NI-backed network IDs.
    • VLAN isolation is not required for Layer 2 isolation.
  • Cons
    • This is not as secure as using VLANs; hence the need for an isolated “transport” VLAN.
    • There is a small performance overhead due to the MAC-in-MAC encapsulation for the overlay network.
    • There is administrative overhead in increasing the MTU to 1524 across the entire physical network infrastructure.
    • It cannot be used for routed networks, as it only supports Layer 2 adjacency.

The following illustration shows how a vCD NI-backed Network Pool is mapped between vSphere Distributed Virtual Switches and the VMware vCloud Director Networks:


