
Archive for the ‘Cloud’ Category

Now that we understand the various layers of vCloud Director, let me shed some light on what Network Pools are and how they back Organization and vApp Networks:

Network Pools

We learned in the previous post that Organization and vApp Networks are backed by Network Pools. Now let me explain what these Network Pools are, what types are available, and what their functionality is.

Network Pools are collections of undifferentiated, isolated Layer 2 networks that can be used to create Organization and vApp Networks on demand. They are available to both Providers and Consumers, but must be created beforehand by the Providers.

Currently, VMware vCloud Director offers three types of network pools from which Organization and vApp networks can be created:

  1. vSphere Port group-backed
  2. VLAN-backed
  3. vCD Network Isolation-backed (vCD-NI)

All three types can be used within the same instance of VMware vCloud Director; however, their requirements and use cases differ. Now, let us dive into each of them.

vSphere Port group-backed:

With vSphere Port group-backed pools, the Provider is responsible for pre-provisioning port groups coming off of vNetwork Standard, vNetwork Distributed or Cisco Nexus 1000v Virtual Switches in vSphere; these can be created either manually or through orchestration. The Provider then maps the port groups to the Network Pool so that Organizations can use it to create vApp and Organization Networks whenever needed. vSphere Port group-backed network pools provide network isolation using IEEE 802.1Q VLANs with the standard frame format.

To create this network pool, you will have to pre-provision the port groups in vSphere, specify the vCenter Server on which the pre-provisioned port groups exist, add the port groups that the pool will use, and give the pool a name.
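
Pre-provisioning those port groups consistently on every host is usually scripted (see the cons below). As a hedged illustration only, here is a small pyVmomi sketch that creates the same port group on the standard vSwitch of every host; the vCenter address, credentials, port group name, VLAN ID and vSwitch name are placeholder assumptions, not values from this post:

    # Hedged sketch (placeholder names throughout): create the same port group,
    # with the same VLAN ID, on the standard vSwitch of every host so that the
    # port group-backed pool sees a consistent set of port groups.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE              # lab convenience only; verify certificates in production

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    # Walk every ESX/ESXi host in the inventory and add the port group.
    hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        spec = vim.host.PortGroup.Specification()
        spec.name = "vCD-PGpool-101"             # port group that will back the network pool
        spec.vlanId = 101                        # 802.1Q VLAN providing the Layer 2 isolation
        spec.vswitchName = "vSwitch0"            # existing vNetwork Standard Switch on each host
        spec.policy = vim.host.NetworkPolicy()
        host.configManager.networkSystem.AddPortGroup(portgrp=spec)

    Disconnect(si)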

This is the only Network Pool type that supports all three kinds of vSphere networking: vNetwork Standard, vNetwork Distributed and Cisco Nexus 1000v port groups. The vSphere Port group-backed Network Pool should be used when there is a requirement for Cisco Nexus 1000v Virtual Switches, or when Enterprise Plus licensing is not available and you are forced to use vNetwork Standard Switches.

Now, let us look at some of the pros and cons of this network pool:

  • Pros
    • It allows the use of existing features such as QoS (Quality of Service), ACLs (Access Control Lists) and security features.
      • Note: QoS and ACLs apply to the Cisco Nexus 1000v (N1K).
    • You have better control over the visibility and monitoring of the port groups.
    • When Enterprise Plus licensing is not available and you must use vNetwork Standard Switches, this is the only option.
    • When you want to use Cisco Nexus 1000v Virtual Switches for better flexibility, scalability, high availability and manageability, this is the only option.
    • No changes to the network MTU size, which is 1500 bytes by default, are required.
  • Cons
    • All port groups must be created manually or through orchestration before they can be mapped to the network pool.
    • Scripting or host profiles should be used to ensure the port groups are created consistently on all hosts, especially when using vNetwork Standard Switches; otherwise, vApps may fail to deploy on some hosts.
    • Though Cisco Nexus 1000v Virtual Switches offer many benefits and features, they come with a price tag.
    • Port group isolation relies on VLAN-based Layer 2 isolation.

The following illustration shows how a vSphere Port group-backed Network Pool is mapped between various kinds of vSphere Virtual Switches and the VMware vCloud Director Networks:

VLAN-backed:

With VLAN-backed pools, the Provider is responsible for configuring a range of VLANs on the physical network and trunking them to all the ESX/ESXi hosts. The Provider then maps the vNetwork Distributed Switch, along with the range of VLANs, to the VLAN-backed Network Pool so that Organizations can use it to create vApp and Organization Networks whenever needed.

Each time a vApp or Organization Network is created, the Network Pool creates a dvportgroup on the vNetwork Distributed Switch and assigns one of the VLANs from the specified range. Each time a vApp or Organization Network is destroyed, the VLAN ID is returned to the Network Pool so it becomes available to others. Just like the vSphere Port group-backed network pool, the VLAN-backed network pool provides network isolation using IEEE 802.1Q VLANs with the standard frame format.

To create this network pool, you will have to provide a range of VLAN IDs, the vNetwork Distributed Switch whose uplink ports are trunked with that VLAN range, and a name for the pool.
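
To make the allocate-on-create, return-on-destroy behaviour concrete, here is a minimal Python sketch of a VLAN-backed pool. It is an illustrative model only, not vCloud Director code, and the switch name and VLAN range are made up:

    # Illustrative model only, not vCloud Director code: a VLAN-backed pool hands
    # out a VLAN ID (and a dvportgroup) when a network is created and reclaims
    # the ID when the network is destroyed.
    class VlanBackedPool:
        def __init__(self, dvswitch, vlan_range):
            self.dvswitch = dvswitch
            self.available = set(vlan_range)     # VLANs trunked to the hosts by the Provider
            self.allocated = {}                  # network name -> VLAN ID

        def create_network(self, network_name):
            if not self.available:
                raise RuntimeError("Network pool is out of VLAN IDs")
            vlan = self.available.pop()
            self.allocated[network_name] = vlan
            # vCD would create a dvportgroup on the dvSwitch tagged with this VLAN.
            return {"dvportgroup": f"{self.dvswitch}-vlan{vlan}", "vlan": vlan}

        def destroy_network(self, network_name):
            # The VLAN ID goes back to the pool so it is available to others.
            self.available.add(self.allocated.pop(network_name))

    pool = VlanBackedPool("dvSwitch01", range(200, 300))   # hypothetical switch and VLAN range
    net = pool.create_network("vAppNet-Test")              # allocates a VLAN and a dvportgroup
    pool.destroy_network("vAppNet-Test")                   # VLAN returns to the pool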

The vNetwork Distributed Switch is the only vSphere networking supported by the VLAN-backed Network Pool at this time. The VLAN-backed Network Pool should be used where the most secure isolation is required, or for special Consumer requirements such as optional VPN/MPLS termination, provided this does not consume too many VLANs.

Now, let us look at some of the pros and cons of this network pool:

  • Pros
    • It provides the most securely isolated networks.
    • It provides better network performance, as there is no encapsulation overhead involved.
    • All port groups are created on the fly, so no manual intervention is required for Consumers to create vApp and Organization Networks, unless the network pool runs out of VLANs.
    • No changes to the network MTU size, which is 1500 bytes by default, are required.
  • Cons
    • It requires VLANs to be configured and maintained on the physical switches and trunked to the ESX/ESXi host uplink ports.
    • It requires a wide range of VLANs, depending on the number of vApp and Organization Networks needed in the environment, and such a wide range is often simply not available.
    • Port group isolation relies on VLAN-based Layer 2 isolation.

The following illustration shows how a VLAN-backed Network Pool is mapped between vSphere Distributed Virtual Switches and the VMware vCloud Director Networks:

vCD Network Isolation-backed:

With vCD NI-backed pools, the Provider is responsible for mapping the vNetwork Distributed Switch, along with a range of vCD NI-backed Network IDs, to the vCD NI-backed Network Pool so that Organizations can use it to create vApp and Organization Networks whenever needed. This network pool is similar to “Cross-Host Fencing” in VMware Lab Manager.

A vCD NI-backed Network Pool adds a 24-byte encapsulation to each Ethernet frame, bringing the frame size up to 1524 bytes; this is how each vCD NI-backed network is isolated. The encapsulation contains the source and destination MAC addresses of the ESX servers where the VM endpoints reside, as well as the vCD NI-backed Network ID. The receiving ESX server strips the vCD NI encapsulation to expose the original frame, addressed with the VM source and destination MAC addresses, and delivers it to the destination VM. When both the Guest Operating Systems and the underlying physical network infrastructure are configured with the standard MTU of 1500 bytes, the vCD NI-backed protocol fragments frames, which results in a performance penalty. Hence, to avoid fragmentation, it is recommended to increase the MTU by 24 bytes on the physical network infrastructure and on the vCD NI-backed Network Pool, while leaving the Guest Operating Systems that use networks from this pool untouched.
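
The MTU arithmetic is simple but worth spelling out; here is a tiny sketch with illustrative constants:

    # Why the physical network needs MTU 1524 for vCD-NI while guests stay at 1500.
    GUEST_MTU = 1500            # standard MTU inside the Guest OS (left unchanged)
    VCDNI_OVERHEAD = 24         # MAC-in-MAC encapsulation added by vCD-NI

    required_physical_mtu = GUEST_MTU + VCDNI_OVERHEAD
    print(required_physical_mtu)                    # 1524

    # If the physical network is left at 1500, a full-sized guest frame no longer
    # fits once encapsulated, so it is fragmented and performance suffers.
    physical_mtu = 1500
    print(physical_mtu < required_physical_mtu)     # True -> fragmentation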

Each time a vApp or Organization Network is created, the Network Pool creates a dvportgroup on the vNetwork Distributed Switch and assigns one of the vCD isolated network IDs from the specified range. Each time a vApp or Organization Network is destroyed, the Network ID is returned to the Network Pool so it becomes available to others.

To create this network pool, you will have to provide a range of vCD isolated network IDs, the VLAN ID that will serve as the transport VLAN carrying all the encapsulated traffic, the vNetwork Distributed Switch to use, and a name for the pool. After the pool is created, change the Network Pool MTU value to 1524.

The vNetwork Distributed Switch is the only vSphere networking supported by the vCD NI-backed Network Pool at this time. The vCD NI-backed Network Pool should be used where routed networks are not required, when only a limited number of VLANs are available, or when managing VLANs is problematic and highly secure isolation of vApp and Organization Networks is not critical.

Now, let us look at some of the pros and cons of this network pool:

  • Pros
    • It doesn’t require any VLANs for creating vApp and Organization Networks; you only have to specify the number of networks needed.
    • All port groups are created on the fly, so no manual intervention is required for Consumers to create vApp and Organization Networks, unless the network pool runs out of vCD NI-backed Network IDs.
    • VLANs are not required for Layer 2 isolation.
  • Cons
    • It is not as secure as using VLANs, hence the need for an isolated transport VLAN.
    • There is a small performance overhead due to the MAC-in-MAC encapsulation for the overlay network.
    • There is the administrative overhead of increasing the MTU to 1524 across the entire physical network infrastructure.
    • It cannot be used for routed networks, as it only supports Layer 2 adjacency.

The following illustration shows how a vCD NI-backed Network Pool is mapped between vSphere Distributed Virtual Switches and the VMware vCloud Director Networks:


Networking is one of the most complicated topics in VMware vCloud Director, and it is critical to understand its ins and outs, as it touches every Virtual Machine, vApp and Organization in your deployment. In this chapter I will introduce the various layers of VMware vCloud Director networking, their abstraction from the vSphere layer, their functionality, how they interact with each other, and various use cases that can be applied.

First, I would like to explain how vSphere networking is designed around VMware vNetwork Standard Switches, VMware vNetwork Distributed Switches, Cisco Nexus 1000v Virtual Switches and vmnics. All of these vSphere networking resources are abstractions of hardware resources such as physical switches and the network interface cards on vSphere hosts.

VMware vCloud Director is an abstraction of the vSphere layer, and the same applies to networking: the vCloud layer abstracts its networking resources from the vSwitches/port groups and/or dvSwitches/dvPort groups of the vSphere layer.

Here is an illustration of how the various networking abstractions are done:

vCloud Network Layers

The three layers of networking available in VMware vCloud Director are:

  1. External Networks
  2. Organization Networks
  3. vApp Networks

Cloud is all about providing and consuming: Providers, such as cloud computing service providers or enterprises, sell resources to Consumers, such as IT organizations or internal divisions of an enterprise (for instance, a Finance department).

Similarly, in vCloud networking, External and Organization Networks are created and managed by Providers, whereas Consumers use those resources through vApp Networks, which they can create either manually or automatically.

Now, let me explain each of the layers and their functionalities:

External Networks:

External Networks, also known as “Provided Networks”, are always created by the Providers and provide external connectivity to VMware vCloud Director; in other words, they are the doors from the vCloud to the outside world. Typically they are created by mapping a port group or dvPort group coming off of a vNetwork Standard, vNetwork Distributed or Cisco Nexus 1000V Virtual Switch at the VMware vSphere layer.

Here are some of the typical use cases for External Networks:

  • Internet Access
  • Provider supplied network endpoints such as:
    • IP based storage
    • Online or offline backup services
    • Backhauled networking services for consumers such as:
      • VPN access to a private cloud
      • MPLS termination

The following illustration shows how an External Network can be used as a gateway to VMware vCloud Director for providing various services mentioned above:

When providing External Networks such as Internet access, Providers typically supply public IP addresses to the Consumers for both inbound and outbound access. While it is possible to create one large External Network and share it among all Consumers, it is quite challenging to create and maintain the public IP addresses in one big range. Hence, it is recommended to create multiple External Networks, at least one per Organization, so the public IP address range can be kept separate for each Consumer and maintained easily while keeping multi-tenancy intact.
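
As an illustration of carving out per-Organization public ranges, here is a short Python sketch using the standard ipaddress module; the 203.0.113.0/24 block and the Organization names are placeholders, not values from this post:

    # Carve one public block into per-Organization External Network ranges
    # instead of managing one large shared range. Addresses are placeholders.
    import ipaddress

    public_block = ipaddress.ip_network("203.0.113.0/24")      # documentation-only range
    organizations = ["Org-Finance", "Org-HR", "Org-Engineering"]

    # One /27 (30 usable addresses) per Organization's External Network.
    for org, subnet in zip(organizations, public_block.subnets(new_prefix=27)):
        hosts = list(subnet.hosts())
        print(f"{org}: {subnet} ({hosts[0]} - {hosts[-1]})")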

Organization Networks:

Organization Networks are also created by the Providers and are contained within Organizations, which are the logical constructs representing Consumers. Their main purpose is to let multiple vApps communicate with each other and to provide connectivity from the vApps to the outside world by connecting to External Networks. In other words, Organization Networks bridge the vApps and the External Networks.

Organization Networks are provisioned from a set of pre-configured network resources called Network Pools, which typically map to a port group or dvPort group coming off of a vNetwork Standard, vNetwork Distributed or Cisco Nexus 1000V Virtual Switch at the VMware vSphere layer. I will cover Network Pools in my next post.

The Organization Networks can be connected to the External Networks in three different ways:

  • Public or Direct Connectivity: An Organization Network is bridged directly to an External Network, where the deployed vApps are directly connected to the External Network.
  • Private or External NAT/Routed Connectivity: An Organization Network is NAT/routed to an External Network, where the deployed vApps are connected to the External Network via a vShield Edge that provides firewall and/or NAT functionality for security.
  • Private, Isolated or Internal Connectivity: This is very similar to the Private or External NAT/Routed connectivity, except that the Organization Network is not connected to any External Network and is completely isolated within the Organization.
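
Put differently, the three options come down to two questions: is the Organization Network attached to an External Network at all, and does a vShield Edge sit in between? A minimal illustrative model (not vCD code, with made-up names):

    # Illustrative model only: the three options differ in whether an External
    # Network is attached and whether a vShield Edge sits in between.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class OrgNetwork:
        name: str
        external_network: Optional[str]   # None means no External Network attached
        routed_through_edge: bool         # True means a vShield Edge provides NAT/firewall

    networks = [
        OrgNetwork("Org-Direct", external_network="ExtNet-Internet", routed_through_edge=False),
        OrgNetwork("Org-Routed", external_network="ExtNet-Internet", routed_through_edge=True),
        OrgNetwork("Org-Isolated", external_network=None, routed_through_edge=False),
    ]

    for net in networks:
        print(net.name,
              "reaches the outside world:", net.external_network is not None,
              "behind a vShield Edge:", net.routed_through_edge)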

Now, here are some of the typical use cases for the Organization Networks:

  • Consumers that need access to their backhauled networking services via a trusted External Network can be directly connected to the External Network.
  • Consumers that need access to the Internet via a non-trusted External Network can be NAT/routed to the External Network.
  • Consumers that do not need any access to the public networks can use a Private or Isolated or Internal connected Organization Network that is contained within itself.

The following illustration shows how an Organization Network will act as a bridge between vApps and External Networks:

vApp Networks:

vApp Networks are created by the Consumers and are contained within vApps, where a vApp is a logical entity comprising one or more virtual machines. The main purpose of vApp Networks is to allow multiple virtual machines in a vApp to communicate with each other.

vApp Networks are also provisioned from a set of pre-configured network resources called Network Pools, which typically map to a port group or dvPort group coming off of a vNetwork Standard, vNetwork Distributed or Cisco Nexus 1000V Virtual Switch at the VMware vSphere layer. I will cover Network Pools in my next post.

The vApp Networks can be connected to the Organization Networks in three different ways:

  • Direct Connectivity: A vApp Network is bridged directly to an Organization Network, where the deployed VMs are directly connected to the Organization Network.
  • Fenced Connectivity: A vApp Network is NAT/routed to an Organization Network, where the deployed VMs are connected to the Organization Network via a vShield Edge that provides firewall and/or NAT functionality for security.
  • Isolated Connectivity: A vApp Network is completely isolated from the other vApps and from the Organization Network. This is similar to an isolated Organization Network, except that the isolation is scoped to the VMs within the vApp.
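
Fencing is essentially NAT at the vApp boundary: because the vShield Edge translates the vApp's internal addresses onto the Organization Network, fenced copies of a vApp can even reuse the same internal IP addresses. A small illustrative sketch with placeholder addresses:

    # Illustrative sketch with placeholder addresses: each fenced vApp keeps its
    # own internal addressing, and the vShield Edge NATs it onto the
    # Organization Network.
    fenced_vapps = {
        "vApp-A": {"internal_ip": "192.168.1.10", "org_ip": "10.10.0.21"},
        "vApp-B": {"internal_ip": "192.168.1.10", "org_ip": "10.10.0.22"},  # same internal IP, different NAT
    }

    def resolve_from_org_network(target_org_ip):
        """Return which fenced vApp (and internal IP) an Organization Network address maps to."""
        for name, vapp in fenced_vapps.items():
            if vapp["org_ip"] == target_org_ip:
                return name, vapp["internal_ip"]
        return None

    print(resolve_from_org_network("10.10.0.22"))   # ('vApp-B', '192.168.1.10')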

Now, here are some of the typical use cases for the vApp Networks:

  • Consumers that need to communicate with the VMs in other vApps within the same Organization, with the same security requirements, can be directly connected to the Organization Network.
  • Consumers that need to communicate with the VMs in other vApps within the same Organization, but with different security requirements, can be NAT/routed to the Organization Network. For instance, Production vApps and DMZ vApps within the same Organization may need to communicate with each other, but through a firewall.
  • Consumers that do not need to communicate with the VMs in other vApps can be isolated from the Organization Network.

The following illustration shows how a vApp Network can be either isolated or connected to the Organization Network:


Before we get into VMware vCloud Director, let us quickly see what cloud computing is. So, what is cloud computing? That’s the new buzzword in the market, right? Well, does it really mean anything to us? Some say it is just a new way of describing what we have been offering as a virtualization service; is that right? When somebody asks me that question I just put on a face and say “You talkin’ to me?” No, I am just kidding. Here is how I see it:

Let us break cloud computing into two words, define them, combine them and then see what they mean together. We have always used “cloud” to mean some kind of network that is out there, such as the Internet, a WAN or a VPN. And we all know that computing is some kind of hardware and/or software used to access, process or manage information. If we combine the two, we are really talking about accessing, processing and managing information over some kind of private or public network. There are several other terms in the market that revolve around cloud computing. For instance, when cloud computing is used to deliver application services, it is called “Software as a Service (SaaS)”; when it is used to deliver platform services, it is called “Platform as a Service (PaaS)”; and when it is used to deliver infrastructure services, it is called “Infrastructure as a Service (IaaS)”.

VMware vCloud Director is an “Infrastructure as a Service” solution that pools the VMware vSphere resources in your existing datacenter and delivers them on a catalog basis, without the end users needing to know the complexities of the infrastructure behind it. It is elastic in nature, supports consumption-based pricing and can be accessed over the Internet using standard protocols. Think of it as a layer sitting above the VMware vSphere layer that transparently provides resources to the end users, as shown below:

As you can see from the bottom up, VMware vSphere components have historically abstracted physical resources into virtual resources, and now VMware vCloud Director components abstract the VMware vSphere virtual resources into “pure virtual” resources. By pure virtual resources, I am referring specifically to virtual computing, networking and storage resources. With that said, let us see the various VMware vCloud Director components that make this happen:

  1. VMware vCloud Director Cells: These are instances of the VMware vCloud Director software installed on Red Hat Enterprise Linux. They are stateless peers that share a single VMware vCloud Director database and can scale horizontally; multiple cells provide redundancy and load balancing when used with an external load balancer. Every cell has several roles, such as UI, vCloud API, Console Proxy, VMware Remote Console (VMRC), image transfer and so on. Console Proxy and VMRC are the critical, configurable components: the Console Proxy provides self-service VMware vCloud Director portal access to administrators and end users, while VMRC provides Virtual Machine Remote Console access to both administrators and end users.
  2. VMware vCloud Director Database: This is an Oracle database that stores all the VMware vCloud Director information. Care should be taken to design the database with redundancy and high availability. Currently, Oracle 10g Enterprise Server and above is the only database type that is supported.
  3. VMware vShield Manager and Edge Components: VMware vShield Manager is used to manage all the vShield service VMs such as vShield Edges that will be created on the fly whenever fencing, NATing and other services are used within the VMware vCloud Director environment.
  4. VMware vCenter Chargeback: vCenter Chargeback provides software metering for the VMware vCloud Director environment, which can be used to bill the end users. It runs on an Apache Tomcat server instance and provides built-in load balancing when multiple vCenter Chargeback servers are used. Chargeback also has data collectors, including one for vCloud Director and one for the vShield components, that are responsible for collecting information specific to the multi-tenant VMware vCloud Director environment.
  5. And of course, the VMware vSphere components: VMware vCloud Director sits on top of the VMware vSphere layer and works with vCenter Server and ESX/ESXi hosts to provide private and public computing resources.
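
Returning to the cells in item 1: because they are stateless peers sharing one database, any cell can answer any request. Here is a purely illustrative sketch of a round-robin front end over hypothetical cell endpoints:

    # Purely illustrative: stateless vCloud Director cells behind a simple
    # round-robin front end. Any cell can serve a portal, API or console-proxy
    # request because all state lives in the shared database. URLs are made up.
    from itertools import cycle

    cells = cycle([
        "https://vcd-cell-01.example.com",
        "https://vcd-cell-02.example.com",
        "https://vcd-cell-03.example.com",
    ])

    def route_request(request_id: int) -> str:
        """Hand the next incoming request to the next cell in rotation."""
        return next(cells)

    for request_id in range(5):
        print(request_id, route_request(request_id))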


