VMWare vSphere Distributed Switch – Cisco Nexus 1000V

Last time we talked about one type of vDS, the VMWare vDS. We saw why we would use a vDS instead of a vSS, and how to install a vDS and migrate our VMs. This time we will talk about another version of vDS – the Cisco Nexus 1000V.

Why would we need a Nexus? It is, by all means, more complicated to install and set up, and VMWare’s vDS mostly does the job. There could be several reasons, and I will name only one: role separation. Without role separation, network admins and virtual infrastructure admins collide, and this collision occurs within a vSS or (VMWare) vDS. Who is responsible for configuring and managing switching? If you ask the network admins, they will say – we are, this is, after all, networking. On the other hand, the VI admins will also say – we are, we don’t like some “cable guys” doing stuff within our VI. There we have a conflict. What we achieve with the Nexus is this separation of responsibility: all the networking is done by network admins, and all the VI guys need to do is “plug” a VM into the assigned port. Good or bad? Depends on whom you ask 🙂

So, what is this Nexus? As I said, it is a vDS for the VMWare hypervisor – an interface between the virtual and the physical world. It has two major components:

  • VSM (Virtual Supervisor Module) – this acts as a traditional switching supervisor. This component is installed inside the virtual infrastructure, at the datacenter level, in the form of a virtual machine. There can be one or two of these, depending on whether we want high availability or not. Is that even a question? 🙂
  • VEM (Virtual Ethernet Module) – this is like a line card in a physical switch. VEMs are installed as a plugin or agent on all ESXi hosts we want to participate in this vDS. The VEMs carry out what the network admins configure on the VSM.

This illustration from Cisco can help us understand the concept.

image

As we can see, we have agents (VEMs) on ESXi hosts that act as line cards and, together with supervisor (VSM), make a virtual switch that serves a specific datacenter.

We will continue from where we left off in the previous blog describing the VMWare vDS. I would break the installation into the following steps:

    1. Prerequisites fulfillment
    2. Installing VSM
    3. Setting up VSM(s)
    4. Adding VEMs
    5. Migrating VMs

Step A: Prerequisites fulfillment

Let’s assume we have the hardware and software infrastructure in place, valid licenses, and the Nexus 1000V package downloaded from Cisco’s site. Along with this, we need to create two or three VLANs/port groups in our VI world, as well as in the physical one. The first VLAN is called the “Control VLAN” (although the name within the VI can be anything) and is used as a channel between the VSMs, and between the VSMs and the VEMs. The second VLAN is called the “Packet VLAN” and is used, for example, for CDP packets. To be honest, I’m not sure why this CDP (and some other protocols) communication is put in a separate VLAN. If we install the Nexus in L2 mode, which is the case in this blog, there must be Layer 2 connectivity between the ESXi hosts that will participate in this vDS, so that the communication just described can happen. In addition to these VLANs, a third must exist for management purposes. This one is called the “Management VLAN”, and we already have it in place because we already manage our existing VI.

So, let’s create these port groups in our vSS or vDS. Because we have the vDS from the previous blog, we will create the Control and Packet VLANs there. The procedure is also described in the previous blog. The Management VLAN we already have. First, the Control VLAN:

SNAGHTML96427afe

Then a Packet VLAN:

SNAGHTML9643b031

Now we have our three VLANs inside vDS:

image

Red-squared are Control and Packet VLANs, and green-squared is our Management VLAN we had earlier.

We need to make sure that these VLANs are allowed across the trunk links on the physical Cisco switch:

!
interface GigabitEthernet1/0/5
switchport trunk native vlan 99
switchport trunk allowed vlan 10,12,112,141-143
switchport mode trunk
spanning-tree portfast trunk
end

This config should be similar on all trunk links towards Cisco switches.
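The VLANs themselves must of course exist on the physical switches, too. A minimal sketch, assuming the Control VLAN ID 141 and Packet VLAN ID 143 that we use later in this post (the VLAN names are my own and purely cosmetic):

!
vlan 141
 name N1KV-CONTROL
!
vlan 143
 name N1KV-PACKET
!

These are standard commands on Catalyst/Nexus gear; repeat them on every physical switch that carries these trunks.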

Step B: Installing VSM

This step is a huge one and can be broken into smaller steps:

      1. Installing VSM
      2. Registering Nexus 1000V vCenter plugin
      3. Connecting Nexus to vCenter
      4. Creating port profiles
      5. Adding VEMs
      6. Setting up HA

Lots to do…

Step Bi: Installing VSM

As far as I can tell, there are three ways of doing this: manual install, deploying an OVF/OVA template and using Java helper application.

A manual install requires creating a VM with the required parameters, such as CPU, memory, disk and other resources, with the needed reservations. Then we mount an ISO image and start the process. This is what we won’t do.

Using the Java helper application sounds good, but I never managed to complete an installation this way. It kept failing when choosing the datastore on which the VSM should be installed; Java kept returning some “null” error. I guess I could expect more errors in the following steps, so we will not use this method either.

Deploying an OVF/OVA template seems OK. We click “File->Deploy OVF Template…” and go through the wizard.

First we select an OVA file:

SNAGHTML96564c03

Then we can review what the VSM virtual machine is going to look like:

SNAGHTML9657d2bf

The step in which we accept the End User License Agreement is not shown here. After that step, we choose a virtual machine name. This is not the name of the switch itself. We will name this one “Nexus1000V-PRI” because we are going to have another one, called “Nexus1000V-SEC”, as the HA pair member. Because the Nexus will attach itself at the datacenter level, we get the option to select a datacenter. In our case we actually don’t have a choice, because we have only one datacenter:

SNAGHTML965cbb39

Out of the three options that follow, we are going to use “Manually Configure Nexus 1000V”. This way we get to understand the process best:

SNAGHTML965ecb6f

In the next step we choose a datastore. We could use a local or a SAN-based store. Because we will have an additional VSM in an HA pair, I guess that local storage will do:

SNAGHTML96613661

On the next screen, we should leave the options as presented. The VSM has to be on a thick-provisioned disk:

SNAGHTML9662d56d

Now we need to select the previously created VLANs. It is recommended to separate these three functions – Control, Packet and Management:

SNAGHTML96658126

We can skip all of the options presented on the next screen, because we are doing a manual installation. If we had chosen the “Nexus 1000V Installer” option on the “Deployment Configuration” screen, we would fill these in now:

SNAGHTML966a1b72

On the “Properties” screen we can select “Power on after deployment” and click “Finish”. This begins the process of deploying the OVF package. The process is rather quick:

SNAGHTML966d27d1

As we can see, we now have the first VSM deployed and ready to be powered on and configured:

SNAGHTML9685e46c

Let’s power it on and continue with our setup. This is what the initial settings look like:

SNAGHTMLa0770577

Two things are important here:

  • The HA role, in this case primary, because this is the first VSM
  • The domain ID, which differentiates multiple Nexus installations, if we have more than one.

There are additional steps in configuring this VSM:

SNAGHTML9691548c

Please note the switch name. This is the name that will be listed inside vCenter. The management interface’s IP address should be routable across the enterprise, so that network admins can reach this virtual switch via telnet/SSH. At some point in this console installation we get the opportunity to set up the SVS domain parameters, such as the L2/L3 deployment mode and the Control and Packet VLANs. We select L2 mode with the appropriate VLANs. Finally, a summary screen notifies us about the initial configuration:

SNAGHTML96991848

At this time we should be able to do a test SSH connection to our Nexus switch and proceed with the next step.
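Once logged in, a quick sanity check of what the setup dialog configured never hurts. Just a sketch – the exact output varies by version, but it should echo back the domain ID, the Control/Packet VLANs and the L2 mode we selected:

Nexus1KV-SPOP# show svs domain

If anything looks wrong here, it can still be corrected later from “svs-domain” configuration mode.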

Step Bii: Registering Nexus 1000V vCenter plugin

During the installation of the VSM we should have enabled the HTTP server. Why? Because we need to register something called an “Extension Key” as a vSphere plugin. This XML-based key contains, among some unimportant things, a certificate required for encrypted communication between vCenter and the VSM. The XML file can be downloaded from the VSM’s web server at “https://VSM-Mgmt-Interface-IPAddress”:

SNAGHTML96a5db68

We download this XML file and register it as an extension in vCenter.
We click “Plug-ins –> Manage Plug-ins”, right-click somewhere in the white space and select “New Plug-in”:

SNAGHTML96a92f22

Please observe that I already have one plug-in registered, from my previous installation. That one is not valid for this installation, so we continue with the import wizard. We click “Browse”, select the XML file, view its contents and click “Register Plug-in”. We dismiss the security warning dialog by clicking “Ignore”. We have now successfully registered the plug-in:

SNAGHTML96accab5

SNAGHTML96add530

Step Biii: Connecting Nexus to vCenter

This step is executed from an SSH console on the VSM. We log in with the username “admin” and the password we specified during the installation. Then we set up the connection to vCenter in a two-step process:

Nexus1KV-SPOP#
Nexus1KV-SPOP# conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Nexus1KV-SPOP(config)# svs-domain
Nexus1KV-SPOP(config-svs-domain)# domain id 1
Warning: Config saved but not pushed to vCenter Server due to inactive connection!
Nexus1KV-SPOP(config-svs-domain)# control vlan 141
Warning: Config saved but not pushed to vCenter Server due to inactive connection!
Nexus1KV-SPOP(config-svs-domain)# packet vlan 143
Warning: Config saved but not pushed to vCenter Server due to inactive connection!
Nexus1KV-SPOP(config-svs-domain)# svs mode L2
Warning: Config saved but not pushed to vCenter Server due to inactive connection!
Nexus1KV-SPOP(config-svs-domain)# exit
Nexus1KV-SPOP(config)# exit
Nexus1KV-SPOP#

Here we specify the domain ID, the Control/Packet VLANs, as well as the mode of operation. Now we set up the actual connection parameters:

Nexus1KV-SPOP#
Nexus1KV-SPOP# conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Nexus1KV-SPOP(config)# svs connection vSpop
Nexus1KV-SPOP(config-svs-conn)# protocol vmware-vim
Nexus1KV-SPOP(config-svs-conn)# vmware dvs datacenter-name SPOP
Nexus1KV-SPOP(config-svs-conn)# remote ip address 10.x.y.z   ! vCenter IP address
Nexus1KV-SPOP(config-svs-conn)# connect
Nexus1KV-SPOP(config-svs-conn)# exit
Nexus1KV-SPOP(config)# exit
Nexus1KV-SPOP#

Here we specified a connection name, the protocol used for communication, the datacenter name and the IP address of the vCenter we would like this Nexus to connect to. Finally, we issued the “connect” statement, which attempts to connect to vCenter.

This connection attempt can be observed in the vCenter “Recent Tasks”:

SNAGHTML96b8d6c6

And we indeed have a new Nexus 1000V vDS:

SNAGHTML96ba3154

We can verify this on the Nexus as well and see that this VSM is connected to vCenter:

Nexus1KV-SPOP#
Nexus1KV-SPOP# show svs connections vSpop

connection vSpop:
ip address: 10.x.y.z
remote port: 80
protocol: vmware-vim https
certificate: default
datacenter name: SPOP
admin:
max-ports: 8192
DVS uuid: dc 36 2d 50 96 6e 07 4c-ab 29 f2 26 e8 d3 c5 81
config status: Enabled
operational status: Connected
sync status: Complete
version: VMware vCenter Server 5.1.0 build-799731
vc-uuid: D4B879A2-5E57-45A0-9864-2B1A6277D6C8
Nexus1KV-SPOP#

Perhaps it’s time to save our config and take a short coffee break 🙂

Step Biv: Creating port profiles

At this point we can distinguish two different types of port profiles:

  • Ethernet port profiles – these include physical NICs and are used to connect to physical world
  • vEthernet port profiles – used for connecting VMs to this vDS

Let’s create both… First Ethernet port profile:

Nexus1KV-SPOP#
Nexus1KV-SPOP# conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Nexus1KV-SPOP(config)# port-profile type ethernet UPLINK-PORT-PROFILE
Nexus1KV-SPOP(config-port-prof)# switchport mode trunk
Nexus1KV-SPOP(config-port-prof)# switchport trunk native vlan 99
Nexus1KV-SPOP(config-port-prof)# switchport trunk allowed vlan 10,12,112,141-143
Nexus1KV-SPOP(config-port-prof)# system vlan 141,143
Nexus1KV-SPOP(config-port-prof)# no shutdown
Nexus1KV-SPOP(config-port-prof)# vmware port-group
Nexus1KV-SPOP(config-port-prof)# state enabled
Nexus1KV-SPOP(config-port-prof)# end
Nexus1KV-SPOP#

More or less, this config snippet should look familiar, except for the “system vlan” command, by which we identify the Control and Packet VLANs, and the “vmware port-group” command, which specifies that this port group should be registered within vCenter. After issuing “state enabled”, we can see that the port group is created in vCenter.

SNAGHTML96c86f8c

Once again, let’s not forget to configure the appropriate physical port(s) on the Cisco switch(es) that are going to be used as uplink or trunk ports.
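We can also check the profile from the VSM side. A sketch (output omitted here, as it varies by version; it should show the trunk settings and, later, the interfaces assigned to the profile):

Nexus1KV-SPOP# show port-profile name UPLINK-PORT-PROFILE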

Now we can create a vEthernet port profile:

Nexus1KV-SPOP#
Nexus1KV-SPOP# conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Nexus1KV-SPOP(config)# port-profile type vethernet VLAN12
Nexus1KV-SPOP(config-port-prof)# switchport mode access
Nexus1KV-SPOP(config-port-prof)# switchport access vlan 12
Nexus1KV-SPOP(config-port-prof)# no shutdown
Nexus1KV-SPOP(config-port-prof)# vmware port-group
Nexus1KV-SPOP(config-port-prof)# state enabled
Nexus1KV-SPOP(config-port-prof)# end
Nexus1KV-SPOP#

I guess this is pretty much self-explanatory. After issuing “state enabled” we should see this port group appear in vCenter:

SNAGHTML96d6af79

Of course, all VLANs that are going to be used have to be defined in the physical as well as in the virtual world:

Nexus1KV-SPOP#
Nexus1KV-SPOP# show vlan

VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
1    default                          active
10   VLAN10                           active    Eth4/1, Eth6/1
12   VLAN12                           active    Veth1, Eth4/1, Eth6/1
112  VLAN112                          active    Eth4/1, Eth6/1
141  VLAN0141                         active    Eth4/1, Eth6/1
143  VLAN0143                         active    Eth4/1, Eth6/1

! output omitted

Nexus1KV-SPOP#

Step Bv: Adding VEMs

OK, now it’s time to add some VEMs. We can imagine this procedure as adding line cards to a physical switch. There are several ways of doing it. We could download the appropriate VIB file from the Nexus VSM IP address (like we did with the Extension Key XML file), put it onto the ESXi hosts and do a manual installation. Or, if we have VUM (VMware Update Manager) – and we do – this can be as simple as adding a host to the Nexus vDS, just like we did with the VMWare vDS.
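For reference, the manual route looks roughly like this: copy the VIB to the host and install it from the ESXi shell. A sketch – the file name below is only a placeholder, since the VIB must match your ESXi and VSM versions:

~ # esxcli software vib install -v /tmp/cross_cisco-vem-&lt;version&gt;.vib

We will take the easier VUM route instead.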

We go to “Inventory->Networking”, click “Nexus1KV-SPOP” and click “Add a host”. We should have ready a list of unused physical NICs for each ESXi host we want to participate in this Nexus switch. In my case, this is the mapping:

Server name     Physical NIC
esxabacusbn1    vmnic2
esxabacusbn2    vmnic2
esxabacusbn3    vmnic0
esxabacusbn4    vmnic0

Now we add hosts:

SNAGHTML96e4d82e

We can see that for “esxabacusbn1”, “vmnic2” is selected to be a member of the previously created Ethernet port profile. Please note that all the other servers are selected too, but their NICs are not displayed here, although they are all assigned to the same port profile.

We click “Next”. On the next screen we choose not to migrate any VMkernel port groups now. We can do that later:

SNAGHTML96e9c0a8

Click “Next”. We won’t migrate VM port groups either, so we make sure that the “Migrate virtual machine networking” check box is unchecked. We click “Next”. At the “Ready to complete” screen we observe what is about to happen. We can see that each of the four ESXi hosts contributes one NIC to this vDS:

SNAGHTML96ee1231

Then we click “Finish” and watch for messages in “Recent Tasks” in vCenter:

SNAGHTML96f081e4

Now we can see the status of our virtual supervisor and line cards:

Nexus1KV-SPOP#
Nexus1KV-SPOP# show module
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
3    248    Virtual Ethernet Module           NA                  ok
4    248    Virtual Ethernet Module           NA                  ok
5    248    Virtual Ethernet Module           NA                  ok
6    248    Virtual Ethernet Module           NA                  ok

Mod  Sw                  Hw
---  ------------------  ------------------------------------------------
1    4.2(1)SV2(1.1)      0.0
3    4.2(1)SV2(1.1)      VMware ESXi 5.1.0 Releasebuild-838463 (3.1)
4    4.2(1)SV2(1.1)      VMware ESXi 5.1.0 Releasebuild-838463 (3.1)
5    4.2(1)SV2(1.1)      VMware ESXi 5.1.0 Releasebuild-838463 (3.1)
6    4.2(1)SV2(1.1)      VMware ESXi 5.1.0 Releasebuild-838463 (3.1)

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80  NA
5    02-00-0c-00-05-00 to 02-00-0c-00-05-80  NA
6    02-00-0c-00-06-00 to 02-00-0c-00-06-80  NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
1    10.x.y.38        NA                                    NA
3    10.x.y.41        32333536-3030-5a43-3232-323630313447  esxabacusbn1
4    10.x.y.43        32333536-3030-5a43-3232-323630313443  esxabacusbn3
5    10.x.y.42        32333536-3030-5a43-3232-323630313448  esxabacusbn2
6    10.x.y.44        32333536-3030-5a43-3232-323630313442  esxabacusbn4

* this terminal session
Nexus1KV-SPOP#

From here we can see several important things. First of all, we have our supervisor module powered on in “slot” one. This is our first VSM. Following that, we have four line cards in “slots” 3, 4, 5 and 6. These are the VEMs installed on the ESXi hosts. Please note that “slot” two is not used by the VEMs. It is reserved for the HA VSM that we are going to add later.

One more thing we need to keep in mind: if there are no port groups used by VMs on some ESXi host, we may not see the VEM module associated with that host powered on until we add some VM(s).
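If we want to double-check a VEM from the host side, we can SSH to the ESXi host itself. A sketch (the output format differs between VEM versions, but it should report the module version and whether the VEM agent is running):

~ # vem status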

Before we go to the last step, which is adding HA functionality, we can move one of our VMs to the Nexus and make sure everything is as expected. Let’s try “W7-1” again:

SNAGHTML96f6fd99

At this time we have a fully functional Nexus 1000V in our datacenter. One big step remains – adding HA functionality.

Step Bvi: Setting up HA

The process of installing the secondary VSM is pretty much the same as with the primary VSM, with some minor configuration differences. But before we begin, let’s verify our current HA status:

Nexus1KV-SPOP#
Nexus1KV-SPOP# show system redundancy status
Redundancy role
---------------
administrative:   primary
operational:   primary

Redundancy mode
---------------
administrative:   HA
operational:   None

This supervisor (sup-1)
-----------------------
Redundancy state:   Active
Supervisor state:   Active
Internal state:   Active with no standby                 

Other supervisor (sup-2)
------------------------
Redundancy state:   Not present
Nexus1KV-SPOP#

As we can see, we have only the primary VSM and no secondary. Let’s begin by deploying the OVF/OVA template again, but this time in the wizard we name this VSM VM “Nexus1000V-SEC” and select “Nexus 1000V Secondary” as the type of deployment:

SNAGHTML9b3ed049

Next we select the datacenter, the cluster and an ESXi host. Of course, we need to place this VSM on a different host, otherwise we would not have HA. Then we select the datastore, the type of disk and the Control/Management/Packet VLANs. We need to specify the domain ID (it must be the same as on the primary VSM) and the admin password. The management IP parameters need to be specified as well. After we click “Finish”, we wait for the deployment to complete.

That’s all, believe it or not! We just need to power this VSM on and wait for it to synchronize with the primary VSM. After a while, HA should be working:

Nexus1KV-SPOP#
Nexus1KV-SPOP# show system redundancy status
Redundancy role
---------------
administrative:   primary
operational:   primary

Redundancy mode
---------------
administrative:   HA
operational:   HA

This supervisor (sup-1)
-----------------------
Redundancy state:   Active
Supervisor state:   Active
Internal state:   Active with HA standby                 

Other supervisor (sup-2)
------------------------
Redundancy state:   Standby
Supervisor state:   HA standby
Internal state:   HA standby
Nexus1KV-SPOP#

Now, if we lose the primary VSM, we are still able to configure the Nexus switch. Even if we lose both VSMs, our virtual infrastructure keeps working: we will not be able to make any changes to the networking, but the VMs will still be able to operate.
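If we want to actually exercise the failover, NX-OS lets us force a switchover from the active VSM. A sketch – do this in a maintenance window, as the SSH session will drop; we then reconnect to the same management IP, now served by the other VSM:

Nexus1KV-SPOP# system switchover

Afterwards, “show system redundancy status” should show the roles swapped.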

Finally, our Nexus should look like this (as far as modules are concerned):

Nexus1KV-SPOP#
Nexus1KV-SPOP# show module
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
2    0      Virtual Supervisor Module         Nexus1000V          ha-standby
3    248    Virtual Ethernet Module           NA                  ok
4    248    Virtual Ethernet Module           NA                  ok
5    248    Virtual Ethernet Module           NA                  ok
6    248    Virtual Ethernet Module           NA                  ok

Mod  Sw                  Hw
---  ------------------  ------------------------------------------------
1    4.2(1)SV2(1.1)      0.0
2    4.2(1)SV2(1.1)      0.0
3    4.2(1)SV2(1.1)      VMware ESXi 5.1.0 Releasebuild-838463 (3.1)
4    4.2(1)SV2(1.1)      VMware ESXi 5.1.0 Releasebuild-838463 (3.1)
5    4.2(1)SV2(1.1)      VMware ESXi 5.1.0 Releasebuild-838463 (3.1)
6    4.2(1)SV2(1.1)      VMware ESXi 5.1.0 Releasebuild-838463 (3.1)

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80  NA
5    02-00-0c-00-05-00 to 02-00-0c-00-05-80  NA
6    02-00-0c-00-06-00 to 02-00-0c-00-06-80  NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
1    10.x.y.38        NA                                    NA
2    10.x.y.38        NA                                    NA
3    10.x.y.41        32333536-3030-5a43-3232-323630313447  esxabacusbn1
4    10.x.y.43        32333536-3030-5a43-3232-323630313443  esxabacusbn3
5    10.x.y.42        32333536-3030-5a43-3232-323630313448  esxabacusbn2
6    10.x.y.44        32333536-3030-5a43-3232-323630313442  esxabacusbn4

* this terminal session
Nexus1KV-SPOP#

Now we can see both supervisors and all four line cards powered on and active. Note that we always use one IP address to access the Nexus. The secondary VSM has its own address, but we use the one we specified when installing the primary, regardless of which VSM is active.

One final step we should do:

Nexus1KV-SPOP#
Nexus1KV-SPOP# copy run start
[########################################] 100%
Nexus1KV-SPOP#

I believe that in California they say “Like, totally success, dude!” 🙂

We should still test an HA failover scenario and perhaps migrate the rest of our VMs.

At this moment we have all three types of switches – vSS, VMWare vDS and Cisco Nexus 1000V vDS – and can compare their functionality:

SNAGHTML9b5a3e76

SNAGHTML9b5b19a8

This blog was huge! I hope I didn’t make any mistakes and that it will be useful to someone.

Thanks for reading!


7 Responses to VMWare vSphere Distributed Switch – Cisco Nexus 1000V

  1. vtote says:

    Nice article, very well put together. I have for a long time been fighting the install of nexus 1000v as in my opinion it simply adds complexity (I am / was a CCNA). Unless there is a feature set that is vital it is not worth it in my opinion.

    • Sasa says:

      I can agree with the fact that this introduces more complexity. However, if you are familiar with Cisco products or you plan to install some products later on, such as ASA 1000v, then Nexus 1000v is a “want” or a “must”.

      And then, there is this thing called “role separation”.

  2. Pingback: Given a Set of Network Requirements, Identify the Appropriate Distributed Switch Technology to Use

  3. Radovan says:

    Hi Sasa,
    I find this entry, and the blog overall very informative and helpful. Keep ’em coming!
    And I have a question:
    Did you explore the pros and cons of L2 vs L3 control mode deployments for Nexus 1000v?
    Even better, did you manage to test both modes?
    Thanks!

    • Sasa says:

      Unfortunately, I never did an L3 deployment. They say that’s the deployment of the future and that L2 will become obsolete. Maybe I’ll get a chance to deal with an L3 deployment some day 🙂 One apparent advantage that L3 has over L2 is that the VEM and VSM do not have to be in the same subnet.

  4. Nawin says:

    Nice article…. Very helpful…..

  5. Abid Abdul Latif says:

    Another implementation scenario is when setting up Cisco VSG .. Good article .. thanks for sharing
