VXLAN – One Tunnel to Rule Them All

I often feel like half my job is just keeping up with the new advancements and offerings that constantly churn through my inbox. I think the blogosphere likes to call it… “Disruption.” Some trends don’t make it much further than the marketing department’s brainstorming session, but some actually stick and become widely accepted practice.

From my perspective, the trends that don’t persist are the ones that don’t solve a “real” problem – sure, there might be marginal benefits to the technology, but not enough to change buying behaviors. The trends that stick are the ones that actually solve a big underlying issue, opening up greater efficiency and productivity. Server virtualization is a great example of a trend that “stuck.” When virtualization was first getting started, there were a lot of naysayers and many who didn’t like how it was changing the datacenter, but it made a very compelling argument for itself with the resiliency, power savings, and hardware savings that it offered. As a result, server virtualization is more or less standard these days, with software-defined storage following close behind.

There is one element of IT that is lagging heavily when it comes to modernization, though – networking. If you look at some underlying principles behind network engineering, it makes sense. Networking at its core is a distributed communications system that uses very established protocols. No one entity “owns” the internet, nor are updates rolled out across the world simultaneously. Tinkering with a protocol to “improve things” is more likely to cut you off from the rest of the world than help modernize your infrastructure. And in the case of misconfiguration, a network outage is HIGHLY visible – if the network is down, so is everything else!

So, for those reasons, the network industry has been historically resistant to change. Configurations are still often done via the CLI. There is a steep tribal knowledge barrier to entry, and because networks can be taken down by forgetting a single keyword, any changes made to the network are tightly regulated. Networking follows the “if it ain’t broke, don’t fix it” mantra and that mantra has served it pretty well… for the most part.

I know I’m venturing into generalization, but I think networks are happiest when they are architected, configured… and then left alone. Nobody wants to be constantly logging in to the datacenter network infrastructure and making changes, given the massive blast radius when human error inevitably strikes. Human error is one of the leading causes of network outages, after all.

So, this mindset causes several issues when working with large “webscale” datacenters. Some of the key advantages that a virtualized datacenter brings are resilience, efficiency, and optimization. If VMs start to run into problems at the host level, a well-designed VMware environment will work around them seamlessly – via vMotion, HA, DRS, and other key technologies – and the users will never realize what’s happening behind the scenes. They just know that everything works and will continue to work. The VMware datacenter can be constantly refreshing, updating, optimizing, shuffling, and so on – all in the name of user experience and efficiency!

The problem is – how does this interact with a network that is designed to be static? Often the network is statically configured in a ‘set-it-and-forget-it’ style. VMware doesn’t natively have a great way to tell OSPF that a VM in subnet A has moved to the other side of the datacenter because it will run better over there. So, instead the virtualization engineers asked the network engineers to extend L2 everywhere so communication wouldn’t be broken… and the network engineers wailed and gnashed their teeth, but it was all for naught as the project had already been approved and the network engineers hadn’t been invited to the initial planning meetings. L2 extension does allow VMware to be as dynamic as it pleases, because RARP allows VMs to move around on the network and they don’t have to be re-IPed each time they move (which would cause all kinds of complexity with user access as well)… but it comes at a cost.

Those of you familiar with networking no doubt recognize some of the pain that is inherent with L2 communications. It’s fast, sure – but in its raw state, volatile. An L2 segment can often rightly be considered a failure domain. Broadcast storms cause headaches, MAC tables have hardware limitations, some flavor of STP is a necessity, and so on. L2 is only meant to have a single active path for data, which throws conventional redundancy options out the window. L3 by comparison is stable and resilient. During my slow, steady march to the dark side of datacenter networking I had to learn all these complex ways that companies were trying to solve the L2 datacenter problem in network hardware – SPB, TRILL, FabricPath, VPLS – the hits kept on coming. And frankly, while I know that these are running happily in many production environments, they all seemed very complex to me, and they required some expensive hardware and licenses just to function. (And to make matters worse, the one that made the most sense to me, TRILL, developed by Radia Perlman, who invented the original Spanning Tree Protocol, was just announced as dead by some rather prominent individuals in the networking space due to vendor ASIC limitations. That’s a bummer.)

I know that following my thought pattern is a bit of a rabbit trail here, but recall what I mentioned at the start of this blog – in order for a technological trend to catch on, it has to solve a real problem. And hopefully I’ve illustrated that there is a real problem when pairing a dynamic environment with an environment that resists change.

Enter VXLAN, a newer protocol that is gaining a lot of steam. At its core, VXLAN is another tunneling protocol – it encapsulates Layer 2 frames inside UDP packets. It orchestrates this through VXLAN Tunnel End Points (VTEPs), which act as entry points into the datacenter fabric and construct and tear down tunnels between each other to shuttle traffic around. The concept of tunneling isn’t new, and VXLAN is not vendor proprietary; it was originally created in collaboration by Arista, Cisco, and VMware (among others) and is utilized by many vendors today. It allows the creation of an “Overlay” network that can be defined in software and runs over the “Underlay” network that is built from physical hardware.
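
To make that a little more concrete, here is a rough sketch of the encapsulation a VTEP performs, using nothing but the Python standard library. It only builds the 8-byte VXLAN header defined in RFC 7348; a real VTEP would also prepend UDP (destination port 4789), IP, and a new outer Ethernet header, for roughly 50 bytes of total overhead. The frame contents and VNI below are placeholders.

```python
import struct

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Wrap an Ethernet frame in an 8-byte VXLAN header (RFC 7348 layout)."""
    flags = 0x08                                     # I bit set = "the VNI field is valid"
    header = struct.pack("!B3xI", flags, vni << 8)   # flags, 24 reserved bits, 24-bit VNI, 8 reserved bits
    return header + inner_frame                      # a real VTEP then adds UDP/IP/Ethernet on the outside

encapsulated = vxlan_encap(b"\x00" * 64, vni=5001)   # dummy 64-byte frame on an illustrative segment
```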

The “Underlay” network can be constructed with a rock-solid, resilient L3 design. Network engineers only have to ensure that there is basic IP connectivity across the datacenter and that the MTU is large enough to accommodate the roughly 50 bytes of encapsulation overhead that VXLAN adds to each frame. Once the Underlay is in place, it rarely has to change.
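
The MTU requirement is simple arithmetic – assuming a standard 1500-byte payload and an untagged outer Ethernet header, the math looks roughly like this (add 4 bytes if the underlay carries an 802.1Q tag):

```python
overhead = 14 + 20 + 8 + 8      # outer Ethernet + outer IPv4 + UDP + VXLAN header = 50 bytes
required_mtu = 1500 + overhead  # 1550 -- which is why an underlay MTU of 1600+ is the usual guidance
print(required_mtu)
```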

The “Overlay” network is where the real magic happens. Using VXLAN, software can rapidly construct tunnels that ride on top of the “Underlay” network, shaping the network as required by applications. If a VM needs to think that it has L2 adjacency to a VM on the other side of the datacenter, the VTEPs can construct a L2 tunnel between the two VMs. No hardware changes are required and again, the physical network hardware only needs to give basic IP connectivity between the two hosts. Even better, because the VTEPs in software keep track of MAC address tables and L2 connectivity, your hardware only needs to remember where the VTEPs themselves are and how to move traffic between the VTEPs. Everything else is out of sight and out of mind of your physical infrastructure. The real weight of the MAC address table is now in software, rather than inflating the cost of our hardware.
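
A toy model of the state a VTEP actually holds might look like the sketch below – the mapping of workload MACs to remote VTEP IPs lives in software, while the physical fabric only ever needs routes to the VTEP addresses themselves. The addresses and table structure here are illustrative, not any particular vendor’s implementation.

```python
# (VNI, inner MAC) -> IP of the VTEP that currently "owns" that MAC on the underlay
forwarding_table = {
    (5001, "00:50:56:aa:bb:01"): "10.0.0.11",
    (5001, "00:50:56:aa:bb:02"): "10.0.0.12",
}

def next_hop_vtep(vni, dst_mac):
    # Known unicast: tunnel straight to the owning VTEP.
    # Broadcast/unknown traffic would instead be replicated to every VTEP in the VNI.
    return forwarding_table.get((vni, dst_mac))

print(next_hop_vtep(5001, "00:50:56:aa:bb:02"))   # 10.0.0.12 -- the hardware only needs a route to that IP
```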

So, as I mentioned, there are several solutions out there that utilize VXLAN – Arista, Cisco’s ACI, VMware NSX, and others. We at Edge are particularly excited to be working with the VMware NSX platform (I believe almost our entire engineering and technical team holds the VCP-NV badge of honor, and yours truly is aggressively pursuing it). VMware has been virtualizing servers and abstracting storage for years now; it only makes sense that they would also tackle the last piece of the trifecta – the network!

VMware networking in its raw state has some limitations. First, the standard vSS/vDS is not able to route traffic – it can only switch traffic. This results in some less-than-ideal traffic patterns, as traffic is hairpinned on the physical network to move between VMs in a segmented multi-tier application. In addition, the physical network has to be configured to support changes in the virtualized network environment… and if network change control is in place (as it should be), this can take a long time!

VMware NSX solves these issues by bringing routing, security, edge services, load balancing, NAT, and more down into the virtualized environment. Virtualization administrators can now spin up all the network services that they need within the virtual environment. By bringing L3 intelligence all the way down into the host, the hairpinning problem is solved. By enforcing network traffic policies at the vNIC level, even L2 East/West traffic is now secured.

As a result, the physical network is more or less abstracted with VMware NSX. If VMs need to be on the same subnet to operate but they get moved to separate hosts, VTEPs on each host construct a VXLAN tunnel between the two hosts to allow the traffic to pass through as if they were still L2 adjacent. These tunnels are architected by the VMware NSX platform in software, allowing them to fulfill the promise of a Software Defined Data Center.

VMware NSX consists of several key components:

  • NSX Manager
  • NSX Controllers
  • NSX vSwitch
  • NSX Edge Services Gateway

The NSX Manager integrates with vCenter Server to coordinate management across the VMware environment. It’s important to note that if the NSX Manager goes down or otherwise becomes unavailable, NSX will continue to function – the data plane keeps forwarding traffic; you just can’t push configuration changes until the Manager is back.

The NSX Controllers share the burden of the control plane and coordinate network functions across the vSphere environment, ensuring that changes are kept in sync.

The NSX vSwitch is where L2, L3, and firewall decisions are made – within the host! This piece of NSX is responsible for a great deal of what gives the platform its “teeth,” so to speak.

Finally, the NSX Edge Services Gateway provides additional services that are not included in the vSwitch – services like IPsec VPN, NAT, load balancing, DHCP, DNS relay, and more. Edge Services Gateways can also be deployed as Perimeter Edges, which bridge the gap between the physical network and your virtual environment. Using dynamic routing protocols like BGP, OSPF, or IS-IS, they can coordinate changes between the two environments and advertise any changes you make in the NSX environment.

All these components work together to create a software-defined data center. Finally, the last piece of the puzzle, networking, has been virtualized and can collaborate properly with the VMware environment. It’s starting to take off, too – if you watched VMware’s earnings report a few weeks ago, you’ll know that NSX is on track to be a billion-dollar product line by the end of 2017. That’s probably artificially boosted by the steep price tag that NSX carries, but hey, still impressive.

There are many other benefits that NSX offers the datacenter that I haven’t covered in this blog post – service insertion for traffic filtering, microsegmentation, and so on – and there are many different ways that it can be implemented. Really, I’ve only scratched the surface. But I’ve rambled on more than long enough at this point. If you’d like to learn more about VMware NSX and start going into specifics about how it will interact with your environment – drop me a line and I’d be happy to compare notes. I have a bunch of big books filled with all the specifics, fine print, addenda, and nerd knobs.

It’s 2017 – Stop Guesstimating!

I’ve been involved in a number of planning meetings for IT infrastructure refreshes – servers, storage, WAN, networking, wireless, you name it. Most of the time the requirements are more or less well-defined and well thought out… number of hosts, exact processors, memory requirements, connectivity to the storage arrays, VLAN assignments, and so on. All the I’s have been dotted and the T’s have been thoroughly crossed. But there’s always one noticeable outlier – the wireless network.

“Oh and… maybe 10 APs? Maybe 12 just to be safe.”

It’s 2017 and we can simulate just about everything. Why are we still using primitive rules of thumb for the wireless network?

  1. If you guess too high, you’re out a sizeable chunk of money, and you may not even get additional bandwidth.
  2. If you guess too low, you’re going to have slow/dead zones. And good luck getting management to not only buy more APs in a separate purchase, but also to pay for the cabling.
  3. Co-channel interference (CCI) is a real thing. Adding more APs does not necessarily add bandwidth. It’s the laws of physics!
  4. Just “cranking up the coverage” doesn’t always work either. Client radios are only so strong, and the conversation has to flow in both directions.
  5. It’s ill-advised to go with the “low capacity” model and just throw extra APs up in high-density areas as needed, again due to the laws of physics. Be sure to choose beefy APs for the auditoriums, gymnasiums, and so on.
  6. One AP per room is not a valid design principle.
  7. A design will help you determine how to balance building for 5GHz coverage while not creating a trainwreck in the 2.4GHz spectrum (hint: spectrum analysis isn’t a bad thing to have running in the background). See the quick path-loss comparison after this list for why the two bands behave so differently.
  8. Wireless is how we connect now. If your wireless performs poorly, it doesn’t matter how slick a datacenter you’ve built to serve up the applications, people will only remember that it didn’t load properly.
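
To put a number on the 2.4GHz/5GHz point above, here is a quick free-space path loss comparison (ideal conditions – no walls, no people; a real predictive survey models all of that too):

```python
import math

def fspl_db(distance_m, freq_mhz):
    # Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44
    return 20 * math.log10(distance_m / 1000) + 20 * math.log10(freq_mhz) + 32.44

print(round(fspl_db(20, 2437), 1))   # ~66.2 dB at 20 m on 2.4GHz channel 6
print(round(fspl_db(20, 5220), 1))   # ~72.8 dB at 20 m on 5GHz channel 44
# 5GHz loses roughly 6-7 dB more over the same distance, so a design that "looks fine"
# on one band can be a mess on the other.
```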

There are programs available that can fully simulate wireless propagation through all kinds of floor plans and allow you to effectively preview the performance of your wireless network, all from the comfort of your own home. While it’s still advised to take some onsite measurements in complicated deployments, a predictive survey will give you a much better starting point than the infamous “1 AP per room” design.

I’ve been using Ekahau Pro myself and it’s been a life saver.

You probably won’t be refreshing the wireless infrastructure for another five years – why not ensure that it’s going to work well?

Thanks for reading this slapdash, semi-early-morning rant. And no, I am not being compensated by Ekahau for this blog.

General Blogspam

Sorry it’s been so quiet over here lately. I’ve been engaged in several new projects that are taking up the majority of my time. I don’t have anything to share with you today that I’VE created… but I have something better! Something that OTHER people have created!

Here are a few articles and videos from around the web that I’ve particularly enjoyed:

First, a good overview of the inherent issues with long distance vMotion. I’ve been doing a deep dive into VMware lately to better understand exactly why it asks for unfun things from the network, like stretched L2. This sums up my feelings nicely:

Long Distance vMotion is a Dumpster Fire by Tony Bourke (he has a lot of other great videos)

Second, a nice whiteboard session into VMware NSX:

VMware NSX Overview by Rob Riker

Third, a good read on the state of BGP as the internet fabric:

BGP in 2016 by Geoff Huston

As far as my latest tinkerings, I’ve had the opportunity to play with a Wi-Fi Pineapple by Hak5. It’s a neat little device that lets you launch all kinds of wireless attacks. I don’t plan on doing anything all that nefarious, but I do have several interesting packet captures that I’ll likely be blogging about soon.

As mentioned above, I am working towards the VCP-NV so I can better understand the intricacies of datacenter requirements and how NSX ticks. It’s eye-opening stuff coming from a campus background.

HPE has announced that they are moving away from the Comware product line and on to the Arista line for any datacenter opportunities. That’s a bit frustrating as I spent a great deal of time learning the Comware side of the house, but I’ve heard a lot of good things about Arista. HPE was kind enough to set me up with a free three day boot camp for Arista in the next several weeks; more to come on that.

Finally, I am happy to announce that Edge has invested in a full wireless site survey kit with Ekahau, Metageek, and more! I’ve used these platforms in the past, but I’m taking time to brush back up on them now. Edge is now able to offer full professional predictive surveys, onsite assessments, hybrid surveys, spectrum analysis, and more. Even better, I already have projects to work on. I’m excited to be getting back to the wireless side of things!

Exploring EAP

As discussed in an earlier blog, WPA2-Enterprise allows a network to authenticate each user with unique credentials, rather than a blanket passphrase. This is done through an EAP exchange over the 802.1X framework. We’ve touched on this briefly in a post about RBAC, but I wanted to take some time to review two of the different flavors of EAP so you can make an informed decision about which one is best for your organization.

There are two predominant EAP methods being deployed today – PEAP and EAP-TLS.

Protected Extensible Authentication Protocol (PEAP)

PEAP technically has multiple versions, but they all use the same basic “protected” transfer structure. The issue with legacy EAP was that the supplicant’s credentials were transmitted in cleartext… which is less than secure, because it opened up the potential for someone to intercept usernames. PEAP corrects this by using two EAP exchanges – outer authentication and inner authentication. The outer authentication is a “dummy” EAP exchange (often using a throwaway or anonymous username) that is sent in cleartext and used to construct a temporary TLS tunnel. Once that encryption cover is in place, the inner authentication takes place securely inside the tunnel, out of sight of roaming sniffers. Can you see why it’s called “Protected”?
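
Conceptually, a PEAP supplicant profile splits into the two halves described above. This is just an illustrative data structure, not a real supplicant API – the field names are mine:

```python
peap_profile = {
    "outer_identity": "anonymous@example.com",   # sent in cleartext; deliberately reveals nothing useful
    "server_ca_cert": "corp-radius-ca.pem",      # used to validate the RADIUS server before going further
    "inner_method":   "EAP-MSCHAPv2",            # runs only inside the TLS tunnel
    "inner_identity": "jsmith",                  # the real username, protected by the tunnel
    "inner_password": "********",                # likewise never visible to a sniffer
}
```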

The most common version of PEAP is PEAPv0 with EAP-MSCHAPv2 as the inner authentication protocol, which utilizes a username and password as the credential. When using PEAP you will need to provide a server-side certificate and set up a repository of usernames and passwords for individual client authentications. When your users connect to the network they will validate the server via the installed certificate and (if the trust is there) provide their unique credentials.

This option is popular because PEAP is widely supported by client devices and it allows you to easily utilize an existing database, like Active Directory. Your end users are already used to logging in to the domain using their AD credentials, so asking them to use those same credentials to log on to the network isn’t a stretch.

Extensible Authentication Protocol Transport Layer Security (EAP-TLS)

EAP-TLS is one of the most secure authentication methods available today. Rather than using the standard username/password combination for authentication, EAP-TLS requires both a server-side and a client-side certificate. This means that you can avoid some of the risks inherent to passwords, like a user entering their credentials to connect to the network with an unapproved device or sharing their credentials with another employee. In order to connect to a network secured with EAP-TLS, the client must have been provisioned a unique certificate by the network’s public key infrastructure (PKI).
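
Under the hood, EAP-TLS is essentially mutual TLS carried inside EAP frames. The sketch below shows the same trust model using Python’s standard ssl module – a server certificate plus a mandatory, CA-signed client certificate – not the actual EAP/802.1X framing. The file names are placeholders for whatever your PKI issues.

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="auth-server.pem", keyfile="auth-server.key")  # server-side certificate
ctx.load_verify_locations(cafile="corp-ca.pem")   # the CA that issued your client certificates
ctx.verify_mode = ssl.CERT_REQUIRED               # no valid client certificate, no connection
```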

The advantage of this level of security is also its inherent disadvantage. It can be a chore to implement a PKI if you don’t already have one in place. Provisioning this certificate-based trust relationship to the client can be a straightforward process with Active Directory’s Group Policy when using Windows hosts, but it can introduce complications with some brands of smart devices unless you have a provisioning mechanism in place… manually installing certificates onto smartphones can be above the skill level of some users, and it will bog down your IT department significantly.

Finally, if you do choose to go with EAP-TLS as your EAP structure, be sure to put some heavy security around the server that is housing your PKI.

I don’t mean to scare you away from EAP-TLS, as it is the definitive secure authentication solution. You should just be aware that this security comes at an administrative cost. If you want to go with EAP-TLS I would recommend looking into onboarding software to help provision certificates to your endpoints, like Aruba’s ClearPass or Cisco ISE.

EAP Options to Avoid:

There are many many many other EAP flavors available, too many to effectively cover in a single blog post. PEAP and EAP-TLS are the ones that I see the most often in the enterprise, with the occasional EAP-TTLS popping up here and there. However, there are some legacy EAP options with noticeable security flaws that are worth covering here – less to help you plan for the future and more to help you steer clear of them!

EAP-MD5 – This was one of the first EAP options available. It has several major weaknesses – first, it does not use tunneled authentication, meaning that the username is transmitted in cleartext. Second, it relies on MD5 hashing, which is a weak, long-broken algorithm. Third, it does not validate the server, opening you up to man-in-the-middle (MitM) attacks.
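
To see why that combination is dangerous, consider what an attacker can do with a single sniffed exchange. EAP-MD5 reuses the CHAP construction (an MD5 hash over the identifier, the password, and the challenge), so the password can be brute-forced entirely offline – a simplified sketch for illustration, not a tool:

```python
import hashlib

def eap_md5_response(identifier, password, challenge):
    # CHAP-style response: MD5(identifier byte + secret + challenge)
    return hashlib.md5(bytes([identifier]) + password + challenge).digest()

def offline_dictionary_attack(identifier, challenge, sniffed_response, wordlist):
    for guess in wordlist:
        if eap_md5_response(identifier, guess.encode(), challenge) == sniffed_response:
            return guess   # password recovered without sending another packet
    return None
```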

EAP-LEAP – The username is transmitted in cleartext (handy for social engineering), and the MS-CHAP-style challenge/response it relies on can be cracked with offline dictionary attacks (see the ASLEAP tool). There are better options out there.

Thoughts on the CWSP…

So, today I passed the Certified Wireless Security Professional (CWSP) exam. For those of you not familiar with the CWNP program, it’s an intensive vendor-neutral certification path that delves deeply into 802.11 tech… VERY deeply. It’s been very beneficial for my career, and it’s one of the few educational courses that I truly enjoy. Anyone interested in learning more about how wireless ticks should take a look at the CWNA at least.

The CWNP program begins with the CWNA as the foundational wireless cert, then it branches into three separate specializations – Security, Analysis, and Design. Once all four exams are completed and a lengthy application submitted (essay questions and all), you can become a candidate for the Certified Wireless Network Expert designation. It’s pretty elite, with only around 150 CWNEs in the USA. I’m gunning to complete the CWNE application by the end of 2017.

For the CWSP course, I used the following resources:

  • CWSP Official Study Guide PW0-204
  • CWSP Official Study Guide CWSP-205
  • Extensive use of the Sybex online companion included with the CWSP-205 book
  • Sample tests available directly from CWNP

I did NOT use the Certitrek guide published in 2015, as those books have not been well received.

The CWSP course covers wireless encryption methods, EAP, fast roaming mechanisms, the different handshakes and key hierarchy, RADIUS, LDAP, MDM, and much more. The book was great. And HARD. Lots of detail to sink your teeth into. I had some issues with incorrect questions on the Sybex portal, so just remember that you can’t 100% trust what their exams tell you when you do your review.

As for the exam?

Honestly, it was disappointing. I won’t give much away, but it was way too easy compared to the level of knowledge demanded from the study guides… and I do have other concerns that I don’t want to share in a public forum. But don’t let that scare you away from the course – the knowledge gained was great, but I personally feel that the ease of the exam cheapened the certification a bit.

To those of you looking to take the exam, keep hammering through the practice exams from the CWSP-205 book. Once you can pass them reliably, you’ll be more than ready for the real thing.

What’s next for me? I had planned on taking the Wireshark course next to get more familiar with packet analysis, but my work is requesting that I chew through the VCP-NV next, so packet analysis will have to wait a month or two.

On to VCP-NV, WCNA, CWAP, then CWDP!

Merry Christmas to all.

Software Defined Stuff

So, this is going to be a bit of an informal blog post, but I’m in the middle of a weeklong boot camp for VMware NSX and I wanted to share a few things I’ve learned with the internet.

First, this is one of the first times that I’ve had a chance to get down-and-dirty with VMware. Shameful, I know, but I’ve historically been more focused on Layers 1–4. So I appreciate the chance to see things from the other side and broaden my horizons.

Second, and this was the topic of much debate in the class, I think that NSX is going to require that the virtualization team learn networking. One of the big bottlenecks in IT without NSX in place is waiting on the network team to make changes to the physical infrastructure to accommodate the changes put in place at the virtual level. NSX now allows the virtualization team to deploy virtual switches, routers, and firewalls (making L3 routing between VMs in separate subnets possible in a strictly virtualized environment without having to hairpin on the physical network) AND it abstracts out the hardware layer when traffic needs to go between hosts. Frankly, the networking team is not going to be the one logging in to VMware and creating new virtual switch port groups every time a change needs to be made… that’s going to be the virtualization team.

As a side note, NSX’s virtual environment still plays by the same rules as physical networks, so the networking knowledge that you’ve obtained will not go to complete waste in this “bold new future.” OSPF still has all the same timers and requirements, you still need to redistribute routes, subnetting didn’t go away, and good old BGP is still trundling along. And yes, the VMware virtual router can interact with physical routers, creating adjacencies and sharing routes and all that good stuff.

Third, in a completely virtualized environment, this really simplifies the maintenance and design of the physical network. NSX uses VXLAN tunnels to achieve L2 adjacency between VMs. These tunnels are automagically constructed in software, and can be built up and torn down regardless of where the VM winds up in the datacenter, as long as there is basic connectivity between hosts. When I was working through the MASE exam earlier this year, I felt like HPE was shouting to the heavens “Look at us! We can do all this finicky L2 extension to satisfy that blasted vMotion requirement, same as everyone else! We have TRILL! We have SPB! We have VPLS! We have EVI!” Well, now it seems to me that all that doesn’t matter as much anymore, because all NSX requires to simulate L2 adjacency across the datacenter is underlying IP connectivity and an MTU of 1600 or more. You can picture the physical network as a rock-solid underlay that shouldn’t require tweaking after the initial setup and NSX as the overlay that constructs tunnels over top of it. The networking team of yesteryear can create one big BGP datacenter and let it run… and NSX will do what it needs to do without any outside intervention required.

Fourth, I would highly recommend taking a look at this technology or a similar tech if you’re a network engineer. I know, I know, it’s marketing heavy and virtualization of the network isn’t happening as quickly as any of the highly paid experts predicted, but it is happening. Personally, I would rather be the one having fun designing the network layout in VMware rather than be the one ensuring that the physical underlay is still in place.

Early morning slap-dash blog post with my two cents, make of it what you will.