Islands of Cisco ACI
If I understand correctly, Cisco sees continued use of existing products like the Nexus 7k and ASR9k to provide core routing functionality for “Pods” of ACI-enabled networking on the new Nexus 9k platform. In particular, the 7k and ASR would handle the data center interconnect (DCI) part of the equation with technologies like MPLS and Overlay Transport Virtualization (OTV), functions that are unlikely to ever natively arrive in the Nexus 9k platform.
This raises a few interesting questions.
If I understand correctly, the Nexus 9k has an intentionally limited feature set. Remember, the NXOS code on the 9k apparently has a code base around 50% smaller, so adding in all those other functions would probably undo that. Can I, then, have a fully Software Defined Network in a given data center? It sounds to me like we will end up with islands of Nexus 9k running ACI, but still need to maintain our legacy core and WAN infrastructure.
Within each island, that’s not a problem, but what about managing traffic between islands? What about managing traffic between data centers? As I understood it, the APIC instance discovers and controls its own local network fabric. By that logic, I will need to deploy a resilient APIC control plane for each island of ACI that I have. Do they talk to each other? Can they? And with a non-ACI island in the middle, can either of them influence traffic going through the legacy core?
To stretch the island analogy almost transparently thin, is this “Ships in the Night SDN,” where each island (pod) is optimized but chaos reigns once you need to leave the island? I’d like to know how Cisco plans to take the next step up the chain and manage the entire infrastructure, perhaps in a hierarchical controller fashion. I’d at least like to know if there’s a solution on the horizon.
Imagine you have a network with Nexus 7k, maybe with Nexus 5k/2k in play as well. Where does the 9k fit in your network without throwing out existing hardware? You may well want the benefits of ACI, but since you can only get that with the new Nexus 9k, it’s going to be a hard sell.
On the up side, it was a stroke of genius to allow NXOS on the Nexus 9k to run in Classic Mode or ACI-enabled Mode, so that at the very least the hardware can be deployed simply because it provides great-value 40G switching capabilities. It sounds like my core and WAN aggregation layers might be safe from replacement, but when we’ve barely got scratches on our existing Nexus hardware, I can’t imagine jumping for the new product except in an expansion scenario.
One hope for integration across the DC was the promise that the open Southbound APIs that the Nexus 9k supports will also make their way to the rest of the Nexus range. Specifically, this is the idea that you are no longer limited to talking “through” onePK; instead, the APIs will be provided natively. This is huge, in my opinion. And perhaps this is how some higher-level controller could leverage the islands of ACI (and their APICs’ data) to coordinate over a wider topology.
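To make the “talk to the switch natively” idea concrete, here’s a minimal sketch of building a request body in the JSON-RPC format that Cisco documents for NX-API CLI calls on the Nexus 9000. The exact field names and the `/ins` endpoint are taken from my reading of the NX-API documentation, so treat this as illustrative and check the reference for your NXOS release before relying on it:

```python
import json

def nxapi_cli_payload(command, msg_id=1):
    """Build a JSON-RPC request body for a single NX-API CLI call.

    The 'cli' method and parameter names follow Cisco's published
    NX-API format; verify them against the NX-API reference for
    your NXOS release.
    """
    return [{
        "jsonrpc": "2.0",
        "method": "cli",
        "params": {"cmd": command, "version": 1},
        "id": msg_id,
    }]

# This body would be POSTed to http://<switch>/ins with
# Content-Type: application/json-rpc (e.g. via the requests library).
body = json.dumps(nxapi_cli_payload("show version"))
```

The point is simply that a controller (or a script) can drive the switch over plain HTTP and JSON, with no onePK agent in the middle.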
Is Nexus 9k The Only ACI Platform?
Right now, if you want ACI, there’s only one platform: the Nexus 9k. Will that always be the case? Well, Cisco suggests that the ACI capabilities will not be available in the Nexus 7k, but I don’t know whether to believe that (or whether minds will change later on). We were told when the Nexus 7000 launched that it would not have service modules (presumably another great decision from Cisco’s Switching Council at the time), but in February 2013 I wrote about the launch of the NAM-NX1, a NAM module for the Nexus 7000. We were told that the Catalyst 6500 was out of steam and the Nexus 7k was the logical progression for that role; then the Sup-2T was released and, subsequently, the Catalyst 6800. So when we’re told that ACI is exclusive to the Nexus 9000 platform, should we believe it?
The custom ASIC that Insieme brought to the Cisco product line resides on the line cards. On that basis, would it not be possible to put the same ASIC on Nexus 7000 line cards? The Nexus 9000 airflow is very clever, and for a greenfield installation it looks like a strong choice, but I can’t see why the Nexus 7000 couldn’t also support ACI, especially as one of the key selling points of the Nexus 7k was its future-proof expandability: by upgrading fabric cards and Supervisors, the per-slot backplane capacity could increase dramatically over time, thus protecting your investment. Ok then, so shove the Insieme ASIC on a line card and that way you really do protect my investment.
Ah, but the Nexus 9k ACI-supporting version of NXOS is less complex than regular NXOS (“NXOS in name only,” perhaps?), so integrating ACI into the Nexus 7k’s code might be tricky. It does make me wonder how much of a shared code base the Nexus 7k and 9k versions of NXOS really have, and why this improved simplicity can’t be ported back to the Nexus 7k. Or is the simplicity really a result of stripping out features in order to optimize the 9k for a very specific architectural purpose: not so much a “simplified” code base as a cut-down one?
As a potential Cisco customer with existing Nexus 7k switches, if I bought Nexus 9k hardware in order to get the ACI features and later found that those features became available on Nexus 7k line cards, I imagine I would be pretty disheartened about my investment.
Pulling some numbers from Cisco’s website, I created the table below to try and compare some similarly-sized products in terms of 10/40 Gigabit Ethernet port densities in order to better understand where the Nexus 9000 fits into Cisco’s product range.
| Product | Slots | Switching Capacity | 10GE Ports | 40GE Ports |
|---|---|---|---|---|
| Nexus 7009 | 9 | 8 Tbps | 768 | 96 |
| Nexus 7710 | 10 | 21 Tbps | 384 | 192 |
I’ve had to make a few assumptions here (I’m not including Nexus 2000, for example), and I’m a little suspicious of some of the figures (e.g. the 7710’s switching capacity, which the data sheet says is 42 Tbps but I calculate as 21 Tbps = 1.32 x 8 x 2). Nonetheless, this puts the Nexus 9508 firmly at the top of the table in terms of port densities and, if I’m right, switching capability as well.
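To make my back-of-the-envelope arithmetic explicit, here’s the calculation as a small Python sketch. The interpretation of the factors (1.32 Tbps of fabric bandwidth per slot, 8 payload slots, and a x2 for counting both directions of traffic, per the usual data-sheet convention) is my assumption:

```python
def chassis_capacity_tbps(per_slot_tbps, payload_slots, duplex_factor=2):
    """Estimate chassis switching capacity from per-slot fabric bandwidth.

    duplex_factor=2 reflects the common data-sheet convention of
    counting each direction of traffic separately.
    """
    return per_slot_tbps * payload_slots * duplex_factor

# Nexus 7710: 1.32 Tbps/slot x 8 payload slots x 2 = ~21 Tbps,
# roughly half the 42 Tbps quoted on the data sheet.
estimate = chassis_capacity_tbps(1.32, 8)
```

If the data sheet’s 42 Tbps is right, then either the per-slot figure is higher than I think or there’s another doubling convention in play somewhere.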
As some people discovered with the Nexus 7000’s varying line card types, though, there’s a little stumbling block here. The 9508 chassis currently supports three line card types: two 48-port 1/10GE cards and one 36-port 40GE card. The 40GE card is incredibly dense, but it’s important to read the small print carefully:
The last bullet confirms that this 40GE card will not run ACI mode and cannot be upgraded to ACI mode, so presumably it does not have the Insieme ASIC on it yet. That relegates the current 40GE card to Classic Mode switching only.
The 10GE cards fare better, both claiming that they “Can be used in ACI leaf configurations.” But not spine? Or is the idea that you only need the ASICs on the leaf nodes to encapsulate and decapsulate, while the spine just moves IP packets and doesn’t require the Insieme ASIC? If so, that of course relegates any WAN aggregation function to a leaf node, something Cisco hints at in their VMDC Design Guide 3.0, though that solution utilizes FabricPath (so the “spine” is in fact Layer 2 rather than Layer 3), and it’s not a design that everybody buys into; some prefer a L3 uplink from the spine to a WAN aggregation point, for example. Perhaps we’ll see a VMDC Design Guide update incorporating the Nexus 9k? Or maybe we’ll see ACI-enabled 40G cards coming later? Meanwhile, any investment in the 40GE line cards right now cuts off ACI options later, so it should be approached with caution.
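If my speculation is right that leaves do the encap/decap while the spine only forwards on the outer IP header, the division of labour can be sketched roughly like this. All names, addresses, and the header layout here are purely illustrative, not a description of the actual ACI fabric encapsulation:

```python
from dataclasses import dataclass

@dataclass
class OverlayPacket:
    outer_src: str   # encapsulating leaf's fabric-facing address
    outer_dst: str   # destination leaf's fabric-facing address
    vnid: int        # tenant/segment identifier carried in the header
    payload: bytes   # original tenant frame, untouched in transit

def leaf_encapsulate(frame, vnid, local_leaf_ip, remote_leaf_ip):
    """A leaf with the encap-capable ASIC wraps the tenant frame."""
    return OverlayPacket(local_leaf_ip, remote_leaf_ip, vnid, frame)

def spine_forward(pkt, routing_table):
    """The spine only consults the outer header -- it never parses
    the overlay payload, hence no special ASIC would be required."""
    return routing_table[pkt.outer_dst]

def leaf_decapsulate(pkt):
    """The destination leaf strips the outer header, recovering the frame."""
    return pkt.payload

# Hypothetical fabric: leaf1 -> spine -> leaf2
routes = {"10.0.0.2": "port-to-leaf2"}
pkt = leaf_encapsulate(b"tenant-frame", vnid=5000,
                       local_leaf_ip="10.0.0.1", remote_leaf_ip="10.0.0.2")
egress = spine_forward(pkt, routes)      # decided on outer IP alone
frame = leaf_decapsulate(pkt)            # original frame, intact
```

If that model holds, it would explain why only the leaf-capable cards need the Insieme ASIC, and why a plain L3 box can sit in the spine role.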
The 9508 is positioned as the spine of a leaf-spine network, but not the core of the network. Cisco’s 9508 product page illustrates the use of the 9508 in a leaf-spine network:
…as aggregation (with 9300s as access below it):
…or as collapsed aggregation/access using Nexus 2000 fabric extenders.
The presence of the Nexus 7000 in two of those three solutions confirms what I was saying above, that a network with multiple pods will have islands of ACI connected by Nexus 7000 (or some other layer 3 solution).
Many of the opinions I have expressed here were already reported by Shamus McGillicuddy in his article on 7000 vs 9000 on TechTarget. I’ve tried to dig a little deeper here than the (necessary) soundbites in that article, because I really think there’s a lot more information needed before I would be ready to advise my clients to take a leap and purchase the Nexus 9000.
I have some sympathy with Cisco here in fact; I would assume that they have had to select which messages to prioritize in their marketing and communications about ACI, and perhaps felt that the bigger uphill battle was explaining ACI and why a hardware-dependent solution is a good deal for customers. That bigger picture is, arguably, far more important to many people than exactly how things work. As an attempted geek though, I can’t help but wonder about the reality of implementing a given solution in a data center, and integrating or migrating my existing network with or to the new solution.
Scheduling went awry and we weren’t able to have a deep technical session with Insieme while we were in New York, and perhaps that has colored my initial impressions of ACI and the Nexus 9000. I definitely find myself thinking that the ‘wait and see’ approach is safest, especially if the ACI technology might show up on a 7700 at some point. Meanwhile, if you want a high-density 10/40Gbps switch, it seems like the Nexus 9508 is an obvious choice, even if you don’t want ACI (and remember, it will still support a big list of Southbound APIs, so it can still be part of a Software Defined Network even without an APIC controller).
Hopefully shortly I’ll be able to fill in some of the knowledge gaps I have, or correct any misunderstandings, and I’d love to be able to come back with an update on my position. I’ll let you know.
So how do you see the Nexus 9000 fitting into your network? Do you share my concerns, or think they’re irrelevant, or incorrect? I’d love to hear.
(Updated 2013/11/25 to correct Freudian instances of ACIP instead of APIC!)
My travel, accommodation and meals on this trip were paid for by Tech Field Day, who in turn I believe were paid by Cisco to organize an event around the launch. It should be made clear that I received no other compensation for attending this event, and there was no expectation or agreement that I will write about what I learn, nor are there any expectations in regard to any content I may publish as a result. I have free rein to write as much or as little as I want, and to express any opinions that I want; there is no ‘quid pro quo’ for these events, and that’s important to me.
Please read my general Disclosure Statement for more information.