What’s not to love about twinax? Formerly the exclusive domain of IBM systems, twinax has seen itself reborn in the last few years in the form of the Direct Attach Cable (DAC) used to connect systems at speeds of 10Gbps and 40Gbps (by way of bundling four twinax pairs in a single cable).
Direct Attach Cables
Before diving into the pros and cons of DAC, it’s important to understand the different varieties that are available. A DAC is a cable which has SFP+ format connectors hard-wired on each end; plug each end into an SFP+ socket and, vendor support notwithstanding, the link should come up. A direct attach cable is frequently and erroneously referred to as a “DAC cable”, so if the words “PIN number” give you the jitters, working anywhere with DACs is likely to drive you to drink.
Passive Copper DAC (Twinax)
The most common kind of DAC is the passive DAC. The SFP+ connector on a passive DAC, give or take some electrical protection circuitry, is pretty much a direct connection from the copper in the twinax to the copper contacts which connect to the host device:
Sending a 10G signal over a single copper pair requires some quite clever processing to take place in both the send and receive functions, but a passive DAC does not contain the components necessary to do so. Consequently, in order to use a passive DAC, the host device has to be able to do all the processing and signal amplification itself, which may increase the cost of the interface on the host. However, since the SFP+ connector is largely devoid of complexity, passive DAC cables are – as we say in the UK – cheap as chips, costing far less than a regular short reach optical SFP+.
Passive DACs are only for use over short distances, typically recommended for use only up to five meters, which means they are used most often for server to top of rack (ToR) connectivity. Longer connection lengths are available, but require use of an Active DAC.
Active Copper DAC (Twinax)
Active copper DACs are available in slightly longer lengths, up to around a 10m maximum. The SFP+ connectors on an active DAC contain the electronics to perform the necessary signal processing and amplification, so the switch itself does not need to have that capability and thus might be slightly cheaper to purchase, although it's a trade-off because each active DAC will be more expensive.
The improved embedded signal driving capabilities allow for the longer cable lengths, and the twinax copper cable itself means that active copper DACs are still relatively cheap.
Active Optical Cable (Fiber)
The new young thing in connectivity options is the active optical cable (AOC). In the same spirit as the copper DACs, these cables come with an SFP+ connector hard-wired on each end, but this time instead of twinax, the AOC uses fiber. Because fiber’s transmission characteristics are so much better than copper’s, an AOC using OM3 fiber can be as long as 300m.
If you’re wondering what the difference is between purchasing an AOC and purchasing two SFP+ transceivers and a 300m fiber, well, you and I are in the same boat. I used Fiberstore to see roughly what it would cost to connect two Cisco devices, and I found the following:
| Option | Product | Price | Link |
|---|---|---|---|
| Active Optical Cable | AOC 10m | $50 | https://www.fs.com/products/30895.html |
| Regular Optical Connection | 2 × 10G SR SFP+ | $32 ($16 each) | https://www.fs.com/products/11552.html |
| Regular Optical Connection | 10m OM3 Multimode Duplex Fiber | $5.70 | https://www.fs.com/products/41736.html |
In this example, the AOC costs $50 and a non-AOC connection costs $37.70; the case for AOC is not immediately clear on a financial basis at least. It’s also worth noting that while in theory AOC supports up to a 300m connection (the same as a normal SFP+ transceiver with OM3 fiber), since the connectors are effectively part of the cable itself, it can only be used for a direct run from device to device; it’s not possible to connect using fiber patch bays.
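To make the arithmetic explicit, here’s a short Python sketch of the comparison using the fs.com example prices from the table above (the prices are illustrative and will vary by vendor and discount):

```python
# Per-link cost comparison: AOC vs. SFP+ transceivers plus a fiber patch.
# Prices are the fs.com examples quoted in this post; adjust for your vendor.
aoc_10m = 50.00   # one 10m active optical cable
sfp_sr = 16.00    # one 10G SR SFP+ transceiver
om3_10m = 5.70    # one 10m OM3 multimode duplex patch cable

regular = 2 * sfp_sr + om3_10m  # two transceivers plus one fiber

print(f"AOC:         ${aoc_10m:.2f}")
print(f"Regular:     ${regular:.2f}")            # $37.70
print(f"AOC premium: ${aoc_10m - regular:.2f}")  # $12.30
```

So at these prices you pay roughly a third more per link for the convenience of an all-in-one AOC.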
For the purposes of this post, I’m going to consider AOC as being broadly equivalent to buying SFPs and using OM3 fiber.
The Benefits of Copper DAC
Without a doubt, using copper DAC means saving money. The Cisco switches and UCS servers I use all have support for passive copper DAC, and based on a Fiberstore price of $24 for a 5m DAC, that puts the cost of a single connection at under half the cost of the 10m AOC and about two thirds the price of the regular fiber and SFP+. Depending on how much your favorite vendor marks up their SFP+ transceivers, the saving may be greater still.
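Putting those numbers side by side, a rough per-link comparison could be sketched like this (the prices are the illustrative fs.com figures above, the length limits are the typical ones discussed in this post, and the `cheapest_option` helper is a hypothetical name, not a real API):

```python
# Rough per-link costs and reach limits from the examples in this post.
# Both the prices and this helper are illustrative assumptions.
OPTIONS = {
    "passive DAC (5m)":  {"cost": 24.00, "max_len_m": 5},
    "AOC":               {"cost": 50.00, "max_len_m": 300},
    "2x SFP+ SR + OM3":  {"cost": 37.70, "max_len_m": 300},
}

def cheapest_option(run_length_m):
    """Return the cheapest option that can span the given run length."""
    viable = {name: o for name, o in OPTIONS.items()
              if run_length_m <= o["max_len_m"]}
    return min(viable, key=lambda name: viable[name]["cost"])

print(cheapest_option(3))    # passive DAC wins at ToR distances
print(cheapest_option(50))   # beyond DAC reach, SFP+ plus fiber beats AOC
```

In other words, on price alone the passive DAC wins wherever its reach allows, and beyond that the regular transceiver-plus-fiber combination undercuts the AOC.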
Using a DAC means no transceivers to fumble and drop, and no need to fiddle around making sure that the LC connectors are pushed all the way home.
An all-in-one cable means no dirt in the connectors, guaranteed cable/SFP+ compatibility (because both ends are manufactured by the same company), no physical connections to work loose, and no ‘rolled fibers’ causing a failed initial connection.
Where It All Goes Wrong
Size Does Matter
Size is where, in my opinion, copper DAC goes wildly wrong, and passive copper DACs are the worst offender. The problem is that despite twinax being screened, it’s still very susceptible to electromagnetic interference (EMI) and signal degradation. As a consequence, it’s normal to find that the longer the DAC is, the thicker the cable is. A one meter DAC isn’t so bad, but by five meters, the cable is getting pretty chunky and increasingly inflexible. If an installation contains only a few of these cables, that’s fine, but in a high density compute environment, passive DACs are a nightmare.
Even within a single rack, it can be necessary to use cables of 1m, 2m and even 3m if using cable management, and if the ToR switch is in a neighboring rack, longer DACs get introduced and cable management rapidly becomes an issue. Active DACs are marginally better because they get away with using thinner cables for longer distances, but then the price is higher and the cable is still bulkier than a pair of fibers.
Then there’s troubleshooting. The data center operations guys I speak to are unanimous in despising these dense DAC installations because trying to get to the rear of a device through a heavy, dense curtain of passive DACs is extremely unpleasant. One data center I know has pretty much banned DAC going forward because of how difficult it is to trace cables and access devices when the rear of the rack is, despite best efforts at cable management with those big, chunky DACs, almost inaccessible.
I’ve not taken any measurements, but I have to imagine that having that density of cables behind a rack of servers will impede cooling at some level. Anybody who has built a PC knows that routing cables carefully can significantly improve the cooling capability of a computer (and boy, didn’t we just love it when SATA replaced PATA and we got to use nice thin cables instead of those great, flappy ribbons).
Copper DAC can make sense if used in limited quantities, but for denser installations I vote for fiber every time. Using AOC is certainly a valid fiber option, but given the price comparison I performed above, I’m struggling to understand the benefits versus using regular SFP+ transceivers and duplex fibers. Feel free to clue me in if I’m missing something there. It’s a real shame, because the price of DAC, if nothing else, makes it incredibly attractive as a budget connectivity solution.
Bonus Update (January 2018)
After this post went live, Twitter user @neojima responded that “DAC usually = Direct Attach Copper, so ‘DAC cable’ isn’t really irritatingly redundant. 🙂” Fair point, well made. Perhaps my criticism of “DAC cable” is a bit unfair; it seems that both Direct Attach Copper and Direct Attach Cable are in use out there, and I stand corrected. That said, since AOC ends in “cable”, it might be easier to refer to them as DAC and AOC as if the C stands for cable in both, just for consistency. Anyway, thanks to @neojima for the correction!