One of the classic interview questions is “Where do you see yourself in five years?”
Discussions today with the delegates from Tech Field Day’s “Cisco Application Centric Infrastructure Launch” event led us to ask where the future lies for Network Engineers.
So what do you think you’ll be doing in five years?
Software/Hardware Defined Networking
Unless you’ve been hiding under a network-based rock or have otherwise been ‘off-net’, you can’t have failed to notice that the subject of Software Defined Networking has been coming up a lot. Many are claiming that, as network engineers, we all need to learn Python if we want to stay in a job. Cisco’s Insieme announcements, while arguably tantamount in part to Hardware Defined Networking, don’t seem to change that. The argument is that network orchestration requires programming, so that’s where the future lies.
Once upon a time, I used to write documents using a pen and paper. Then I used a word processor (Wordwise Plus was my first I think, on the BBC Model B Microcomputer some time in the 1980s). Then with time and a job I migrated towards Wordperfect and finally Microsoft Word. Where once I had to get my hands dirty when I made a correction, I now just press a (backspace) key, and the correction happens magically.
But wait – the creation of the word processor effectively totally changed my writing process, and I had to learn a new skill (typing and editing on a computer). But did I have to learn how to write assembler or a higher level language? No, of course not; programmers did that, and I carried on with my creative process via the abstracted interface that is a word processor. You can see where I’m going with this, right?
We Will Not Program Networks
At least, I can’t imagine that I will. I’m most likely to rely on tools written by other people that will abstract the process of configuring the network in such a way that while I get all the gains of a highly flexible software-controlled network, I don’t actually have to write code to send bits and bytes to the end device. I don’t have to learn how to use an OVSDB module for Python in order to make this happen; I’m going to rely on somebody doing a darned good job writing far more reliable code than I can ever create, and doing far more clever things with the data it has than I can probably ever imagine.
I Just Talked Myself Out of a Job
There’s a problem here though. One of the examples I hear given often for how SDN will make the world a better place is provisioning. Right now the process of turning up a new switch port and VLAN is onerous and slow, and we network engineers (apparently) are slowing the whole company down and getting in the way. With an SDN-enabled network, we can click a few buttons and the job is done, without errors, in minutes.
So if a job that used to take me a couple of hours now takes me a couple of minutes, what do I do for the rest of the day? More to the point, if the software interface is so good at what it does that anybody can set up that interface, then why does the company need me any more? After all, I won’t need to know the CLI commands nor whatever method is used to configure that port; my specialist knowledge is unnecessary in this instance. So the provisioning jobs can be done by anybody straight out of school, and possibly even earlier! So will deploying SDN effectively be the end of my own career? If so, what’s my motivation beyond an altruistic desire for the general betterment of the network?
The CheckPoint Firewall Problem
I like pointy clicky interfaces up to a point, though I’m generally happier noodling around in a CLI. There are some tasks that do work better in a GUI though, so I’m willing to accept that sometimes they’re the better tool for the job. Personal preference aside though, I have a general problem with GUIs and it may sound a little bit snobby, but it’s this:
GUIs let people who don’t understand the technology think that they are capable of managing it.
Your honor, may I present Exhibit A: the CheckPoint firewall. There’s nothing wrong with CheckPoint firewalls, nor am I suggesting that people who specialize(d) in configuring them were anything other than extremely bright people. However, who among us has not come across an installation where the “administrator” was not a security engineer but a “Network Administrator” (i.e. server admin) who took on, or was given, responsibility for the firewalls and, because it was a GUI, somehow muddled their way through to making traffic pass? When that happens, much of the time the resulting configuration is abominably insecure. I have, quite seriously, found a ‘permit any any’ equivalent at the top of an Internet firewall ruleset – with 200 rules below it – because all the admin knew was that things worked after he did that, and the “problems” stopped. The “problems” were requests for new ports to be allowed through, reported as “Application X has stopped working” – yes, because there was no rule for it. So that’s my problem. Had this been a PIX, the system admin would most likely have run a mile and brought in somebody who knew what they were doing. Add pointy clicky stuff to it, and suddenly everyone’s a security guru.
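For anyone who hasn’t stared at a ruleset lately, the failure mode is easy to demonstrate. Here’s a minimal sketch of first-match rule evaluation (the ruleset and format are invented for illustration, not CheckPoint’s actual syntax), showing how a broad permit at the top shadows every carefully written rule below it:

```python
import ipaddress

# Hypothetical ruleset; firewalls evaluate top-down, first match wins.
RULES = [
    ("permit", "0.0.0.0/0", "any"),   # the admin's 'permit any any'
    ("deny",   "0.0.0.0/0", 23),      # block telnet -- never reached
    ("permit", "10.0.0.0/8", 443),    # internal HTTPS -- never reached
]

def evaluate(src, port):
    """Return the action of the first rule that matches, top-down."""
    for action, net, rule_port in RULES:
        in_net = ipaddress.ip_address(src) in ipaddress.ip_network(net)
        port_ok = rule_port == "any" or rule_port == port
        if in_net and port_ok:
            return action
    return "deny"  # implicit deny at the end of the ruleset

# Telnet from anywhere is permitted, despite the explicit deny below it:
print(evaluate("203.0.113.9", 23))  # -> permit
```

Delete that first rule and the same packet hits the telnet deny – which is exactly why “it worked after I did that” is not a security policy.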
Security Engineers Aren’t Safe Either
Oh, were you smiling from your dark silos? Well don’t, because in the new world those firewalls you love may well be deployed on VMs and instantiated as needed by the network, with rule sets deployed and updated through another abstracted interface. Again, you probably won’t be programming that interface, just clicking “Approve/Deny” on the requests as they go past.
Rack and Stack
Don’t worry, we still need you guys and gals. The one constant seems to be that there’s always new hardware to install, fibers to run, and cabling to check!
I Can Troubleshoot!
Perhaps that’s the purpose we’ll have in the new world. But then, in today’s Insieme presentation Cisco showed a screenshot of their (planned?) monitoring platform which included real-time application health and flow telemetry, including performance issues (latency, drops, etc.). If the management software can diagnose the nature and location of the problem, troubleshooting becomes unnecessary and all we’re left with is the corrective action. Mind you, a smart fabric will learn to route around problems where possible and will simply tell you that you need to check a fiber or replace a line card or something.
Five Years’ Time
So what do you think you’ll be doing in five years’ time? OK, reality check: smaller networks are less likely to have gone fully SDN, I suspect, but medium to large enterprises and even service providers may well have gone partially or fully in that direction.
Assume for a moment that you work for a company that migrates entirely to some form of SDN platform with a nice clicky abstracted interface and strong orchestration processes.
What’s your job going to look like? Let me know; I seriously want to hear your opinions so I know whether to change career or not.
Update (11/7/2013): Please do check out my follow up post, where I drill down into some of the assumptions in this post and respond to some of the feedback I received.
You bring up some interesting points. I’m glad you wrote about this topic because I feel rather strongly about it. To be straight to the point, I’m not worried about my job as a network engineer (as a side note, I’m also not buying this entirely software-based network either, but that is neither here nor there).
I firmly believe that some of the changes we are witnessing today will revolutionize networking as a whole. That, however, by no means implies that us CLI guys will be out of a job. As some have cleverly coined the term, the ‘underlay’ still needs to exist for most of the technologies to ride on. We’ll still need to understand spanning tree (not everyone can afford FabricPath), OSPF, and BGP. Hardware for networking (in my opinion) will continue to be network hardware. That is, we won’t be running everything on x86 compute hardware. That being said, you still need guys who understand that a 6548 line card has port-groups of 8 ports that share a 1 meg buffer and that they can be easily over-provisioned.
My point is that there will always be network-specific tasks that you need a network expert for. Can some of these tasks be done by someone who isn’t a network engineer? Sure, but will they be? Do you know many compute guys who care what the spanning-tree cost is for a port-channel interface vs a single interface? Or who cringe when they hear that TCAM utilization is high? It’s likely the same reason that I don’t get paged when there are SAN utilization issues.
Would it be nice if one person could be an expert at all of this? You bet! Will that happen in the enterprise? I don’t see it…. I tried for a long time to stay on top of compute, virtualization, storage, and networking. It’s like trying to have 4 full time jobs.
What do you think? Are you worried about your job?
Hope all is well!
No, I’m not seriously worried about my job – and I have deliberately made some pretty huge assumptions in writing this post, in part because trying to account for every situation is just too onerous (and let’s face it, it makes for a more thought-provoking headline). The obvious exceptions are that Service Providers will have a very different set of challenges to deal with, and any enterprise that does not go with a pure SDN play will be in a slightly different situation (including those who don’t go down that path at all).
My expectation is that as more processes become automated (and hopefully more reliable), we’ll simply adjust what we do and use the spare time we might have to do other, more interesting, things. Maybe I’ll even end up coding, though I hope not, because if I wanted to be a programmer I would already be doing that.
I do believe that the underlay, at least in terms of the core fabric, is moving towards being far more self-sufficient and “plug and play”. How much more intelligent the network gets beyond that will largely be a product of how smart the controllers can be, and how well they can adapt. The controllers aren’t magic though – they have to be fed information (I have an upcoming post on this topic) – but whether that will require the same level of expertise in the same numbers will be interesting to see. Thanks!
For me this whole process is about de-duplication of effort, allowing more time for creativity. The machines cannot create, but they can take a lot of the strain out of configuration and enable novel ways of controlling the ‘fabric’. The machines will not have the intelligence, understanding of business goals, creative inspiration, etc. that an experienced engineer or designer has.
What I don’t fully yet grasp is how younger or inexperienced engineers might gain the hands on experience and understanding when the likely outcome of the various SDN implementations is that what’s going on at the low level is heavily abstracted from them via many layers of software. I’d argue that I see this already with virtualisation tech such as vSphere. People know where to click, but they don’t know why they click and, as you state in your original post, that is dangerous.
Am I worried about my job? Not really. I’d be more worried about the new entrants to the industry becoming competent engineers.
Side note: aren’t/weren’t SDH networks software defined?!
Similarly you could argue that MPLS-TE networks are software defined too.
I think you make a very interesting point about the entry level, and the potential for such huge abstraction from reality that you may end up with a generation of network engineers that don’t know what’s really happening. Now, that said, we’ve survived that in computing pretty well; I can program in perl and, largely, not need to understand how it ends up generating machine code for processing. At least, not until I need to try and optimize my code, at which point a level of understanding is required. IP itself is an abstraction to many newer engineers, I find – they know how to allocate an IP address, but may not truly understand subnetting at a binary level and when you ask about ARP or anything deeper about Ethernet frames, they’re lost, because Stuff Just Works these days and it’s not an implementation detail that they’ve ever had to be bothered about.
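To make the binary point concrete, here’s a toy sketch (mine, not from any curriculum) of what “understanding subnetting at a binary level” actually means – the network address is nothing more than a bitwise AND of the address and the mask:

```python
def to_int(ip):
    """Pack a dotted-quad IPv4 address into a 32-bit integer."""
    a, b, c, d = (int(octet) for octet in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_str(n):
    """Unpack a 32-bit integer back into dotted-quad notation."""
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def network(ip, prefix_len):
    """Network address = address AND mask, nothing more mysterious."""
    mask = (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF
    return to_str(to_int(ip) & mask)

# .77 is 01001101; a /26 mask ends 11000000; the AND leaves 01000000 = 64
print(network("192.168.13.77", 26))  # -> 192.168.13.64
```

An engineer who has internalized that one AND operation can subnet anything; one who has only ever clicked a calculator is lost the moment the tool is taken away.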
I’m about all simplifying effort by the way – it’s why I learned perl. The trick will be to figure out how the landscape will change, and how we need to position ourselves afterwards.
Some good points made in your post! However, I feel 5 years may be a little too soon for us all to be out of a job. I remember being concerned a few years ago when it seemed lots of companies were outsourcing their network infrastructure to third-party companies with a low-skilled work base; as a contractor, how could I compete with that if there was no network to work on?
But in the past 2 years I have been offered more and more contracts from companies to be part of a project to bring the network back in house. You have all heard the phrase “If you think it’s expensive to hire a professional, wait until you hire an amateur”. That’s pretty much what happened: you had low-skilled workers making changes on hundreds of customers’ networks that they didn’t really understand, and eventually something catastrophic would happen.
I think this could be the same kind of thing: you will have Tier 1 guys clicking away making changes to the network, but what happens when something goes very wrong and a more in-depth understanding of how things really work is needed?
Very good points, Ricky. Yes, 5 years is undoubtedly too soon (hyperbole much? ;-), and I’ve happily implied that this applies across the board, when in reality this impact will affect only certain types of data centers and certainly not all networks.
To address your tier 1 comment, when the automotive industry brought in robots to build cars it was not a positive experience for the people who worked on the front line building cars. Those people had a knowledge of the physical aspect of the cars that few others would have, and I’m sure that when there were problems with the robotic assembly, the company had to consult with somebody with that experience in order to understand why something wasn’t working properly. I wonder what happened to those front-line workers? And I wonder what percentage of them were retained to provide experience for when those problems arose? After all, watching a robot build a car – or lining up supplies for the robot to use – does not require somebody with the same strong engineering skills that the assembly line used to require. Tier 1 assemblers, as it were. Maybe those skilled assemblers changed focus and started building the robots instead, so no job loss, just a migration of role. Guess I’d better get my Python on, huh?
Although I am preparing myself somewhat for apocalypse, I think 5 years is very optimistic. Lots of enterprises still use frame relay in supposedly technologically advanced cultures, and some people still cling on to their sub-64k circuits. I guess the main issue for adoption from where I see it is capital investment. Money is tight no matter how big an enterprise or SP is. A bean counter would need a really, really convincing argument to sign off a cheque for something that we can seemingly do without.
Regarding GUIs, people still screw up point and click builds even when they have pretty pictures to show them what to do. There will probably (and hopefully) be a requirement for people who understand the underlying whatevers. Dumbing down of skill sets is likely to lead to problems, and major problems at that. When something goes wrong and people have either forgotten how to engineer or have been removed because they cost money, you lose money. Knowing how to click something and why you are clicking it are two massively different things.
Now where did I put my dummies guide to Python 🙂
*lol* As I said in a comment above, I do concur – 5 years is a bit keen.
Perhaps the difficulty with the pointy clicky stuff is that it may change the balance of network teams from having a team of experienced engineers with a few new folks being trained up, and shift to having a team of tier 1 pointy clicky people and a couple of experienced engineers to oversee. What do you think?
Thanks for posting!
I believe we share a common employer 🙂 Not sure what interaction you would have with the provisioning side of things in your neck of the woods, but I agree. As things get simpler to implement, the teams that do the easy stuff increase (mainly entry-level knowledge), and the few remaining ‘pros’ are kept for when stuff either needs design or when something goes really wrong, i.e. architects and technology specialists. In all likelihood, as the less specialised teams increase, the more specialised folk decrease through churn and don’t get replaced because, as was mentioned above, things just work these days.
There are no excuses. Go learn to program in Python; that’s the future of the network engineer.
# response.py
import sys
print(sys.argv[1])

% python response.py 'Sir, yes sir!'
Sir, yes sir!
I’m not concerned about higher-tier jobs, but I am concerned there will be no gateway to entry positions. Tier 1 jobs are a great training environment for people to learn how to troubleshoot and reinforce networking concepts they learned in a classroom. If those are largely killed off by automated provisioning systems, there is going to be a large decrease in the number of network engineers coming up through the ranks who can fill Tier 2 and Tier 3 positions and really understand what is going on.
Keep in mind, automation and virtualization didn’t negate the need for system administrators; they just increased the number of servers administrators could reasonably handle in a given day.
This is very true; automation can definitely drive a productivity boost. I see a common thread in the comments – 3 of them now – expressing concern about Tier 1 not getting the experience necessary to progress in their career, so there’s almost certainly something to that, and we’ll need to find a way as an industry to make sure we don’t accidentally dumb down.
I wonder if we’ll see the same level of growth in the network space as we do in the server space? If you have a service that grows from 100 VMs up to 6,000 VMs, does it require a similar factor-of-60 growth in the network to support it? Or do we just bolt another link onto an existing LACP channel and handle it?
Thanks for sharing your thoughts!
Really depends on the amount of bandwidth and sessions those additional VMs generate, doesn’t it? In the end, that’s all that really matters to a router.
Seems overly pessimistic. I see a shift coming for sure, but saying that we will have a self-diagnosing, self-healing network is way too optimistic. The reality is that the use cases we are seeing right now are almost entirely data center-based, and that makes sense. But at some point, one of these data centers needs to talk to something else, and that’s where it’ll break and where you come in.
I see that the next generation of Data Center engineer will be a generalist able to build an OS, a virtual host, and the network. He will be able to troubleshoot most of the problems that happen, and then fall flat on his face. I don’t know about you, but my ‘VM Admins’ can’t troubleshoot Linux; they can install the OS, configure basic services, and solve a few issues. That’s where my *NIX admins come in. What makes you think it will be any different with the network? Even if you automate the whole deployment, there will be special requirements and specific configuration to do, and somebody to oversee that one of the “clickety-click” admins isn’t opening ‘permit any any’ – and all of those are you.
All of that also discounts the potential explosion in resources to administer. How many more servers does your shop have to administer now that you have a way to deploy them faster (i.e. virtualization)? What makes you think that the number of ports/policies/rules/etc. we will need to administer won’t do the same thing?
VMware didn’t mean the death of the server admin. SDN won’t mean the death of the network admin. It will mean different roles, but not that much different.
Are you suggesting that the marketing departments are, shall we say, overstating the case a little? 😉
You ask good questions about the scaling of network rules etc. to support growing server demands. To me that’s exactly where SDN is of benefit, and actually where the templated policy approach starts to pay off. Once defined, it’s much easier to apply a standard template as you expand the servers in a given application role than to do all that by hand. Things aren’t necessarily more complex, but I see the same thing being applied more times, and quite possibly in a more distributed format.
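As a rough illustration of what I mean by templated policy (the template, interface names, and VLAN here are all invented, not any vendor’s actual syntax), scaling a role out becomes string substitution rather than hand-typed CLI:

```python
# A hypothetical per-role config template; once validated, every new
# server in the role gets exactly the same policy, minus typos.
WEB_ROLE = """\
interface {port}
 description web-tier {hostname}
 switchport access vlan {vlan}
 spanning-tree portfast
"""

def render(port, hostname, vlan=110):
    """Expand the role template for one server-facing port."""
    return WEB_ROLE.format(port=port, hostname=hostname, vlan=vlan)

# Growing from 1 to 60 servers is a loop, not 60 typing sessions:
configs = [render(f"Eth1/{n}", f"web{n:02d}") for n in range(1, 61)]
print(configs[0])
```

Whether the output is CLI text or API calls to a controller, the point stands: the expertise moves into designing the template once, not into repeating it sixty times.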
This may well be the case; so if I skill up on the VM side, I can take over some responsibilities from the server admin team. Or if they learn networking (stop laughing in the back), or enough networking to apply a template, then they can perform some of my role too. And that’s fine, but we need to have our eyes wide open about how this could potentially play out.
Thanks for making some great points!
Interesting article and comments thus far. A couple of observations…
There is still an ‘F’ in NFV and there is still an ‘N’ in SDN. So at the functional level, regardless of where they sit and what they do, function separation is still important, and the functions that get defined still need to interact with one another; possibly, you might still want those functions individually to come from different vendors. The danger is that, when your function is no longer coupled with a honking great lump of hardware and instead turns up on a disk, unless you end up buying everything from one place the debugging and interop process may become more difficult – has anyone tried to run a trace within a cloud yet? If an interface between two functions is virtualised, can you plant Ethereal in the middle of it?
That is why the N in SDN also remains important, particularly from the perspective of this post. We may draw a Cloud as a cloud on a PowerPoint slide, but beneath it there still has to be a lot of old-school network underpinning it. Are you going to put your entire cloud in one geographic location? As an (admittedly geeky) customer, I hope not, if for no other reason than georedundancy. So on that basis, you are still going to have physical connectivity to configure, and a physical network to manage.
So what all this boils down to is decoupling of software function from hardware implementation. It doesn’t mean hardware vanishes. It just means what we care about shifts a bit, and if we are network engineers, who we are employed by changes.
Same problem everyone else is going to have when robots finally replace all the jobs.
Remember though that someone will still need to build and maintain those robots, and while a basic tech could probably do that job 80%, we’ll still need the experts for the last 20%.
So that’s where I see myself in 5 years: the expert engineer called in when something really goes wrong. Will I have to learn new skills to keep up? Yes, but when did I not need to do that in IT? That’s par for the course at this point.
Even more of the work will likely be outsourced to MSPs than is today, especially when the networking component becomes even more set-it-and-forget-it than it already is. Again though, this is nothing new. Even back 15 years ago the talk was that our jobs were going away due to outsourcing and offshoring. Look at us now: we’re still making top dollar with lower degrees than our peers, and there are more open slots in the job market than the expected grads can fill. If Python is the skill to learn to keep up, well, programming is an important part of being a nerd and a little .py would probably do everyone in the industry some good.
Just one carefully constructed virus away from complete destruction. Good luck with this.
Same as with the autonomous vehicles, etc. Still, I’m sure everybody has learned a lesson and is putting security first these days. Number one priority. For sure. Probably.