Trials and Tribulations of an Infrastructure Rebuild — Part 1
As many of you know, I was recently in charge of a large overhaul of our network infrastructure. This was a lingering project we had planned to do in December, but hardware delays forced us to postpone until now. We got the go-ahead from management for a 24-hour shutdown, everything else aligned, and Easter weekend became the date. Not ideal for most people, but we technology folks are used to working around holidays and weekends because they are often the most convenient times to take things down. To complicate matters further, we weren't just replacing hardware, we were re-wiring everything... so we stripped the server room and closets completely and started from scratch.
We decided to hire an outside company to help with configuration and implementation of the devices. The initial meetings went really well, and the engineer knew his stuff (plus, he was a CCIE). I am pretty good with Cisco's IOS, but there are times when I am happy to step back and hand off to the people who do this every day, and this was one of them. They were also going to stage all of the hardware in their lab, burn it in, and make sure everything was good to go.
I documented in detail how each device needed to be configured: IP addresses, VLANs, routing, security, protocols, etc., down to the individual port number. I am patting myself on the back a little here, but the documentation was really good, to the point where you could configure the devices even if you knew nothing about our environment. Getting the config right is critical, because once you start plugging things in, every device needs to land on a properly configured port or havoc will ensue.
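To give a sense of what that per-port documentation translates into, here is a minimal Cisco IOS sketch. All of the names, VLAN IDs, port numbers, and addresses below are made up for illustration; they are not our actual config:

```
! Hypothetical access-layer snippet -- VLAN IDs and addresses are illustrative only
vlan 10
 name SERVERS
!
interface GigabitEthernet1/0/1
 description ESX-HOST-1 NIC1        ! matches the per-port documentation
 switchport mode access
 switchport access vlan 10
 spanning-tree portfast
!
interface Vlan10
 description Server subnet gateway
 ip address 10.0.10.1 255.255.255.0
```

The point is that if every port in the document is spelled out at this level, anyone racking the gear can match cable to port without understanding the environment.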
A few weeks out, I checked in to make sure all of the hardware had arrived at their facility. Upon inspection, they were missing a 10G fiber cable to link some of the switches. It turned out the cable was never ordered, so I placed the order through their sales rep. A week out, I asked about the cable... the order had never been placed. I ordered again. A few days before implementation day, I made some changes to the configs and confirmed them with the engineer, who said "no problem." I then asked about the missing fiber cable, and no one seemed to have a tracking number. Needless to say, I got a little cranky. The day before implementation, I received the hardware (missing cable included), all supposedly tested and good to go. I cracked open the boxes and, guess what, we needed 6-20 receptacles for some of the power cables. I had never seen non-locking 6-20 plugs before, so I was a little shocked to find them. The phone calls started: find an adapter, a PDU, or new cables. My last resort would have been cutting the ends off and wiring up new plugs, but I wanted to avoid that. Luckily we found some L6-20 cables, so we were good to go.
I worked late Friday night, until around 1:30am, prepping the server room and reviewing all of the documentation for the following day. I caught a few hours of sleep and was back in the office around 6:30am, ready to go. I shut everything down, and then we started ripping everything out. Switches, routers, servers, cables... you name it, we ripped it out. That was the easy part; it's always easy to break things. Putting it all back together is the tough part.
Everything went back in fine. It took longer than we expected, but that was mostly because I was being super anal about making sure things were installed properly. By properly I mean in the right spots, with redundant power spread across multiple UPSes, and with the cables labeled and run neatly and in an organized fashion. I wasn't too worried about this taking a long time, because the configs should have been solid; as long as we plugged things in as documented, we should be able to turn on the power and have everything work.
Not so much. This post is getting long, so stay tuned for part two tomorrow.
Filed under: Cisco
Tags: configuration, documentation, Infrastructure, network, planning, router, switches