Trials and Tribulations of an Infrastructure Rebuild — Part 2


Everything is in and it is time to push the giant power button.  I had someone walk around and turn off every PC in the office earlier that day so we would not have anyone connecting until we confirmed the core was working properly.  I also held off on connecting the external links for the same reason.  After booting up a few servers I realized packets weren't going where they should.  Hmm…

Upon further inspection I realized we had an address conflict for the default gateway on the network.  Two devices thought they were the default gateway, and that was going to be a problem.  I fixed that, but we were still having issues.  I talked to the engineer and explained what was going on, and he confirmed everything was configured as I documented.  Some more troubleshooting revealed that one of the ESX trunk ports was not configured properly.  Digging in a little further, I found more ports configured incorrectly.  WTF!
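For context, an ESX-facing port on a Cisco core switch needs to be an 802.1Q trunk carrying the right VLANs; if it comes up as an access port, or trunks the wrong VLAN list, traffic silently goes nowhere.  A minimal sketch of what a correct port might look like (the interface name and VLAN IDs here are hypothetical, not the actual values from this project):

```
! Uplink to an ESX host — must trunk every VLAN the host's port groups use
! (interface and VLAN numbers are illustrative only)
interface GigabitEthernet1/0/10
 description ESX-host-01 uplink
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 10,20,30
 switchport mode trunk
```

A quick `show interfaces trunk` against the documented VLAN list is usually enough to catch this class of mistake before go-live.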

Turns out the engineer had used the old config on the core switches!  That meant there were going to be lots of incorrect ports.  We fixed all the server ports and got services up and running.  I started testing VLAN routing and some odd things were happening.  Turns out there was no security in place to stop traffic from one VLAN to another.  Everything was wide open!  I brought this up to Mr. CCIE and he told me he wanted to leave everything open until we were sure it was all working, and then he would lock it down.  BULLSHIT!  We have networks that should not be freely talking to one another, and I outlined all of this in the documentation.  More importantly, you should lock everything down and allow traffic as needed, not the other way around.  He didn't understand why I didn't want to do this.  Argh!!!
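The "allow as needed" approach boils down to an extended ACL on each SVI that permits only the documented flows and lets the implicit deny drop the rest.  A hedged sketch of the idea (ACL name, subnets, and hosts are all made up for illustration):

```
! Hypothetical example: users VLAN may reach only the web front end
! and DNS on the server VLAN; everything else hits the implicit deny
ip access-list extended USERS-TO-SERVERS
 permit tcp 10.0.20.0 0.0.0.255 host 10.0.10.80 eq 443
 permit udp 10.0.20.0 0.0.0.255 host 10.0.10.53 eq 53
!
interface Vlan20
 description Users SVI
 ip access-group USERS-TO-SERVERS in
```

Starting from this posture and adding `permit` lines as legitimate needs surface is far safer than starting wide open and hoping to remember to lock it down later.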

Services back online, mail flowing, BlackBerrys vibrating, backups running… and it was 5:00am Easter morning.  There were still some services that needed to be brought online and tested, but mail and backups were the only critical ones, so we called it quits and headed home.

Got a little sleep before the start of family stuff, then it was back to the office at 6pm to go through every line of every config and turn on the remaining services.  My blood pressure rose with each red comment I had to make… and it is amazing I'm still alive, because there were a lot of comments.  I fixed all of the major mistakes and got all the services needed for Monday back online.  Called it quits around 2am.

Monday was great: a few very minor issues and no one complaining.  The project was a success!  I did, however, take a few things away from it.

1.  If I ever let someone else configure and stage the equipment, I will visit the site to review it in person rather than taking someone's word for it.

2.  I will make sure the configs are sent to me ahead of time for review.

3.  I will review configs again before going live with equipment.

4.  Trust no one.  😉

