Oxidized and GitHub: cloudify your lab config management!

I spend a fair amount of time building labs and then breaking them.

Unfortunately my constant tinkering can bite me in the ass when I’m trying to remember how I got something working a while ago.

After numerous lab rebuilds (of hypervisors, virtual networking, storage, etc.) I got really irritated with how much time I was spending on keeping up and fixing the infrastructure when the whole goal was actually to be labbing something up!

So I took some baby steps.

  1. I organized my scripts
  2. I put my scripts on Dropbox (yay for consistency!)
  3. I also started pushing my scripts to GitHub (yay versioning!)
  4. I started to try automating as much of my homelab build as possible
    1. I created PowerShell scripts for building/rebuilding my vSwitch environment
    2. I created/updated a script that I found for finding and re-importing old .vmx files into vCenter
    3. I created several templates for deploying VMs
    4. (Still need to do) start using scripts for deployment of VMs
  5. I set up Oxidized to grab all of my network configs and store them and diff them

 

So we’re making progress here, right?  Starting to feel like a 21st Century Network Engineer.  Well, since we’re always out to improve ourselves, let’s keep at it.

Having all of my configs in a central spot is great, but I have to be at home (or VPN’d in) and it makes it difficult to easily share them.

Oxidized was already using Git, so my thought was, “gee, it would be nice if I could just shove these things into GitHub somehow…”.

So I wandered over to https://github.com/ytti/oxidized and started perusing the documentation again.

I found a great mention of Hooks, otherwise known as event-driven integrations: actions that Oxidized can kick off when certain criteria are met.

**Disclaimer** At some point in the future I may go over a more detailed install of Oxidized, but here I am just going to cover the GitHub integration.  This walk-through assumes that you already have Oxidized running and working in a local-only fashion.

The Config File:

First, we need to do a few things to make this integration a bit more manageable.  By default, the git output section of the Oxidized config will give you a separate repo for each group.  The result is that when you start pushing to GitHub, you’ll end up with a flat file structure and no organization.  This may work for you, but for my organizational tastes I’d rather have each of my groups simply represented as a folder with the relevant configs inside of it.  This is accomplished by using the ‘single_repo: true’ flag in the output section as captured below.
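Something like this, with the rest of your output section left alone (the repo path and git identity below are placeholders- adjust them to your install):

output:
  git:
    user: oxidized
    email: oxidized@example.com
    repo: "/home/oxidized/.config/oxidized/configs.git"
    single_repo: true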

Now we need to update the config file to push to GitHub.  This is accomplished by creating a ‘hooks’ section in the config file, as follows.
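Something along these lines (the repo URL is a placeholder- point it at your own repo):

hooks:
  push_to_remote:
    type: githubrepo
    events: [post_store]
    remote_repo: git@github.com:yourusername/lab-configs.git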

Now what will happen is that whenever a post_store event occurs, Oxidized will push the commit up to the remote repo.

But how do we auth to GitHub?

Okay, no sneaking that one past you.  Allegedly you are supposed to be able to use HTTPS auth with a username/password (defined as params in the hooks block), however I was never able to get that working.  When you don’t specify them, Oxidized will call git as the currently running user and try to push over SSH.  In order to get this to work, you need SSH keys.  There are about a billionty examples on the internet for how to set up SSH key auth, but here’s a helpful link that shows how to create the keys as well as how to add them to the ssh-agent:

https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/

Once you’ve created the keys and added them to your ssh-agent, you just need to update GitHub with your new keys- again, a lovely writeup by GitHub:

https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account/

 

You created a repo on GitHub, right?

Create a repo on GitHub with the same name as the one you specified in the ‘hooks’ section.

 

Cloudy Configs for all!

Once that is all set up you can kick off your Oxidized service, and your configs should start popping up in there.  If they aren’t, I suggest running the Oxidized agent in debug mode to find out what is going on.

Be smart, kids-

It’s worth noting that by default GitHub only allows public repos (unless you have a paid account).  If you’re going to post configs to GitHub on a public repo, make sure there isn’t any sensitive data on there (passwords/keys/etc), or just spend the $7/month and set up a private repo.

Congratulations!

Now you have configs that are backed up and easy to share- not to mention that whenever you’re updating your lab your GitHub account is registering commits, so you kind of look like a rockstar.

 

 

(Screenshots: the lab repo on GitHub, the lab4 folder inside it, and the commit history.)

PowerShell, HTTPS/SSL, and Self-Signed Certificates

I ran into this gem the other day and figured it was worth bitching about, er, documenting here.

TLDR at the bottom.

For reasons that aren’t relevant to this blog, I spend a fair amount of time writing PowerShell scripts to automate interactions with some networking equipment.

As you saw in one of my previous posts, I covered how to use PowerShell and Posh-SSH to log into a wad of boxes at the same time here.

Now that might work in a pinch, but let’s face it- that is a hack only suitable for instances where a vendor hasn’t implemented a robust API for interacting with their devices.

Now, suppose you have a piece of network gear that does have a robust (REST) API, and you want to use that for automation like a good engineer.

Obviously you’re going to use it, right?  Right?  RIGHT?!

Okay, so you’re using the API, and you’ve written some scripts that interact with it, and you’ve automated the shit out of everything and you’re feeling good, but then an auditor/PM/PO takes a look and the following conversation occurs:

(T=them, Y=you)

T: “How do you know who is hitting the API and sending commands to these devices with your script?”
Y: “Oh we ask the user for authentication as part of the script launch, then we use that data to get an authorization token from the REST API, all interactions that use a given token are tied to the credentials used to get the token.”
**(side note: you’re not hard-coding creds are you?  If you are you’re bad and you should feel bad)**
T: “That’s good, but what protocol is the API using?”
Y: “Oh, well we are just using the Invoke-RestMethod cmdlets from PowerShell, so I guess it defaults to HTTP now that I think about it.”
**(cue alarms and screaming and general chaos in your head)**
T: “Well HTTP is insecure, and if you’re using it for authentication somebody could steal your credentials!”
Y: “You’re right, but I noticed the API supports HTTPS, so I’ll just update the script to use HTTPS instead, should be a two minute fix.”
T: “Perfect, but before you do that we’ll want to make sure that we get it assigned to a sprint at our next synergy session tomorrow.”
Y: “Uh, it will only take a minute.”
T: “Great, so the number of effort points will be low.”
Y: “But I…”
T: “Can’t wait to cover this tomorrow!”
Y: “::sigh:: Sure thing.”

Of course you start looking at it right away!
All we need to do is change Invoke-RestMethod -URI http://myserver.myorg.foo/apistuff(...) to Invoke-RestMethod -URI https://myserver.myorg.foo/apistuff(...)

Then we do a quick test run and…**explosion**

What the hell do you mean “The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.” ?!

Then you think.

Oh, PowerShell must want me to use valid certificates, and my appliance just uses a self-signed certificate.

Coolio, we’ll just flip the -insecure flag and be done with it.

What the hell do you mean there is no -insecure flag for Invoke-RestMethod?!   Curl has one!

Okay, well now I have two options:

  1. Install valid signed certificates on all of my appliances running HTTPS APIs
  2. Find a way to get PowerShell to ignore certificate errors

Well, number 1 is certainly a possibility, but many vendors don’t support changing the self-signed admin cert on their HTTPS pages, and also, you have hundreds of devices that would need this (you could automate it with the API, but oh. damnit.  Serious chicken/egg situation here.)

Okay, so now we’re at, “How do I disable the certificate validation?”.  Well you probably did some googling and found this little one-liner (or similar):
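# Blindly trust every certificate (the infamous one-liner; lab use only):
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}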

You run your script again, and now you don’t have TLS errors anymore and your authentication is happening over HTTPS!  YES!

Unfortunately the fun stops there; any subsequent calls you make over HTTPS fail with the cryptic message: “The underlying connection was closed: An unexpected error occurred on a send.”

Great, what the hell is this?

Well it turns out the hack above only works for the first call.

So now we do more Googling and find there are some better options, like adding this to the top of your script:
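The usual suspect looks something like this (the standard trust-all certificate policy snippet; again, lab use only):

Add-Type @"
using System.Net;
using System.Security.Cryptography.X509Certificates;
public class TrustAllCertsPolicy : ICertificatePolicy {
    public bool CheckValidationResult(
        ServicePoint srvPoint, X509Certificate certificate,
        WebRequest request, int certificateProblem) {
        return true;
    }
}
"@
[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy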

So you run it and get the same failure.  What the hell?  They said it works though!

Well as it turns out, this little diddy:
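[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}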

has some lasting effects for as long as your PowerShell ISE is open.  So remove that line, add the function above, save your file, restart your PowerShell ISE/PowerShell terminal, then re-run the script.

POW!  Everything works now!  Time for Scotch!

TLDR;

This has semi-permanent effects if your initial flailing/googling had you try this:
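[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}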

Put the code below at the top of your script, save the file, and re-launch your ISE/PowerShell session.
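# Trust-all certificate policy (lab use only); unlike the one-liner above,
# this behaves consistently across calls:
Add-Type @"
using System.Net;
using System.Security.Cryptography.X509Certificates;
public class TrustAllCertsPolicy : ICertificatePolicy {
    public bool CheckValidationResult(
        ServicePoint srvPoint, X509Certificate certificate,
        WebRequest request, int certificateProblem) {
        return true;
    }
}
"@
[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy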

Now you’re good to go!

Dude! Where’s my mod?

Howdy reader.

Okay, it’s been a while since I posted anything here.  A really long while.  Like “why the hell are you paying for this hosting” long while.

I apologize.  I could make some excuses, but really I’m just lazy.*

*(excuses may include but are not limited to: bought a new house, got quite busy at work, less lab time, had to install some french drains, build a chicken coop, mow the yard, whack the weeds, help customers, build a new lab server, walk the dog, buy a chainsaw, cut down some trees, re-build the lab server, chop more weeds, assemble furniture, blah blah blah.)

I promise I’ll try to post stuff a bit more regularly.

Syncing your SSH/RDP sessions with Dropbox and MobaXterm

I do most of my work from home.

Being a network guy, I’m pretty good at memorizing IP addresses; however, I’ve grown tired of this.  So now I use MobaXterm for managing my SSH/RDP/other sessions.

That is great and all, but I run into situations where I need to show a customer or colleague something from my lab environment when working on my laptop remotely.

I could always VPN home, then remote desktop to my PC, but I’ve had issues with that from laggy wifi, etc.

So now I have a simple hack that I threw together so that all of my MobaXterm sessions will sync between my desktop and laptop.

So, you need the following two things to make this happen:

  1. MobaXterm must be installed on both machines
  2. Dropbox/OneDrive/Owncloud/sync service of your choice must be installed on both machines (I use Dropbox)

Once you have completed both of the above, all you need to do is copy your MobaXterm.ini file to a sync folder and create a symlink!

On your first PC launch an administrative command prompt and do the following:
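Something like this- note that the paths below are assumptions (adjust them to wherever your MobaXterm.ini actually lives and wherever your sync folder is):

mkdir "%USERPROFILE%\Dropbox\MobaXterm"
move "%USERPROFILE%\Documents\MobaXterm\MobaXterm.ini" "%USERPROFILE%\Dropbox\MobaXterm\MobaXterm.ini"
mklink "%USERPROFILE%\Documents\MobaXterm\MobaXterm.ini" "%USERPROFILE%\Dropbox\MobaXterm\MobaXterm.ini"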

Now all you need to do on your second (and any subsequent) PC/laptop is launch an administrative command prompt and do the following:
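Same path assumptions as above- here we just replace the local INI with a symlink to the synced copy:

del "%USERPROFILE%\Documents\MobaXterm\MobaXterm.ini"
mklink "%USERPROFILE%\Documents\MobaXterm\MobaXterm.ini" "%USERPROFILE%\Dropbox\MobaXterm\MobaXterm.ini"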

Now, whenever you create new sessions the INI file will be updated and synced to all your other machines!

It is worth noting that in order to see new sessions that have been synced you’ll need to close and re-open MobaXterm on any remote PCs (so that the updated INI file is read).

Now when I am remote I can just fire up my VPN and use MobaXterm just like I would at home, and this is extra nice because with MobaXterm you can configure sessions for just about anything (SSH, Telnet, RDP, Mosh, VNC, Serial, etc).

Yo Dawg, I heard you want to run dual stack.

“Hey Grumps, it seems like most of your posts aren’t actually so much about networking as they are about tools, what’s the deal?”

First- Shut up, I don’t even like you.

Second- I think I have a tendency to write about solving problems, and creative tool use/fabrication interests me more than ‘switchport trunk vlan add 666’ (<– although there are plenty of fun stories behind remembering to use this command correctly, amirite?)

Third- Fine, you wore me down, this one’s about networking.

Just the other day, in my lab, I was working on some cool IPv6 translation tools (DNS64/NAT64), and wanted to really see them start to work.  My home network has historically been all IPv4 with some IPv6 in the lab, so this obviously presented some interesting obstacles.  So to solve the short term need I spun up a dual stack apache/bind server and handled my tests that way.  However that gets pretty boring fast, and I wanted to test out DNS64 against something more real-life, like the internet.

My home firewall is an SRX100 which receives a DHCP address from Comcast (Relevant).  Currently it only does this with IPv4, so I figured, what the hell?  Let’s turn on the dhcp client for IPv6 on this bad boy- how hard can it be?

Hubris, Grumps, hubris…

The summarized version of the story is:

  1. Get the software
    1. JUNOS 11.whatever doesn’t support the IPv6 dhcp client
    2. Hey, JUNOS started providing support for that in 12.something- let’s upgrade!
    3. Commence upgrade
    4. Fail upgrade and make the box unbootable
    5. Shit.
    6. Find out that I ran out of memory on the device (this can be a problem on the smaller boxes, but seriously, WTF there’s no free space check?)
    7. Run a manual install from a USB
    8. Find out that the config managed to be saved (not a big deal, I have backups, but nice nonetheless)
    9. Box now back and running
  2. Configure the software
    1. Well this should be easy, we’ll just turn the client on
    2. Realize, ‘hey, IPv6 is a protocol for adults, it’s not just turn it on’
    3. Get the various configurations in place (a sketch of the relevant bits follows this list)

       
    4. Try to commit
    5. Read commit errors about ‘dhcpv6-client configured not compatible’ (WTF?)
    6. Look at base minimum configuration from Juniper (here)
    7. Apply base minimum configuration
    8. Try to commit
    9. More goddamn commit errors
    10. Do more digging
    11. Find this gem (here) about the dhcpv6-client and dhcp server (which I’m running for the home network) being incompatible
    12. Migrate to new dhcp server config
    13. Delete old dhcp server config
    14. Commit
    15. Shit works (details summarized/anonymized)

       
  3. Okay, now I’ve got dual stack running on the firewall; now we just need to extend it to the lab
    1. Add an IPv6 address to the SRX interface facing my lab router
    2. Add an IPv6 address to my lab router that faces my SRX
    3. And add one on the other interface of the lab router that faces all of my vyos vRouters
    4. Set up BGP on the SRX to peer with the lab router over IPv6
    5. Set up BGP on the lab router to peer with the SRX over IPv6 (and while we’re at it, we’ll configure the lab router for peering with the vyos vRouter that sits in front of this particular lab)
    6. Add an IPv6 address to the vyos vRouter
    7. Then configure the vyos vRouter to do BGP upstream over IPv6
    8. Commit and save your stuff
    9. If everything is working you should see something like this:
    10. To verify that is actually what we want, let’s verify the AS path of the route
    11. Hot damn.
  4. Oh, you actually want IPv6 Internet?
    1. Since IA-PD doesn’t work correctly, and I’d prefer to keep my IPv6 labs static (since they’re using private addressing), we’re going to do something a little gross here: 6-to-6 NAT.  To accomplish this, we’ll simply create another rule for source NAT on the SRX (it’s not pretty, but it works and does what I need for now- see the sketch after this list):
    2. Now enjoy all that is IPv6 Internet
    3. Just kidding.  After all this you’ll still notice problems.  You need to enable IPv6 on all of your intermediary devices, as it may not be enabled by default
    4. On our SRX we need to enable IPv6 flow processing (see the sketch after this list):
    5. And on our Cisco device we need to enable IPv6 routing (again, see below):
    6. Now it works.  For realzies.
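For reference, here is a rough sketch of the config bits referenced in the list above.  Treat it as a starting point, not gospel- the interface names, addresses, zones, and rule names are placeholders, and exact syntax varies by release.

The DHCPv6 client on the SRX (item 2.3- and yes, ‘statefull’ is how Junos spells it):

set interfaces fe-0/0/0 unit 0 family inet6 dhcpv6-client client-type statefull
set interfaces fe-0/0/0 unit 0 family inet6 dhcpv6-client client-ia-type ia-na
set interfaces fe-0/0/0 unit 0 family inet6 dhcpv6-client client-identifier duid-type duid-ll

The 6-to-6 source NAT on the SRX (item 4.1)- same structure as an IPv4 source NAT rule, just with v6 prefixes (depending on your release you may need a pool instead of interface NAT):

set security nat source rule-set lab-to-inet from zone lab
set security nat source rule-set lab-to-inet to zone untrust
set security nat source rule-set lab-to-inet rule nat66 match source-address fd00:6::/64
set security nat source rule-set lab-to-inet rule nat66 then source-nat interface

Enabling IPv6 flow processing on the SRX (item 4.4- this one requires a reboot):

set security forwarding-options family inet6 mode flow-based

And enabling IPv6 routing on the Cisco (item 4.5):

ipv6 unicast-routing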

It’s a good thing that I work at home most of the time, as the (not-so-)little snafu with the SRX took down my home internet for most of the day.

I can also configure the SRX for prefix delegation; however, the problem is that the SRX doesn’t have the ability (as of 12.X46-D40.2) to configure the length of the prefix you would like delegation for, so it grabs the /64 and starts handing out /80s, which is pretty ghetto (and I know that Comcast will hand out a /48 for delegation if your client requests it).


Using PowerShell and Posh-SSH to GSD.

I’m a guy who thinks you should use the right tool for the job.

For instance, if you’re in a Windows environment, and you need to script something, installing Cygwin just so you can use BASH is probably not the right way to go.

That being said, I love BASH, but I find myself in Windows environments more and more, so I figured it was time to dust off my PowerShell skills- Especially when I found out about Posh-SSH.

Posh-SSH fills a gap that Windows/PowerShell so desperately needs filled: a proper SSH library.

You can use Posh-SSH to create SSH sessions, send commands, and tear down SSH sessions.

This script can log into several devices simultaneously, then run several different commands (of your choosing) on each of the devices, then either display the output on the console, or write to a log file for each device.

It is still a work in progress, and I will probably add a few things to it like:

  • Choosing the directory where the log files go
  • Doing various console cleanup based on the device you are logging into
  • Have all of the input via console (instead of pop-ups)

A few limitations:

  • The devices that this is run against must have the same user/pass
  • The devices that this is run against need to support the same commands (or you won’t get any valid output)
  • Commands that expect a break to stop (like ping in Linux) will keep running, as we are unable to pass a break at this point in time

A sketch of the script’s core is below- enjoy!
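A minimal sketch of the approach (not the full script)- the host list, command list, and log paths are placeholders:

# Prompt once for the shared credentials, then run every command on every
# device, appending the output to a per-device log file.
$creds = Get-Credential
$devices = Get-Content .\devices.txt     # one IP/hostname per line
$commands = Get-Content .\commands.txt   # one command per line

foreach ($device in $devices) {
    $session = New-SSHSession -ComputerName $device -Credential $creds -AcceptKey
    foreach ($command in $commands) {
        $result = Invoke-SSHCommand -SSHSession $session -Command $command
        $result.Output | Out-File -Append -FilePath ".\$device.log"
    }
    Remove-SSHSession -SSHSession $session | Out-Null
}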

 

IPSEC, SCEP, NDES and other stuff you probably wish you hadn’t heard of

 

I’m in an environment now where I have to proof-of-concept complicated/large ideas to prove their feasibility.

The latest project:

  • High performance IPSEC for site to site encryption
  • BGP over the IPSEC tunnels
  • Use of certificates for authentication
  • Automatic enrollment/renewal of said certificates
  • Verification of certificates via OCSP

 

Well then, let’s get started.

IPSEC and BGP aren’t anything new and are pretty straightforward.  However, most places I’ve ever been have used Pre-Shared Keys for their IPSEC, as using certificates means that you have some sort of CA infrastructure, which can require a lot of overhead.  Getting static certificates to work is one thing, but getting them automatically created and renewed is another thing entirely.  Then there’s OCSP- again, typically not used because of the infrastructure that is required to support it.

First, the topology:

(diagram: IPSEC-Topology)

 

We have the following:

  • a vRouter (vyOS, which deserves its own post for how awesome it is)
  • a Windows 2012 server running AD and a complement of Certificate services (which we will cover)
  • two virtual appliances running IPSEC, which are running eBGP with the vRouter (and each other)

Phase 0 – What is in (and out) of scope for this post

This post isn’t intended as an all-inclusive guide to setting up the underlying environment, but rather to configuring the pieces that exist on top of said environment.

A quick summary of the underlying environment:

  • 2 Dell R810 ESXi 6 servers managed by VCSA 6
  • Virtual router provided by vyOS
  • The ‘Internet’ as depicted in the diagram is an upstream router that has access to other lab networks (as well as the upstream Internet)
  • All of the point to point networks are vSwitches (actually a dvSwitch) with a corresponding backend VLAN on my lab switch

Phase 1 – Setting up the underlying network architecture (and making sure it works)

First let’s make sure that we get the vRouter properly configured; here is roughly what it should look like when finished.
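Something along these lines on the vyOS side (the addresses and AS numbers are placeholders for my lab’s values):

set interfaces ethernet eth1 address '10.6.1.1/30'
set interfaces ethernet eth2 address '10.6.2.1/30'
set protocols bgp 65000 parameters router-id '10.6.0.1'
set protocols bgp 65000 neighbor 10.6.1.2 remote-as '65001'
set protocols bgp 65000 neighbor 10.6.2.2 remote-as '65002'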

Now that we have that portion, let’s go ahead and configure the A10 devices (IPSEC-A and IPSEC-B)

 

IPSEC-A

IPSEC-B

Now that we have that configured, let’s make sure that IPSEC is working, and that BGP is working.

From here we can see that both A10 devices have peered with the Lab6 vRouter and are both advertising two networks.

And if we take a deeper look:

Based on this, we can determine the following:

  1. BGP is working between the Lab6 vRouter and both A10 Devices
  2. IPSEC is working between both A10 devices
    1. We know this because of the path of the learned networks from the A10 peer, which could only be learned over the IPSEC tunnel

But still, it’s probably worth confirming IPSEC on the A10 devices anyway:

On IPSEC-A

And on IPSEC-B

Furthermore, let’s verify that we are learning routes via BGP over the IPSEC tunnel:

IPSEC-A

and on IPSEC-B

So, IPSEC is up, and BGP is working perfectly.  Let’s move on to Phase 2.

Phase 2 – Deploying Windows and Active Directory

I’m going to give you the cliff-notes version here, because you either already have AD installed and running, or you can go to some MSFT blog to figure it out.

So…  Install your Windows 2012 server, deploy Active Directory, and make sure that it is reachable by your lab environment (I attached mine to the ‘management’ network).

Phase 3 – Configuring Certificate Services for Windows 2012

This is where it gets a bit hairy.  Windows Server makes it really easy to deploy services by just choosing to add a feature.  Unfortunately sometimes it can be really confusing.

The good folks at Microsoft have published a pretty good guide for configuring a CA, an OCSP responder, and an NDES (SCEP) server.

You can find all of that hotness here:

https://technet.microsoft.com/en-us/library/cc772393(v=ws.10).aspx

Unfortunately NDES seems to have been a bit of an after-thought for the Windows Server environment, so once you have the role set up, the only way you can make meaningful configuration changes is via registry keys (and then restarting the IIS service).

However, once again, the Microsoft folks come through with more solid documentation and have a posting that shares everything you ever wanted to know about NDES (including appropriate registry keys!)

More hotness here:

http://social.technet.microsoft.com/wiki/contents/articles/9063.network-device-enrollment-service-ndes-in-active-directory-certificate-services-ad-cs.aspx

 

Here are some of the common problems that I found when setting this up:

  1. OCSP
    1. Did you remember to do the post deployment tasks available from Server Manager?
    2. Have you created your Revocation configuration? (available from the Online Responder MMC snap-in)
    3. Did you remember to create a copy of the Online Responder Template that allowed auto-enroll and put it into the list of available templates for your Certificate Authority? (this can be done using the Certificate Templates and Certificate Authority MMC snap-ins)
    4. Did you verify that it is actually working? (you can do this by creating a certificate and then running the following command against that cert ‘certutil -URL <fullcertpathhere>’ then checking the revocation information)
  2. NDES
    1. Some devices (like the A10’s) don’t yet use the old certificates for the renewal process and will require re-use of the old password
      1. You need to enable password re-use
      2. You probably want to make that password much longer, since it can be re-used
      3. You probably need to increase the password cache to the number of devices you can support so that you don’t have to re-use passwords
      4. You probably want a certificate that doesn’t last two years for your IPSEC devices (plus we’re auto-rotating, so it’s not like we personally have to do it).  Do this by creating a copy of the IPSEC (Offline request) template and making the appropriate changes
      5. See the registry configuration values from the NDES link above to change the first 3 items and to update NDES to point to your new certificate template for the fourth

 

 

Phase 4 – Configuring the A10 devices to use certificates

Now we have a few things that need to be done:

  1. Install the trusted Root CA on the A10 Devices
  2. Configure SCEP on the A10 devices so that they can automatically get and renew certificates
  3. Configure OCSP on the A10 devices so that they can actually verify that the certificates haven’t been revoked
  4. Update the VPN config to use the new certs

Install the trusted root CA (you know, from the CA you created earlier) on both of your A10 devices.

There are a couple of ways to do this (FTP, TFTP, SCP, SFTP, or GUI).

Since all of the command line ways require you have another server from which to upload, I suggest using the GUI, where you can upload the file directly.

Log into the GUI on your A10 devices and then go to ADC–>SSL Management–>Import

Choose the name that the A10 will reference the CA by (I chose IPSEC-CA), choose the appropriate options (don’t forget to click the radio button for CA certificate), choose the file that is your CA cert (that you downloaded from your CA), and click import.

 

Configuring SCEP on IPSEC-A
The password will be the one that you received from the URL at http://<your NDES server here>/mscep_admin/
You should get a new one for each device you enroll

Configure SCEP on IPSEC-B

Now let’s verify that we actually got our SCEP certs:

Verification on IPSEC-A:

Verification on IPSEC-B:

Woo-hoo- we got our SCEP certs!

Now let’s configure OCSP for checking certificate revocation:
On IPSEC-A

and on IPSEC-B

Finally, let’s update our VPN to use all this new auto-certificate hotness!

On IPSEC-A

and on IPSEC-B

Now, once you’ve done this, you’ll want to verify that it is working with the same VPN commands that were shown earlier.
If you are having trouble getting the tunnels to come up, you can troubleshoot by using the following commands:

These will set the debugging to the highest level possible for VPN (which will show you keys so you can verify they are working correctly)
The log command will follow your vpn debug packet by packet and will provide helpful information as to whether or not you are failing auth (or something else like OCSP)

 

I hope this was somewhat helpful.

Time for…

Phase 5 – Go get a beer.

Or a Scotch.  Or both.  Yeah, probably both- better play it safe.

iDrac, RACADM, sshpass, and BASH

If it were up to me, I suppose that the only thing that I’d really be responsible for would be core networking infrastructure (and consumption of craft brews).  Unfortunately that is not the case.  These days (and since the inception of the NetEng), Network Engineers are presumably responsible for anything that is even remotely network related.  Server needs an IP?  Bluetooth whatchamacallit isn’t pairing with the thingamadoober?  Toaster with an IP address is constantly burning toast?  While we have and provide tools so that most people can handle these things on their own, unfortunately we still end up with a lot of these tasks.

When we ultimately get saddled with these tasks, the best thing that we can do is find the quickest/most efficient way to handle them.

Enter Dell and the OpenStack project that my team is undertaking. My main responsibilities in this project are handling the build-out of the Underlay network (you know, what we used to call ‘The Network’) as well as SDN components of OpenStack.  Remember the first paragraph where I note that the network guys ultimately handle anything remotely network related?  Yeah.

Now first, let me state this: I work for a large org with a small team of what I would consider to be the Special Forces of Infrastructure; unfortunately there just aren’t enough of us.  So my normal server guy didn’t have the cycles to spare when the vendor we chose to implement OpenStack required us to provide the MAC addresses for all of the network interfaces of the 100+ Dell servers we just got.

So, it fell on me to GSD.

After creating a list of IPs to assign to the myriad of servers, and working with remote hands to get IP connectivity working, I still had to set various other settings, as well as get the requested MAC address information.

Enter RACADM.

For those who don’t know, you can actually SSH into iDrac and be met with a bevy of (RACADM) commands at your disposal for getting data, as well as setting it.

My tasks were set as follows:

  1. Set the hostnames for iDrac on all of the servers (remember remote-hands only did the base IP configuration)
  2. Set the DNS servers and enable dynamic registration of hostnames (granted we could create static A records with powershell, but that wasn’t what we wanted and would require a separate workflow)
  3. Set iDrac to use a Tagged VLAN and run off of LOM3 (this enabled us to utilize 3 cables instead of 4 [2x 10GBE for data, 1x 1GBE for PXE/admin/iDrac instead of dedicating a second 1GBE for iDrac])
  4. Enable PXE booting from LOM3
  5. Get the MAC addresses for LOM3

 

So in order to facilitate all of this, I needed to be able to SSH into all of these machines in an automated fashion- sshpass makes this possible (sorry, they host the project on SourceForge).
**Security note, sshpass allows you to put the password into the SSH command/script which is inherently insecure.  Ideally you would use public key auth to facilitate this work, but we don’t spend much time in iDrac, and typically it is only used for remote console which is only available via the web interface.  iDrac does have provisions to allow for public key auth which you can read about here.  It won’t be covered in this post.**

**Usability Note, sshpass is a utility for OS X/Linux, so you’ll need one of those to make this happen.  I understand you should be able to do something similar with PuTTY, but the scripts we are using will be BASH, which requires OS X/Linux unless you want to run cygwin, which won’t be covered**

So, grab sshpass from sourceforge:

bud@black-box:~$ wget -O sshpass.tar.gz "http://downloads.sourceforge.net/project/sshpass/sshpass/1.05/sshpass-1.05.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fsshpass%2Ffiles%2F&ts=1423675742&use_mirror=iweb"

Untar/gzip the file:

bud@black-box:~$ tar -zxvf sshpass.tar.gz

Move into the directory and run the installer:

bud@black-box:~$ cd sshpass-1.05/
bud@black-box:~$ ./install.sh

This will put sshpass into your path for general consumption.

Test sshpass:

bud@black-box:~$ sshpass -p calvin ssh -n -o StrictHostKeyChecking=no -l root 10.1.1.101 racadm racdump

This should dump a ton of stuff from iDrac on that particular host.

Okay, so now that sshpass works, let’s get some stuff done.

Let’s create a helper script (we’ll call it idrac.sh) that we can call to log in and execute our commands:

#!/bin/bash
#first argument is host
HOST=$1
#there may be many arguments so we use shift to iterate through them until the end
shift
#we then use the '$*' to treat them all as one argument
CMD=$*

#login information for idrac
USER=root
PASS=calvin

#put it all together
sshpass -p $PASS ssh -n -o StrictHostKeyChecking=no -l $USER $HOST racadm $CMD
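To sanity-check the helper, point it at a single box first:

./idrac.sh 10.1.1.101 racdump

If you get the same wall of output as the earlier test, the helper is working.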

Okay, now we’re ready to do some work.  So, let’s set those hostnames and DNS settings.

First we need to create a .csv (called hostlist.csv) with the data we want, formatted as server IP,DNS name (no space after the comma, or it will end up in the hostname).  It should look similar to:

10.1.1.101,remote-server-1
10.1.1.102,remote-server-2
10.1.1.103,remote-server-3

Now let’s write another script (called set-dns-settings.sh) to set everything:

#!/bin/bash

RACADM="./idrac.sh"
csv_input=$1

while IFS=, read IP dns; do
$RACADM $IP set iDRAC.Nic.DNSDomainName foo.bar.org
echo "${IP} (${dns}) domain set"
$RACADM $IP set iDRAC.IPv4.DNS1 10.1.1.1
echo "$IP (${dns})DNS1 set"
$RACADM $IP set iDRAC.IPv4.DNS2 10.2.2.2
echo "$IP (${dns})DNS2 set"
$RACADM $IP set iDRAC.Nic.DNSRacName $dns
echo "$IP (${dns})hostname set"
$RACADM $IP set iDRAC.Nic.DNSRegister 1
echo "$IP (${dns}) DNS Auto Registration set"
done < $csv_input

Run the script like this:

./set-dns-settings.sh hostlist.csv

You should receive output similar to:

10.1.1.101 (remote-server-1) domain set
10.1.1.101 (remote-server-1) DNS1 set
10.1.1.101 (remote-server-1) DNS2 set
10.1.1.101 (remote-server-1) hostname set
10.1.1.101 (remote-server-1) DNS Auto Registration set
10.1.1.102 (remote-server-2) domain set
10.1.1.102 (remote-server-2) DNS1 set
10.1.1.102 (remote-server-2) DNS2 set
10.1.1.102 (remote-server-2) hostname set
10.1.1.102 (remote-server-2) DNS Auto Registration set
10.1.1.103 (remote-server-3) domain set
10.1.1.103 (remote-server-3) DNS1 set
10.1.1.103 (remote-server-3) DNS2 set
10.1.1.103 (remote-server-3) hostname set
10.1.1.103 (remote-server-3) DNS Auto Registration set

 

This script will read the IPs and hostnames from the CSV file, then do the following for each line:

  1. Set the domain
  2. Set DNS Server 1
  3. Set DNS Server 2
  4. Set the hostname for iDrac
  5. Set iDrac to register its hostname in DNS (this requires a DNS environment that supports this)

It will report back after each entry has been updated.
**Note, for a large number of servers this may take a while; the iDrac SSH isn’t very quick.  Also, I understand that this could probably be done more efficiently by putting more commands into a single racadm session, but all I need to do is start this script and go do something else for a while, so I’m not particularly concerned about duration.**

Okay, so now I’ve completed steps 1 and 2.  Now we need to move iDrac to use a tagged VLAN on LOM3.

#!/bin/bash

csv_input=$1
VLAN=800

RACADM="./idrac.sh"

while IFS=, read IP dns; do
#set idrac to use LOM3 (remember the helper expects the host as its first argument)
$RACADM $IP set iDRAC.NIC.Selection LOM3
#allow idrac to auto-detect (this will help to prevent us from locking ourselves out)
$RACADM $IP set iDRAC.NIC.AutoDetect Enabled
#enable VLAN tagging on the iDrac NIC (this applies to whatever interface is used for iDrac)
$RACADM $IP set iDRAC.NIC.VLanEnable Enabled
#set the VLAN that will be tagged
$RACADM $IP set iDRAC.NIC.VLanID $VLAN
#set the boot process to boot from PXE for LOM3
$RACADM $IP set NIC.NICConfig.3.LegacyBootProto PXE
#reset the NIC to take the new settings
$RACADM $IP jobqueue create NIC.Integrated.1-3-1 -s TIME_NOW -r pwrcycle
done < $csv_input

For the script above we will call it using the same .CSV file that we created earlier (even though we don’t need the DNS portion anymore).  This will then iterate through the entries and set all of the required interface settings above.

At this point we will have to coordinate with the remote hands (or potentially your own hands) to move all of the 1GBE cables from the dedicated iDrac port to LOM3.

After that is finished, we are ready to create another script (we’ll call it test-dns-ping.sh) to test and make sure that all of our servers have registered in DNS and are pingable as well.  Let’s give it a shot with fping (if you don’t have it, install it).

#!/bin/bash

csv_input=$1

domain=".foo.bar.org"

while IFS=, read IP dns; do
fping ${dns}${domain}
done < $csv_input

 

Notice we’re calling our .CSV file again, and we are iterating through all of the DNS names, testing both DNS resolution as well as availability.

Run the ping script as follows:
bud@black-box:~$ ./test-dns-ping.sh hostlist.csv

You should see something similar to:
remote-server-1.foo.bar.org is alive
remote-server-2.foo.bar.org is alive
remote-server-3.foo.bar.org is alive

Now for our final task: getting the MAC address of LOM3 for all of our servers.  Let’s create another script (named get-macs.sh):

#!/bin/bash

csv_input=$1

while IFS=, read IP dns; do
mac=$(./idrac.sh $dns racdump | egrep '^NIC.Integrated')
echo "$mac" | while read line; do
echo "$dns $line"
done
done < $csv_input

This script will return the MAC addresses of all of the NICs; however, all we want is the MAC address of LOM3.  So all we have to do is invoke it as follows:
./get-macs.sh hostlist.csv | grep 1-3-1

This will iterate through each of the hosts, grab the racdump, then parse out 1-3-1 which is the entry for LOM3.

This will return something like:

remote-server-1 NIC.Integrated.1-3-1    Ethernet                = 01:23:45:67:89:0A
remote-server-2 NIC.Integrated.1-3-1    Ethernet                = 01:23:45:67:89:0B
remote-server-3 NIC.Integrated.1-3-1    Ethernet                = 01:23:45:67:89:0C

 

Now we can take that data and give it to the folks who requested it.  Doing this manually would have taken a couple of days, but thanks to RACADM and some good ol’ bash we were able to bust through it in a few hours, and any new servers that come in can be configured in just a few seconds apiece instead of minutes.

 

Also, this is probably where I mention I wish we had been allowed to choose UCS for this project.

Dynamic DNS and you

Okay, so I know the popular thing with network engineers is to remember the IP of EVERYTHING.  I’m pretty good at it too.  But with a lab at home and a provider that hands me my WAN IP via DHCP, I like to use dynamic DNS.

I’ve used DynDNS for years, and they have a great product, albeit a bit expensive for my tastes.

Enter Google Domains.  $15 a year gets you registration, DNS, and some other fun stuff.

 

Google Domains Dynamic DNS Setup

This assumes you can figure out how to register a domain on google domains and get static DNS working.

Go to your domains area, choose the domain you are working with, then go to the ‘Synthetic Records’ section and choose ‘Dynamic DNS’ from the dropdown:

(screenshot: the Synthetic Records section with ‘Dynamic DNS’ selected)

Then choose the subdomain you wish to have Dynamic DNS for and click ‘Add’ (if you want the main domain to use Dynamic DNS, just leave the subdomain blank and click ‘Add’)

You should now have a new entry:

(screenshot: the new Dynamic DNS entry)

Click ‘View Credentials’ and note the username and password, as these will be needed later on.  It’s also worth noting here that these are specific to each Dynamic DNS entry (thank you, Google, for not making me use my admin login just so that I can use DNS- although, to be fair, DynDNS is moving to a more API-centric approach as well).

 

ddclient Setup and Installation

**Special note, this is based on a CentOS install.  Some systems support the install of ddclient via the apt-get install ddclient or yum install ddclient commands.  I would try those first, then skip to the ddclient configuration section (the configuration files are typically in the same place, or very similar).**
First head on over to the ddclient project page to get the latest copy of ddclient (sorry, it’s hosted on SourceForge).

You could probably tell curl to follow redirects and grab the file from their download link, but I just grab it myself then SCP it over to my linux instance that needs ddclient.

Once you’ve copied it to your linux box, untar the file with tar -jxvf <filename>.tar.bz2

Now that you’ve got the files unzipped, cd ddclient-x.x.x/ (where the x’s represent the version number)

Now let’s make some directories:
mkdir /etc/ddclient
mkdir /var/cache/ddclient

Now let’s copy the relevant sample files:
cp ddclient /usr/local/bin
cp sample-etc_ddclient.conf /etc/ddclient/ddclient.conf
cp sample-etc_rc.d_init.d_ddclient /etc/rc.d/init.d/ddclient

Add ddclient to chkconfig:

chkconfig --add ddclient

We’re done with that download; we can nuke it now:

cd ..
rm -rf ddclient-x.x.x/
(where x’s represent the version number)

Sweet.  Now let’s configure ddclient for use with Google Domains.

Open up /etc/ddclient/ddclient.conf in your favorite text editor (hint: it’s vi/vim) and make sure you have the following info:

####################################
####Google Dynamic DNS##############
server=domains.google.com
protocol=dyndns2
use=web, web=checkip.dyndns.com, web-skip='IP Address'
login=*******************
password='**********'
ssl=yes
foo.yourdomain.com
####################################

Google was nice enough to use the DYNDNS2 protocol, so there is already a lot of support for doing Dynamic DNS with them (e.g. ddclient)

The login will be the ‘Username’ that you noted when you created the Dynamic DNS entry, and the password will be the ‘Password’ that you noted in the same place.

There are a lot of different websites you can use for getting your public IP, but I prefer dyndns, hence the ‘use=web’ section.

SSL is required, and your client will not update unless this is enabled!  Again, kudos to Google for getting this right out of the gate.

Finally, list the domain that you want the dynamic IP for.

Once you’ve saved your file (:wq! for you vi/vim folks!) you can now update your IP from your server using the command ddclient

If all goes well, you should see your dynamic IP update in Google Domains under the ‘Data’ section (you will need to refresh the page to see the results).


A penny for my thoughts?

More like a dollar and I’ll shut the hell up.

This page is a collection of my thoughts and notes to myself; however, it is for you as well, should you find value in it.

Mainly I’ll be documenting the minutiae that bites you in the ass, as well as what I find in my path during my CCIE studies.

Oh, yeah.  I guess its time to start down that studying path.