1/30/17

 

The past few months for the home-lab have been fun. The home-lab used to be a sprawling L3 core with a Frame Relay-mimicked cloud and a number of ‘remote’ networks, and it worked great. But as with anything in IT, as you grow with the times your lab also grows or shrinks with the emerging technologies. Since SDN is becoming bigger and bigger, I felt it was time to go to a completely L3-based Spine/Leaf topology and keep L2 to literally the ports on the L3 switches. So I shrunk the lab in favor of VMware vSAN and VMware NSX as the core fabric, with VMware Enterprise Plus, VMware vCloud Director, and vRealize Operations Manager as the core of the Software-Defined Datacenter (SDDC).


On the outside it looks smaller; virtually, that’s a completely different story….

NOTE: plants add humidity. 😛

Now even these pictures are outdated, as this past weekend I officially removed the Cisco 2960Gs and moved the EtherChannel/LACP bonds to the Cisco 3750Gs to make it a completely L3-based network. I’ve been defining the network slowly but surely inside of VMware NSX, and slowly but surely I’ll be removing the 18 VLANs on the physical switches in favor of VMware NSX. The idea is also to get another Cisco 3750E and link it with the other 3750E via a dual 10G pipe for a true datacenter core, with the 3750Gs connected via (8) 1G EtherChannel bonds. Funny enough, though, I recently acquired two Brocade 6610s off eBay and I may actually use them as my new home-lab core since they come with (8) 10G ports natively; I might hook the 3750Es up to the 6610s in an LACP bond and put all of the servers’ EtherChannels on those. What will probably happen, however, is that I’ll scour eBay for another set of 6610s or 7250s dirt cheap and make a complete Brocade network with Cisco spliced into it, then make the whole physical layer completely 10G in favor of VMware NSX.

Now, in the past two months I’ve gotten way more serious about the VLAN project with the vDS switches in preparation for VMware vCloud Director and a multi-tenant Windows Server domain cluster. I’m going to be adding 3 domains, as I had hoped to about a year ago.
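As a side note, when I get tired of clicking through the Web Client, stamping out one of those VLAN-backed port groups on the vDS can be scripted. Here’s a minimal pyVmomi sketch of the idea; the vCenter hostname, credentials, vDS name, port group name, and VLAN ID are all placeholders, not my actual settings:

```python
# Minimal pyVmomi sketch: create a VLAN-backed distributed port group on a vDS.
# The hostname, credentials, vDS name, port group name, and VLAN ID are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab vCenter with a self-signed cert
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Find the vDS by name.
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "Lab-vDS")

# Port group spec: early binding, VLAN tag applied at the port group level.
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.name = "vLAN-110-Servers"
spec.type = "earlyBinding"
spec.numPorts = 32
port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_cfg.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(vlanId=110, inherited=False)
spec.defaultPortConfig = port_cfg

task = dvs.AddDVPortgroup_Task([spec])   # returns a Task; watch it in the client or poll it
Disconnect(si)
```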

In the past two weeks I finished the 18 VLANs for the physical-to-VMware-vDS fabric, and thus it only made sense to work on VMware vSAN. All (3) of the Dell R610s have (1) 120 GB eMLC SSD and (3) 300 GB WD Enterprise 10k Raptors, with each drive presented as its own RAID 0 virtual disk. I recently added another R720 to the cluster, but it’s part of the next generation of servers I’m slowly acquiring, and I haven’t added it into the mix yet.

The total sizing across vSAN is roughly ~2.5 TB of space. The QNAP NAS units (both the 5-bay and the rack-mounted 2U array) are now Tier 2 storage and the vSAN is Tier 1 storage. I also recently moved the vSAN onto its own 10G network on the Brocade 6610 switch! … it’s so fast!
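For anyone wondering where that number comes from, it’s just the capacity-tier drives added up (the 120 GB SSDs are the cache tier, so they don’t count toward capacity). Quick back-of-the-envelope sketch; actual usable space depends on the storage policy (FTT) and vSAN’s own overhead:

```python
# Back-of-the-envelope vSAN capacity for the three R610s currently contributing storage.
# Only the capacity tier counts; the 120 GB eMLC SSDs are the cache tier.
hosts = 3
hdds_per_host = 3
hdd_gb = 300                      # 300 GB 10k Raptors

raw_gb = hosts * hdds_per_host * hdd_gb
usable_ftt1_gb = raw_gb / 2       # FTT=1 mirroring roughly halves it, before metadata overhead

print(f"raw: {raw_gb} GB (~{raw_gb/1024:.1f} TB), usable at FTT=1: ~{usable_ftt1_gb:.0f} GB")
# raw: 2700 GB (~2.6 TB), usable at FTT=1: ~1350 GB
```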

I think I might snag the WD 10k 1 TB drives and add them to the mix, as I recently had to buy larger-wattage PSUs for the R610s anyway. I guess once you get above 24 GB of RAM per host, with all drive bays used and all of the rear expansion slots filled with 10G dual-port and 1G quad-port cards, you use a lot of juice, so I had to get 717W PSUs vs. the 502W ones. Which is fine, since I’m upgrading all of the hosts to 96 GB of RAM at faster speeds than the original PC3-10600.

Also not so easy to see, but the green wire in the pictures above the SonicWall goes into the Raspberry Pi 3, which has a 256 GB micro-USB stick formatted to hold all of my syslog and NetFlow dumps, as well as being the offline Samba v4 Domain Controller with AD integration running on Ubuntu 14.04. Gotta love a $35 quad-core with 1 GB of RAM. A poor man’s utility server, lol.
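The syslog side of that little box is conceptually dead simple: listen on UDP/514 and append everything to the stick. A bare-bones Python sketch of the idea is below; the mount path is a placeholder, and in a real setup you’d let rsyslog or syslog-ng do this job instead of rolling your own:

```python
# Bare-bones syslog sink: listen on UDP/514 and append every message to the USB stick.
# The mount path is a placeholder; a real setup would use rsyslog/syslog-ng instead.
import socket
from datetime import datetime, timezone

LOG_PATH = "/mnt/usb/syslog/lab.log"     # hypothetical mount point for the 256 GB stick

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 514))              # needs root (or CAP_NET_BIND_SERVICE) to bind below 1024

with open(LOG_PATH, "a", buffering=1) as log:   # line-buffered so entries hit the stick promptly
    while True:
        data, (src_ip, _port) = sock.recvfrom(8192)
        stamp = datetime.now(timezone.utc).isoformat()
        log.write(f"{stamp} {src_ip} {data.decode('utf-8', errors='replace').rstrip()}\n")
```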

My desktops log into my G15IT.com domain and DNS via that little sucker; it’s impressive!

 

This past weekend, on Sunday, I finally deployed NSX to the lab and I’ve been tinkering with it as I study along with the Pluralsight videos. Once I get it fully up and running I’ll add it to this posting. I’m sure I’m going to find all kinds of useful things to do with NSX. One thing I really want to do is load-balanced PSC appliances; if NSX can do that like I’ve heard, making a software-defined load-balancer, that’s amazing. Also, I’ve heard you can deploy a complete software-based firewall and have no need for a SonicWall, ASA, or Fortinet firewall; that I’ll have to see to believe. 😀
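While I work out the NSX load-balancer piece, the idea itself is simple: two PSC nodes in a pool behind a VIP, requests handed out round-robin, and traffic only sent to nodes that pass a health probe. Here’s a rough Python sketch of that health-check-plus-round-robin logic; the PSC hostnames and the health URL are assumptions for illustration, not my actual config:

```python
# Sketch of what the NSX edge load balancer will do for the PSC pair:
# round-robin across pool members, skipping any node that fails its health probe.
# Hostnames and the health URL are assumptions for illustration, not my actual config.
from itertools import cycle
import requests

PSC_NODES = ["psc01.lab.local", "psc02.lab.local"]
HEALTH_URL = "https://{host}/websso/HealthStatus"     # assumed PSC health endpoint

def healthy(host: str) -> bool:
    """Probe a PSC node; any non-200 or connection error counts as down."""
    try:
        r = requests.get(HEALTH_URL.format(host=host), verify=False, timeout=5)
        return r.status_code == 200
    except requests.RequestException:
        return False

rr = cycle(PSC_NODES)

def next_backend() -> str:
    """Return the next healthy PSC in round-robin order."""
    for _ in range(len(PSC_NODES)):
        host = next(rr)
        if healthy(host):
            return host
    raise RuntimeError("no healthy PSC nodes")

print(next_backend())
```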

For some reason, VMware says that vSAN needs to be used on the default TCP/IP stack, but I’m going to make it bloody work on its own stack. I don’t want that traffic going over the same hash in memory; I want it completely separated. I love security!
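In the meantime, it’s easy to verify which TCP/IP stack each vmkernel port actually ended up on across the hosts. A quick pyVmomi sketch (the vCenter hostname and credentials are placeholders):

```python
# Quick check of which TCP/IP stack each vmkernel adapter is bound to on every host.
# vCenter hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for vnic in host.config.network.vnic:
        # netStackInstanceKey is 'defaultTcpipStack' unless the vmk was created on a custom stack
        print(f"{host.name:20s} {vnic.device:6s} "
              f"{vnic.spec.ip.ipAddress:15s} stack={vnic.spec.netStackInstanceKey}")
Disconnect(si)
```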

Next up is to make a vMotion VLAN that is routable, because after NSX comes… guess what… Amazon AWS. I’m going to build a private cloud paired with a public cloud using vExpert and VMUG licensing. I got the approval from VMware to figure it out!

Here is some of the VMware vDS magic; I was in the middle of adding the VLANs on each host when these were taken…

NOTE: The two powered-off PSC VMs are for the NSX round-robin load-balanced deployment I’m determined to figure out!

NOTE: The TS farm; going to map them to my static IP block so I can let some techie friends remote into my lab and tinker in the VMware cluster. 😀

 

To be Continued….