My home lab is really there to help me learn the technology I am assisting customers with. I have used it to walk a customer through VMware configuration steps & screens, to write blog posts, and as a place where I can break things without impacting anyone.
So far the family CFO has been very understanding of the money and time I have invested. I try to be a good steward of that credit!
Current Home Lab (Mark IIa – April 2020)
This version of my lab has a Management node and a Compute cluster.
MacBook Pro Retina (16 GB RAM, 1 TB SSD) – VMware Fusion
- Ubiquiti Controller
This is a 4-node cluster running vSphere 7 & vSAN.
3 nodes are:
- ASUS H110M-A/M.2 motherboard
- Intel Core i5-7400 CPUs
- 32 GB RAM
- Intel 82571EB 1 Gb NICs
4th node – HP z620 Workstation
- Intel Xeon E5-2620 CPU
- 96 GB RAM
- HP NC382T dual-port NIC
- Core Switch – Dell PowerConnect 6248
- Firewall – Ubiquiti Edge Gateway 3P
- Wi-Fi – Ubiquiti AP-AC-LR (x2)
vSAN – Hybrid diskgroups
- Cache Device – SATA Consumer Grade SSD
- Capacity Device – 1 TB SATA spinning disk (x2)
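For a rough sense of what this hybrid layout yields, here is a back-of-the-envelope capacity sketch based on the devices listed above. The FTT=1 mirroring and the ~30% free-space slack are my assumptions from common vSAN sizing guidance, not numbers from the lab itself (cache devices don't contribute to capacity):

```python
# Back-of-the-envelope vSAN capacity for the 4-node hybrid cluster above.
# Assumptions: FTT=1 with RAID-1 mirroring, ~30% kept free as slack space.
hosts = 4
capacity_disks_per_host = 2
disk_tb = 1.0

raw_tb = hosts * capacity_disks_per_host * disk_tb   # cache SSDs don't count
mirrored_tb = raw_tb / 2                             # FTT=1 stores two copies
usable_tb = mirrored_tb * 0.7                        # leave ~30% for slack/rebuilds

print(f"raw: {raw_tb} TB, after FTT=1: {mirrored_tb} TB, usable: {usable_tb:.1f} TB")
```

So roughly 8 TB raw works out to something under 3 TB of comfortably usable space.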
NFS – Synology DS1512+
- Seagate IronWolf – 4TB (x3) – capacity
- Samsung 128GB SSD (x2) – read/write cache
- vRealize Operations Manager
- vRealize Log Insight
- Linux Jumphost
- Folding At Home – VMware Photon Appliance – VMware Fling – Team VMware (ID 52737)
- Veeam Backup & Replication Community Edition – Veeam continues to be an excellent product! I am very thankful to Veeam for keeping a free edition of their product for vExperts and IT Pros to learn with.
- Using the MacBook Pro as a management node may seem like an unconventional choice. The display on the MBP doesn’t work, so it was easily & cheaply available. The battery gives it a built-in UPS, an external monitor gives it a KVM, and remote access is easy. Most importantly – it is very easy for my family to power it back on after a power outage if I am away!
- The drives and controller in use for vSAN are not on the compatibility guide. I knew this going in. It does work well for what I need to do. The next major generation of the lab will hopefully address this.
- Ideally the compute cluster would be a matched set. The HP workstation was from eBay, and too good a value to pass up. It doubled my RAM and core capacity for under $400. The CPU made the cut for the vSphere 7 upgrade, but may not be supported in the future.
- I have used whitebox-based ESXi for a long time. It does require some research on the HCL for the various components. One of the changes with vSphere 7 is the evolution of Update Manager into Lifecycle Manager. Lifecycle Manager is now aware of the host make & model, and uses that to check the HCL as part of the remediation process. This has great potential to identify upgrade problems before a customer implements the upgrade. It will also impact vSphere home labs built on whitebox components or on workstation-class machines, even ones with supported components: a whitebox machine doesn’t report a make/model at all, and workstation-class systems do, but generally aren’t listed on the VMware HCL. Either way, Lifecycle Manager can’t work with these systems. I completely understand & support this change – it benefits VMware customers. I was able to upgrade my hosts to vSphere 7 manually. Time will tell how I apply patches…
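If you’re curious what identity Lifecycle Manager would see on a given host, you can check the SMBIOS vendor/model strings from the ESXi shell. This is just an illustrative check run over SSH on a host; whitebox boards typically report the motherboard maker or a placeholder string rather than a system make/model that appears on the HCL:

```shell
# On an ESXi host: show the platform (SMBIOS) identity strings
# that make/model-aware tooling reads.
esxcli hardware platform get
# Whitebox builds often show the board vendor or an OEM placeholder
# (e.g. "To Be Filled By O.E.M.") in Vendor Name / Product Name.
```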
Home Lab .NEXT
Getting the RAM in my compute cluster up to 192 GB has been very useful. Looking forward, I might upgrade vSAN to all-flash. This should provide more IOPS as well as move me onto supported components. Some items on my list to puzzle over:
- MikroTik has an affordable 4-port 10 Gb SFP-based switch. They have an 8-port too!
- If I can stick with the motherboards I have, I will hopefully use the on-board NVMe slot on 3 hosts and add a PCIe NVMe card for the 4th.
- I have resisted going the route of used Enterprise gear, mostly due to the noise/power factor. However, if I have to rebuild my whiteboxes significantly, it may be more cost effective to go that route. (A single large host that I can use to nest environments may be a good option.)
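The 192 GB figure mentioned above falls straight out of the node configs listed earlier – three 32 GB whiteboxes plus the 96 GB z620:

```python
# Compute cluster RAM, from the node specs in this post.
whitebox_gb = 3 * 32   # three whitebox nodes at 32 GB each
z620_gb = 96           # the HP z620 workstation
total_gb = whitebox_gb + z620_gb
print(total_gb)  # 192
```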
Mark I – This was a single host (Core i7 and 24 GB of RAM). It served me quite well running vSphere for about 4 years. It is still running to this day – as a desktop computer in my office.
Mark II – March 2019
- 3-node Compute cluster (whitebox nodes, each with 32 GB RAM, a Core i5 CPU, and hybrid vSAN)
- Single-node Management cluster (MacBook Pro Retina running Fusion)
- Synology DS1512+ (12 TB raw with r/w cache)
- Networking – Dell PowerConnect 6248, Ubiquiti Wi-Fi APs and Edge Gateway