Cybersylum

Home Lab

My home lab helps me learn the technology I assist customers with. I use it to:

  • Test updates and configurations
  • Walk through steps & screens with customers
  • Write blog posts
  • Experiment and learn – a place where I can break things without impacting anyone.

So far the family CFO has been very understanding of the money and time I have invested. I try to be a good steward of that credit!

Current Home Lab (Mark V – November 2022)

This version of my lab has a Management node and Compute cluster.

Management Node

MacBook Pro Retina (16 GB RAM, 1 TB SSD) – VMware Fusion

    • Used to run DNS/DHCP/Mail Relay services
    • Using this device helps isolate the most critical services from the lab, ensuring those services stay available while the lab undergoes change.
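
The post does not say which software provides these services, but a single dnsmasq instance in a Fusion VM is one common way to cover both DNS and DHCP for a small lab. A minimal sketch, with entirely hypothetical addresses and names:

```
# /etc/dnsmasq.conf -- illustrative lab values only
domain=lab.local                                # lab DNS suffix
local=/lab.local/                               # answer lab.local queries locally
dhcp-range=192.168.10.100,192.168.10.199,12h    # DHCP pool and lease time
dhcp-option=option:router,192.168.10.1          # default gateway handed to clients
address=/vcenter.lab.local/192.168.10.20        # static record for vCenter
```

Keeping a service like this on the management node means DHCP leases and name resolution survive a full rebuild of the compute cluster.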

Primary Compute Cluster

This is a 4-node cluster running vSphere 7.

Three of the nodes are:

    • ASUS Prime H570-PLUS motherboard
    • Intel Core i5-10400 CPU
    • 128 GB RAM
    • HP NC382T dual-port NIC
    • Intel X520 10 Gb NIC

4th node – HP z620 Workstation

    • Intel Xeon CPU E5-2620
    • 96 GB RAM
    • HP NC382T dual-port NIC
    • Intel X520 10 Gb NIC

Secondary Compute Cluster

I have repurposed one of the older hosts from a prior lab iteration to create a single platform that can support 2 nested ESXi hosts. This is primarily to allow me to experiment with Site Recovery Manager and vRealize Automation blueprints that deploy to multiple clusters.
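
One configuration detail the nested setup depends on (stated here as an assumption, since the post does not spell it out): a nested ESXi guest needs hardware-assisted virtualization exposed to it. In the vSphere Client this is the CPU option "Expose hardware assisted virtualization to the guest OS"; in a VM's .vmx file the equivalent setting is:

```
vhv.enable = "TRUE"
```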

Networking

  • Core Switch – Dell PowerConnect 6248
  • 10 Gb switch – MikroTik CRS309-1G-8S+IN for vMotion and possibly vSAN in the future
  • Firewall – Ubiquiti UDM SE
  • Wi-Fi – Ubiquiti AP-AC-LR

Storage

Synology DS1512+

    • iSCSI and NFS
    • Seagate IronWolf – 4 TB (x3) – capacity
    • Samsung 128 GB SSD (x2) – read/write cache

Synology DS920+

    • iSCSI
    • 2.6 TB Flash

Workloads

  • vRealize Automation
  • VMware Identity Manager
  • VMware Horizon
  • vRealize Operations Manager
  • vRealize Log Insight
  • Plex

Other Applications

  • Veeam Backup & Replication Community Edition – Veeam continues to be an excellent product! I am very thankful to Veeam for keeping a free edition of their product for vExperts and IT Pros to learn with.
  • Nagios Core – This is running on a Raspberry Pi. My time is limited, so I only use it for basic up/down notifications of lab components. It can certainly do much more!
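
Basic up/down monitoring of this sort reduces to a reachability probe, which Nagios handles with plugins such as check_tcp. Purely as an illustration of the idea (this is not Nagios code, and the host names and ports below are made up), a minimal TCP check might look like:

```python
import socket

def is_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, unresolvable, or timed out
        return False

# Hypothetical lab inventory -- substitute real hosts and service ports.
lab_services = {
    "esxi-01": 443,    # ESXi host client
    "synology": 5000,  # DSM web UI
}

for name, port in lab_services.items():
    state = "UP" if is_up(name, port) else "DOWN"
    print(f"{name}:{port} {state}")
```

A real Nagios check adds thresholds, retries, and notification logic on top of exactly this kind of probe.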

Notes

  • Using the MacBook Pro as a management node may seem like an unconventional choice. It is an older MacBook but has enough power to run the bare necessities. It has a built-in UPS and KVM, as well as easy remote access. Most importantly – it is very easy for my family to power it back on after a power outage if I am away!
  • I have used whitebox-based ESXi hosts for a long time. It does require some research and careful planning against the HCL for the various components. The upside for me has been that the lab is quiet, saves power, and is easier to adapt as requirements change. One of the changes with vSphere 7 is the evolution of Update Manager into Lifecycle Manager. Lifecycle Manager is aware of the host make and model, and uses this capability to check the HCL as part of the remediation process. This has great potential to identify upgrade problems before a customer implements the upgrade. It may eventually impact vSphere home labs using whitebox-based ESXi hosts, but the capability will be a great benefit to production environments.

Home Lab .NEXT

The memory capacity of the cluster is close to 480 GB (3 × 128 GB + 96 GB). I finally have enough compute to run the full VMware stack at a moderate activity level and still have some headroom. Some items on my planning list:

  • Replacing the z620 workstation with a matching host in the Compute Cluster. The CPU in this system is reaching the end of its support life. While I believe I will be able to get it to work with vSphere 8, I will eventually need to replace it.
  • Replace the Dell PowerConnect core switch. I am quite firmly entrenched in the Ubiquiti ecosystem. Their products work quite well for what I need and provide enough “nerd knobs” to keep me occupied. If I can find the budget, it would be nice to replace this aging switch with a model that is managed in the same system as the other Ubiquiti products.
  • vSAN – With the 10 Gb MikroTik switch, I can implement an all-flash vSAN datastore. I will use the on-board NVMe slot on 3 hosts and add a PCIe NVMe card for the 4th.
  • Public Cloud integration with VMware solutions and Azure or AWS.
  • I have resisted going the route of used enterprise gear, mostly due to the noise/power factor. However, if I have to rebuild my whiteboxes significantly, it may be more cost-effective to go that route at some point (a single large host that I can use to nest environments may be a good option).

Previous Generations

Mark I – January 2010

This was a single host (Core i7, 12 GB of RAM). It was my first lab, built a few years after I started working with VMware products, and it served me quite well for a long time.

Mark II – May 2015

  • 3 ESXi hosts
    • Core i7 host from the previous generation, with RAM increased to 24 GB
    • 2 HP DL380s with 32 GB RAM each

Mark III – March 2017

  • 3-node Compute cluster (whitebox nodes, each with 32 GB RAM, a Core i5 CPU, and hybrid vSAN)
  • Single-node Management cluster (MacBook Pro Retina running Fusion)
  • Synology DS1512+ (12TB raw with R/W cache)
  • Networking – Dell PowerConnect 6248, Ubiquiti Wi-Fi APs and Edge Gateway

Mark IIIa – April 2020

  • 4-node Compute Cluster (3 whitebox nodes with 32 GB RAM and Core i5 CPU, 1 HP z620 Workstation – 96 GB RAM)
  • Single-node Management cluster (MacBook Pro Retina running Fusion)
  • Synology DS1512+ (12 TB raw with R/W cache)
  • Networking – Dell PowerConnect 6248, Ubiquiti Wi-Fi APs and Edge Gateway

Mark IV – June 2021

  • 4-node Compute Cluster (3 whitebox nodes with 64 GB RAM and Core i5 CPU, 1 HP z620 Workstation – 96 GB RAM)
  • Single-node Management cluster (MacBook Pro Retina running Fusion)
  • 2 Synology Storage arrays
    • 1512+ – 12 TB spinning disk with R/W cache
    • 920+ – 2.6 TB Flash
  • Networking – Dell PowerConnect 6248, Ubiquiti Wi-Fi APs and Edge Gateway

Upcoming Events

  • Mon
    21
    Aug
    2023
    Thu
    24
    Aug
    2023

    VMware Explore 2023 - US

    The dates for THE big virtualization conference has been announced and VMware Explore is back in Las Vegas for 2023!

Categories

Before I Forget Certificates Education Home Lab Horizon View MacOS Networking PowerCLI Professional Development Scripting TechBITS Update Manager VCSA VMUG VMware VMware Cloud on AWS VMware Portal VMware Tools VMworld vSphere vToolBelt Windows 10

Archives

Category

Before I Forget Certificates Education Home Lab Horizon View MacOS Networking PowerCLI Professional Development Scripting TechBITS Update Manager VCSA VMUG VMware VMware Cloud on AWS VMware Portal VMware Tools VMworld vSphere vToolBelt Windows 10

Twitter: Follow Me

My Tweets
Proudly powered by WordPress | Theme: Showme by NEThemes.