VMware Home Lab on Intel NUC 9


I picked up an Intel NUC 9 Extreme kit a few months back (thanks, VMware!) and have been slowly tinkering with turning it into an extremely capable self-contained home lab environment. I'm pretty happy with where things sit right now, so I figured it was about time to start documenting and sharing what I've done.

*But boy would I love some more RAM*



The NUC runs ESXi 7.0u1 and currently hosts the following:

- A Windows Server 2019 domain controller (win01)
- A vCenter Server Appliance (vcsa)
- A VyOS virtual router
- Nested ESXi hosts forming a vSAN cluster
- vRealize Automation 8.2, along with vRealize Suite Lifecycle Manager and Workspace ONE Access

I'm leveraging my $200 VMUG Advantage subscription to provide 365-day licenses for all the VMware bits (particularly vRA, which doesn't come with a built-in evaluation option).

Basic Infrastructure

Setting up the NUC

The NUC connects to my home network through its onboard gigabit Ethernet interface (vmnic0). (The NUC does have a built-in WiFi adapter but for some reason VMware hasn't yet allowed their hypervisor to connect over WiFi - weird, right?) I wanted to use a small 8GB thumbdrive as the host's boot device so I installed that in one of the NUC's internal USB ports. For the purpose of installation, I connected a keyboard and monitor to the NUC, and I configured the BIOS to automatically boot up when power is restored after a power failure.

I used the Chromebook Recovery Utility to write the ESXi installer ISO to another USB drive (how-to here), inserted that bootable drive into a port on the front of the NUC, and booted the NUC from it. Installing ESXi 7.0u1 was as easy as it could possibly be. All hardware was automatically detected and the appropriate drivers loaded. Once the host booted up, I used the DCUI to configure a static IP address. I then shut down the NUC, disconnected the keyboard and monitor, and moved it into the cabinet where it will live out its headless existence.

I was then able to point my web browser to that address to log in to the host and get down to business. First stop: networking. For now, I only need a single standard switch (vSwitch0) with two portgroups: one for the host's vmkernel interface, and the other for the VMs (including the nested ESXi appliances) that are going to run directly on this physical host. The one "gotcha" when working with a nested environment is that you'll need to edit the virtual switch's security settings to "Allow promiscuous mode" and "Allow forged transmits" (for reasons described here).

*Allowing promiscuous mode and forged transmits*
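If you'd rather script that security tweak than click through the UI, here's a minimal PowerCLI sketch (the host FQDN is just an example):

```powershell
# Connect straight to the ESXi host (address is an example)
Connect-VIServer -Server nuchost.lab.bowdre.net -User root

# Nested ESXi networking requires these two security-policy exceptions
Get-VirtualSwitch -Name vSwitch0 |
    Get-SecurityPolicy |
    Set-SecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true
```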

I created a single datastore spanning the entirety of the NUC's 1TB NVMe drive. The nested ESXi hosts will use VMDKs stored here to provide storage to the nested VMs.

*The new datastore*
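That step can be scripted too; here's a rough PowerCLI equivalent, assuming the NVMe drive is the only local disk the host sees and using a datastore name I made up:

```powershell
# Find the local NVMe device (assumes it's the only local disk LUN)
$disk = Get-VMHost nuchost.lab.bowdre.net |
    Get-ScsiLun -LunType disk | Where-Object { $_.IsLocal }

# Carve the whole device into a single VMFS datastore (name is mine)
New-Datastore -VMHost nuchost.lab.bowdre.net -Name 'nuchost-local' `
    -Path $disk.CanonicalName -Vmfs
```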

Domain Controller

I created a new Windows VM with 2 vCPUs, 4GB of RAM, and a 90GB virtual hard drive, and booted it off a Server 2019 evaluation ISO. I gave it a name, a static IP address, and proceeded to install and configure the Active Directory Domain Services and DNS Server roles. I created static A and PTR records for the vCenter Server Appliance I'd be deploying next (vcsa.lab.bowdre.net) and the physical host (nuchost.lab.bowdre.net). I configured ESXi to use this new server for DNS resolution, and confirmed that I could resolve the VCSA's name from the host.

*AD and DNS*
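Those records could also be created from PowerShell on the DC with the DnsServer module; a quick sketch (the IPs here are placeholders, and -CreatePtr assumes the reverse lookup zone already exists):

```powershell
# A records with matching PTRs; the IP addresses are placeholders
Add-DnsServerResourceRecordA -ZoneName 'lab.bowdre.net' -Name 'nuchost' `
    -IPv4Address '192.168.1.2' -CreatePtr
Add-DnsServerResourceRecordA -ZoneName 'lab.bowdre.net' -Name 'vcsa' `
    -IPv4Address '192.168.1.12' -CreatePtr
```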

Before moving on, I installed the Chrome browser on this new Windows VM and also set up remote access via Chrome Remote Desktop. This will let me remotely access and manage my lab environment without having to punch holes in the router firewall (or worry about securing said holes). And it's got "chrome" in the name so it will work just fine from my Chromebooks!


vCenter

I attached the vCSA installation ISO to the Windows VM and performed the vCenter deployment from there. (See, I told you that Chrome Remote Desktop would come in handy!)

*vCenter deployment process*

After the vCenter was deployed and the basic configuration completed, I created a new cluster to contain the physical host. There's likely only ever going to be the one physical host, but I like being able to logically group hosts this way, particularly when working with PowerCLI. I then added the host to the vCenter by its shiny new FQDN.

*Shiny new cluster*
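And since I mentioned PowerCLI, the same steps there would look something like this (the datacenter name and root password are placeholders):

```powershell
# Create the cluster and add the physical host by FQDN
# (datacenter name and credentials below are placeholders)
New-Cluster -Name 'physical-cluster' -Location (Get-Datacenter 'Lab')
Add-VMHost -Name 'nuchost.lab.bowdre.net' -Location 'physical-cluster' `
    -User 'root' -Password 'hunter2' -Force
```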

I've now got a fully-functioning VMware lab, complete with a physical hypervisor to run the workloads, a vCenter server to manage the workloads, and a Windows DNS server to tell the workloads how to talk to each other. Since the goal is to ultimately simulate a (small) production environment, let's set up some additional networking before we add anything else.



Networking

My home network uses a generic private address space, with the internet router providing DHCP addresses in the range .100-.250. I'm using a range outside that DHCP scope for statically-configured IPs, particularly those within my lab environment. Here are the addresses being used by the lab so far:

| IP Address | Hostname | Purpose |
|:---|:---|:---|
| | | Gateway |
| | win01 | AD DC, DNS |
| | nuchost | Physical ESXi host |
| | vcsa | vCenter Server |

Of course, not everything that I'm going to deploy in the lab will need to be accessible from outside the lab environment. This goes for obvious things like the vMotion and vSAN networks of the nested ESXi hosts, but it will also be useful to have internal networks that can be used by VMs provisioned by vRA. So I'll be creating these networks:

| VLAN ID | Network | Purpose |
|:---|:---|:---|
| 1610 | 172.16.10.0/24 | Management |
| 1620 | 172.16.20.0/24 | Servers-1 |
| 1630 | 172.16.30.0/24 | Servers-2 |
| 1698 | 172.16.98.0/24 | vSAN |
| 1699 | 172.16.99.0/24 | vMotion |


I'll start by adding a second vSwitch to the physical host. It doesn't need a physical adapter assigned since this switch will be for internal traffic only. I created two port groups: one tagged for the VLAN 1610 management traffic, which will be useful for attaching VMs on the physical host to the internal network, and a second using VLAN 4095 to pass all VLAN traffic through to the nested ESXi hosts. Again, this vSwitch needs to have its security policy set to allow Promiscuous Mode and Forged Transmits. I also set the vSwitch to support an MTU of 9000 so I can use jumbo frames on the vMotion and vSAN networks.

*Second vSwitch*
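Here's a hedged PowerCLI sketch of that same configuration (the switch and portgroup names are mine, though the "Isolated" name matches what I use later):

```powershell
# Internal-only vSwitch: no uplink NIC, jumbo-frame MTU
$vs = New-VirtualSwitch -VMHost 'nuchost.lab.bowdre.net' -Name 'vSwitch1' -Mtu 9000

# Tagged portgroup for VMs on the physical host, plus a VLAN 4095 trunk
New-VirtualPortGroup -VirtualSwitch $vs -Name 'Management' -VLanId 1610
New-VirtualPortGroup -VirtualSwitch $vs -Name 'Isolated' -VLanId 4095

# Same security exceptions as before, for the nested hosts' sake
$vs | Get-SecurityPolicy |
    Set-SecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true
```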


VyOS

Wouldn't it be great if the VMs that are going to be deployed on those 1610, 1620, and 1630 VLANs could still have their traffic routed out of the internal networks? But routing requires a router (or so my network friends tell me)... so I deployed a VM running the open-source VyOS router platform. I used William Lam's instructions for installing VyOS, making sure to attach the first network interface to the Home-Network portgroup and the second to the Isolated portgroup (VLAN 4095). I then set to work configuring the router.

After logging in to the VM, I entered the router's configuration mode:

```
vyos@vyos:~$ configure
```

I then started with setting up the interfaces: eth0 for the home network, eth1 for the trunked portgroup, and a number of VIFs on eth1 to handle the individual VLANs I'm interested in using.

```
set interfaces ethernet eth0 address ''
set interfaces ethernet eth0 description 'Outside'
set interfaces ethernet eth1 mtu '9000'
set interfaces ethernet eth1 vif 1610 address ''
set interfaces ethernet eth1 vif 1610 description 'VLAN 1610 for Management'
set interfaces ethernet eth1 vif 1610 mtu '1500'
set interfaces ethernet eth1 vif 1620 address ''
set interfaces ethernet eth1 vif 1620 description 'VLAN 1620 for Servers-1'
set interfaces ethernet eth1 vif 1620 mtu '1500'
set interfaces ethernet eth1 vif 1630 address ''
set interfaces ethernet eth1 vif 1630 description 'VLAN 1630 for Servers-2'
set interfaces ethernet eth1 vif 1630 mtu '1500'
set interfaces ethernet eth1 vif 1698 description 'VLAN 1698 for vSAN'
set interfaces ethernet eth1 vif 1698 mtu '9000'
set interfaces ethernet eth1 vif 1699 description 'VLAN 1699 for vMotion'
set interfaces ethernet eth1 vif 1699 mtu '9000'
```

I also set up NAT for the networks that should be routable:

```
set nat source rule 10 outbound-interface 'eth0'
set nat source rule 10 source address '172.16.10.0/24'
set nat source rule 10 translation address 'masquerade'
set nat source rule 20 outbound-interface 'eth0'
set nat source rule 20 source address '172.16.20.0/24'
set nat source rule 20 translation address 'masquerade'
set nat source rule 30 outbound-interface 'eth0'
set nat source rule 30 source address '172.16.30.0/24'
set nat source rule 30 translation address 'masquerade'
set nat source rule 100 outbound-interface 'eth0'
set nat source rule 100 translation address 'masquerade'
set protocols static route 0.0.0.0/0 next-hop ''
```

And I configured DNS forwarding:

```
set service dns forwarding allow-from ''
set service dns forwarding domain 10.16.172.in-addr.arpa. server ''
set service dns forwarding domain 20.16.172.in-addr.arpa. server ''
set service dns forwarding domain 30.16.172.in-addr.arpa. server ''
set service dns forwarding domain lab.bowdre.net server ''
set service dns forwarding listen-address ''
set service dns forwarding listen-address ''
set service dns forwarding listen-address ''
set service dns forwarding name-server ''
```

Finally, I also configured VyOS's DHCP server so that I won't have to statically configure the networking for VMs deployed from vRA:

```
set service dhcp-server shared-network-name SCOPE_10_MGMT authoritative
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 default-router ''
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 dns-server ''
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 domain-name 'lab.bowdre.net'
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 lease '86400'
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 range 0 start ''
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 range 0 stop ''
set service dhcp-server shared-network-name SCOPE_20_SERVERS authoritative
set service dhcp-server shared-network-name SCOPE_20_SERVERS subnet 172.16.20.0/24 default-router ''
set service dhcp-server shared-network-name SCOPE_20_SERVERS subnet 172.16.20.0/24 dns-server ''
set service dhcp-server shared-network-name SCOPE_20_SERVERS subnet 172.16.20.0/24 domain-name 'lab.bowdre.net'
set service dhcp-server shared-network-name SCOPE_20_SERVERS subnet 172.16.20.0/24 lease '86400'
set service dhcp-server shared-network-name SCOPE_20_SERVERS subnet 172.16.20.0/24 range 0 start ''
set service dhcp-server shared-network-name SCOPE_20_SERVERS subnet 172.16.20.0/24 range 0 stop ''
set service dhcp-server shared-network-name SCOPE_30_SERVERS authoritative
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 default-router ''
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 dns-server ''
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 domain-name 'lab.bowdre.net'
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 lease '86400'
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 range 0 start ''
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 range 0 stop ''
```

Satisfied with my work, I ran the commit and save commands. BOOM, this server jockey just configured a router!

Nested vSAN Cluster

Alright, it's time to start building up the nested environment. To start, I grabbed the latest Nested ESXi Virtual Appliance .ova, courtesy of William Lam. I went ahead and created DNS records for the hosts I'd be deploying, and I mapped out what IPs would be used on each VLAN:

| Hostname | 1610-Management | 1698-vSAN | 1699-vMotion |
|:---|:---|:---|:---|
| esxi01 | | | |
| esxi02 | | | |
| esxi03 | | | |

Deploying the virtual appliances is just like any other "Deploy OVF Template" action. I placed the VMs on the physical-cluster compute resource and opted to thin-provision the VMDKs on the local datastore. I chose the "Isolated" VM network, which uses VLAN 4095 to make all the internal VLANs available on a single portgroup.

*Deploying the nested ESXi OVF*

And I set the networking properties accordingly:

*OVF networking settings*

These virtual appliances come with 3 hard drives. The first will be used as the boot device, the second for vSAN caching, and the third for vSAN capacity. I doubled the size of the second and third drives, to 8GB and 16GB respectively:

*OVF storage configuration*
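This appliance can also be deployed with PowerCLI's OVF cmdlets, which is handy when standing up several hosts at once. A rough sketch under my assumptions (the file path, names, and exact OVF property keys may differ on your copy of the OVA):

```powershell
# Load the OVF spec so its mappings and properties can be set
# (path, names, and property keys below are assumptions)
$ova = 'C:\lab\Nested_ESXi7.0u1_Appliance.ova'
$ovfConfig = Get-OvfConfiguration -Ovf $ova

# Map the appliance NIC to the VLAN 4095 trunk portgroup
$ovfConfig.NetworkMapping.VM_Network.Value = 'Isolated'

# Guest properties control the nested host's identity
$ovfConfig.Common.guestinfo.hostname.Value = 'esxi01.lab.bowdre.net'

# Deploy onto the physical host with thin-provisioned disks
Import-VApp -Source $ova -OvfConfiguration $ovfConfig -Name 'esxi01' `
    -VMHost (Get-Cluster 'physical-cluster' | Get-VMHost) `
    -Datastore (Get-Datastore 'nuchost-local') -DiskStorageFormat Thin
```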

After booting the new host VMs, I created a new cluster in vCenter and then added the nested hosts:

*New nested hosts added to a cluster*

Next, I created a new Distributed Virtual Switch to break out the VLAN trunk on the nested host "physical" adapters into the individual VLANs I created on the VyOS router. Again, each port group will need to allow Promiscuous Mode and Forged Transmits, and I set the dvSwitch MTU size to 9000 (to support jumbo frames on the vSAN and vMotion portgroups).

*New dvSwitch for nested traffic*

I migrated the physical NICs and vmk0 to the new dvSwitch, and then created new vmkernel interfaces for vMotion and vSAN traffic on each of the nested hosts:

*ESXi vmkernel interfaces*
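Those vmkernel interfaces can be created with PowerCLI as well. Here's a sketch for one host's vSAN interface (the dvSwitch/portgroup names and the IP are examples); as far as I know this cmdlet can't assign the vMotion TCP/IP stack, so that tweak stayed in the UI:

```powershell
# vSAN vmkernel interface on one nested host
# (switch/portgroup names and the IP address are examples)
New-VMHostNetworkAdapter -VMHost 'esxi01.lab.bowdre.net' `
    -VirtualSwitch 'nested-dvSwitch' -PortGroup 'vSAN' `
    -IP '172.16.98.11' -SubnetMask '255.255.255.0' `
    -Mtu 9000 -VsanTrafficEnabled $true
```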

I then ssh'd into the hosts and used vmkping to make sure they could talk to each other over these interfaces. Since I had changed the vMotion interface to use the vMotion TCP/IP stack, I needed to append the -S vmotion flag to that command:

```
[root@esxi01:~] vmkping -I vmk1
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=64 time=0.243 ms
64 bytes from icmp_seq=1 ttl=64 time=0.260 ms
64 bytes from icmp_seq=2 ttl=64 time=0.262 ms

--- ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.243/0.255/0.262 ms

[root@esxi01:~] vmkping -I vmk2 -S vmotion
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=64 time=0.202 ms
64 bytes from icmp_seq=1 ttl=64 time=0.312 ms
64 bytes from icmp_seq=2 ttl=64 time=0.242 ms

--- ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.202/0.252/0.312 ms
```

Okay, time to throw some vSAN on these hosts. Select the cluster object, go to the Configure tab, scroll down to vSAN, and click "Turn on vSAN". This will be a single-site cluster, and I don't need to enable any additional services. When prompted, I claim the 8GB drives for the cache tier and the 16GB drives for capacity.

*Configuring vSAN*
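The same can be done from PowerCLI: enable vSAN on the cluster, then build a disk group on each host by claiming disks by size (the cluster name here is an assumption):

```powershell
# Enable vSAN on the nested cluster (cluster name is an assumption)
Set-Cluster -Cluster 'nested-cluster' -VsanEnabled $true -Confirm:$false

# Claim the 8GB disks for cache and the 16GB disks for capacity
foreach ($esx in Get-Cluster 'nested-cluster' | Get-VMHost) {
    $cache    = $esx | Get-ScsiLun -LunType disk | Where-Object CapacityGB -eq 8
    $capacity = $esx | Get-ScsiLun -LunType disk | Where-Object CapacityGB -eq 16
    New-VsanDiskGroup -VMHost $esx -SsdCanonicalName $cache.CanonicalName `
        -DataDiskCanonicalName $capacity.CanonicalName
}
```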

It'll take a few minutes for vSAN to get configured on the cluster.

*vSAN capacity is... not much, but it's a start*

Huzzah! Next stop:

vRealize Automation 8.2

The vRealize Easy Installer makes it, well, easy to install vRealize Automation (and vRealize Orchestrator, on the same appliance) and its prerequisites, vRealize Suite Lifecycle Manager (LCM) and Workspace ONE Access (formerly VMware Identity Manager) - provided that you've got enough resources. The vRA virtual appliance deploys with a whopping 40GB of memory allocated to it. Post-deployment, I found that I was able to trim that down to 30GB without seeming to break anything, but going much lower than that would result in services failing to start.
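If you want to try the same memory trim, it's a quick PowerCLI operation once the appliance is shut down (the VM name 'vra' is just an example):

```powershell
# Gracefully stop the vRA appliance, trim its memory, and boot it back up
# (the VM name 'vra' is an example)
Get-VM 'vra' | Shutdown-VMGuest -Confirm:$false
while ((Get-VM 'vra').PowerState -ne 'PoweredOff') { Start-Sleep -Seconds 10 }
Set-VM -VM 'vra' -MemoryGB 30 -Confirm:$false | Start-VM
```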

Anyhoo, each of these VMs will need to be resolvable in DNS so I started by creating some A records:


I then attached the installer ISO to my Windows VM and ran through the installation from there.

*vRealize Easy Installer*

Similar to the vCenter deployment process, this one prompts you for all the information it needs up front and then takes care of everything from there. That's great news because this is a pretty long deployment; it took probably two hours from clicking the final "Okay, do it" button to being able to log in to my shiny new vRealize Automation environment.


So that's a glimpse into how I built my nested ESXi lab - all for the purpose of being able to develop and test vRealize Automation templates and vRealize Orchestrator workflows in a semi-realistic environment. I've used this setup to write a vRA integration for using phpIPAM to assign static IP addresses to deployed VMs. I wrote a complicated vRO workflow for generating unique hostnames which fit a corporate naming standard and don't conflict with any other names in vCenter, Active Directory, or DNS. I also developed a workflow for (optionally) creating AD objects under appropriate OUs based on properties generated on the cloud template; VMware just announced similar functionality with vRA 8.3 and, honestly, my approach works much better for my needs anyway. And, most recently, I put the finishing touches on a solution for (optionally) creating static records in a Microsoft DNS server from vRO.

I'll post more about all that work soon but this post has already gone on long enough. Stay tuned!