I've been leveraging the open-source Tanzu Community Edition Kubernetes distribution for a little while now, both in my home lab and at work, so I was disappointed to learn that VMware was abandoning the project. TCE had been a pretty good fit for my needs, and now I needed to search for a replacement. VMware is offering a free version of Tanzu Kubernetes Grid as a replacement, but it comes with a license solely for non-commercial use so I wouldn't be able to use it at work. And I'd really like to use the same solution in both environments to make development and testing easier on me.
There are a bunch of great projects for running Kubernetes in development/lab environments, and others optimized for much larger enterprise environments, but I struggled to find a product that felt like a good fit for both in the way TCE was. My workloads are few and pretty simple, so most enterprise K8s variants (Tanzu included) would feel like overkill, but I do need to ensure everything remains highly available in the data centers at work.
Plus, I thought it would be a fun learning experience to roll my own Kubernetes on vSphere!
In the next couple of posts, I'll share the details of how I'm using Terraform to provision production-ready vanilla Kubernetes clusters on vSphere (complete with the vSphere Container Storage Interface plugin!) in a consistent and repeatable way. I also plan to document one of the ways I'm leveraging these clusters, which is using them as a part of a Gitlab CI/CD pipeline to churn out weekly VM template builds so I never again have to worry about my templates being out of date.
I have definitely learned a ton in the process (and still have a lot more to learn), but today I'll start by describing how I'm leveraging Packer to create a single VM template ready to enter service as a Kubernetes compute node.
What's Packer, and why?
HashiCorp Packer is a free open-source tool designed to create consistent, repeatable machine images. It's pretty killer as a part of a CI/CD pipeline to kick off new builds based on a schedule or code commits, but also works great for creating builds on-demand. Packer uses the HashiCorp Configuration Language (HCL) to describe all of the properties of a VM build in a concise and readable format.
You might ask why I would bother with using a powerful tool like Packer if I'm just going to be building a single template. Surely I could just do that by hand, right? And of course, you'd be right - but using an Infrastructure as Code tool even for one-off builds has some pretty big advantages.
- It's fast. Packer is able to build a complete VM (including pulling in all available OS and software updates) in just a few minutes, much faster than I could click through an installer on my own.
- It's consistent. Packer will follow the exact same steps for every build, removing the small variations (and typos!) that would surely show up if I did the builds manually.
- It's great for testing changes. Since Packer builds are so fast and consistent, it makes it incredibly easy to test changes as I go. I can be confident that the only changes between two builds will be the changes I deliberately introduced.
- It's self-documenting. The entire VM (and its guest OS) is described completely within the Packer HCL file(s), which I can review to remember which packages were installed, which user account(s) were created, what partition scheme was used, and anything else I might need to know.
- It supports change tracking. A Packer build is just a set of HCL files so it's easy to sync them with a version control system like Git to track (and revert) changes as needed.
Packer is also extremely versatile, and a broad set of external plugins expand its capabilities to support creating machines for basically any environment. For my needs, I'll be utilizing the vsphere-iso builder which uses the vSphere API to remotely build VMs directly on the hypervisors.
Sounds pretty cool, right? I'm not going to go too deep into "how to Packer" in this post, but HashiCorp does provide some pretty good tutorials to help you get started.
Prerequisites
Install Packer
Before being able to use Packer, you have to install it. On Debian/Ubuntu Linux, this process consists of adding the HashiCorp GPG key and software repository, and then simply installing the package:
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install packer
You can learn how to install Packer on other systems by following this tutorial from HashiCorp.
Configure privileges
Packer will need a user account with sufficient privileges in the vSphere environment to be able to create and manage a VM. I'd recommend using an account dedicated to automation tasks, and assigning it the required privileges listed in the vsphere-iso documentation.
Gather installation media
My Kubernetes node template will use Ubuntu 20.04 LTS as the OS so I'll go ahead and download the server installer ISO and upload it to a vSphere datastore to make it available to Packer.
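That upload can happen through the vSphere UI, but it can also be scripted. Here's a rough sketch using the govc CLI; it assumes govc is installed and pointed at vCenter through its GOVC_* environment variables, and it uses the datastore and path names that show up later in this build:

# Hypothetical upload using govc - adjust names and paths for your environment
export GOVC_URL='vcsa.lab.bowdre.net'
export GOVC_USERNAME='packer'
export GOVC_PASSWORD='********'
export GOVC_INSECURE=1
# Copy the local ISO to the datastore path that Packer will reference
govc datastore.upload -ds nuchost-local ./ubuntu-20.04.5-live-server-amd64.iso _ISO/ubuntu-20.04.5-live-server-amd64.iso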
Template build
After the OS is installed and minimally configured, I'll need to add in Kubernetes components like containerd, kubectl, kubelet, and kubeadm, and then apply a few additional tweaks to get it fully ready.
You can see the entirety of my Packer configuration on GitHub, but I'll talk through each file as we go along.
File/folder layout
After quite a bit of experimentation, I've settled on a preferred way to organize my Packer build files. I've found that this structure makes the builds modular enough that it's easy to reuse components in other builds, but still consolidated enough to be easily manageable. This layout is, of course, largely subjective - it's just what works well for me:
.
├── certs
│   ├── ca.cer
├── data
│   ├── meta-data
│   └── user-data.pkrtpl.hcl
├── scripts
│   ├── cleanup-cloud-init.sh
│   ├── cleanup-subiquity.sh
│   ├── configure-sshd.sh
│   ├── disable-multipathd.sh
│   ├── disable-release-upgrade-motd.sh
│   ├── enable-vmware-customization.sh
│   ├── generalize.sh
│   ├── install-ca-certs.sh
│   ├── install-k8s.sh
│   ├── persist-cloud-init-net.sh
│   ├── update-packages.sh
│   ├── wait-for-cloud-init.sh
│   └── zero-disk.sh
├── ubuntu-k8s.auto.pkrvars.hcl
├── ubuntu-k8s.pkr.hcl
└── variables.pkr.hcl
- The certs folder holds the Base64-encoded PEM-formatted certificate of my internal Certificate Authority, which will be automatically installed in the provisioned VM's trusted certificate store.
- The data folder stores files for generating the cloud-init configuration that will automate the OS installation and configuration.
- The scripts directory holds a collection of scripts used for post-install configuration tasks. Sure, I could just use a single large script, but using a bunch of smaller ones helps keep things modular and easy to reuse elsewhere.
- variables.pkr.hcl declares all of the variables which will be used in the Packer build, and sets the default values for some of them.
- ubuntu-k8s.auto.pkrvars.hcl assigns values to those variables. This is where most of the user-facing options will be configured, such as usernames, passwords, and environment settings.
- ubuntu-k8s.pkr.hcl is where the build process is actually described.
Let's quickly run through that build process, and then I'll back up and examine some other components in detail.
ubuntu-k8s.pkr.hcl
packer block
The first block in the file tells Packer about the minimum version requirements for Packer as well as the external plugins used for the build:
// BLOCK: packer
// The Packer configuration.
packer {
  required_version = ">= 1.8.2"
  required_plugins {
    vsphere = {
      version = ">= 1.0.8"
      source  = "github.com/hashicorp/vsphere"
    }
    sshkey = {
      version = ">= 1.0.3"
      source  = "github.com/ivoronin/sshkey"
    }
  }
}
As I mentioned above, I'll be using the official vsphere plugin to handle the provisioning on my vSphere environment. I'll also make use of the sshkey plugin to dynamically generate SSH keys for the build process.
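Since these plugins are declared as requirements rather than shipped with Packer itself, they need to be downloaded before the first build. Packer's init command reads the required_plugins block and fetches them:

# Run from the directory containing the .pkr.hcl files
packer init .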
data block
This section would be used for loading information from various data sources, but I'm only using it for the sshkey plugin (as mentioned above).
// BLOCK: data
// Defines data sources.
data "sshkey" "install" {
  type = "ed25519"
  name = "packer_key"
}
This will generate an Ed25519 keypair, and the public key will include the identifier packer_key to make it easier to manage later on. Using this plugin to generate keys means that I don't have to worry about storing a private key somewhere in the build directory.
locals block
Locals are a type of Packer variable which aren't explicitly declared in the variables.pkr.hcl file. They only exist within the context of a single build (hence the "local" name). Typical Packer variables are static and don't support string manipulation; locals, however, do support expressions that can be used to change their value on the fly. This makes them very useful when you need to combine variables into a single string or concatenate lists of SSH public keys (such as in the build_password and ssh_keys lines below):
// BLOCK: locals
// Defines local variables.
locals {
  ssh_public_key       = data.sshkey.install.public_key
  ssh_private_key_file = data.sshkey.install.private_key_path
  build_tool           = "HashiCorp Packer ${packer.version}"
  build_date           = formatdate("YYYY-MM-DD hh:mm ZZZ", timestamp())
  build_description    = "Kubernetes Ubuntu 20.04 Node template\nBuild date: ${local.build_date}\nBuild tool: ${local.build_tool}"
  iso_paths            = ["[${var.common_iso_datastore}] ${var.iso_path}/${var.iso_file}"]
  iso_checksum         = "${var.iso_checksum_type}:${var.iso_checksum_value}"
  data_source_content = {
    "/meta-data" = file("data/meta-data")
    "/user-data" = templatefile("data/user-data.pkrtpl.hcl", {
      build_username       = var.build_username
      build_password       = bcrypt(var.build_password)
      ssh_keys             = concat([local.ssh_public_key], var.ssh_keys)
      vm_guest_os_language = var.vm_guest_os_language
      vm_guest_os_keyboard = var.vm_guest_os_keyboard
      vm_guest_os_timezone = var.vm_guest_os_timezone
      vm_guest_os_hostname = var.vm_name
      apt_mirror           = var.cloud_init_apt_mirror
      apt_packages         = var.cloud_init_apt_packages
    })
  }
}
This block also makes use of the built-in templatefile() function to insert build-specific variables into the user-data file for cloud-init (more on that in a bit).
source block
The source block tells the vsphere-iso builder how to connect to vSphere, what hardware specs to set on the VM, and what to do with the VM once the build has finished (convert it to template, export it to OVF, and so on).
You'll notice that most of this is just mapping user-defined variables (with the var. prefix) to properties used by vsphere-iso:
// BLOCK: source
// Defines the builder configuration blocks.
source "vsphere-iso" "ubuntu-k8s" {

  // vCenter Server Endpoint Settings and Credentials
  vcenter_server      = var.vsphere_endpoint
  username            = var.vsphere_username
  password            = var.vsphere_password
  insecure_connection = var.vsphere_insecure_connection

  // vSphere Settings
  datacenter = var.vsphere_datacenter
  cluster    = var.vsphere_cluster
  datastore  = var.vsphere_datastore
  folder     = var.vsphere_folder

  // Virtual Machine Settings
  vm_name              = var.vm_name
  vm_version           = var.common_vm_version
  guest_os_type        = var.vm_guest_os_type
  firmware             = var.vm_firmware
  CPUs                 = var.vm_cpu_count
  cpu_cores            = var.vm_cpu_cores
  CPU_hot_plug         = var.vm_cpu_hot_add
  RAM                  = var.vm_mem_size
  RAM_hot_plug         = var.vm_mem_hot_add
  cdrom_type           = var.vm_cdrom_type
  remove_cdrom         = var.common_remove_cdrom
  disk_controller_type = var.vm_disk_controller_type
  storage {
    disk_size             = var.vm_disk_size
    disk_thin_provisioned = var.vm_disk_thin_provisioned
  }
  network_adapters {
    network      = var.vsphere_network
    network_card = var.vm_network_card
  }
  tools_upgrade_policy = var.common_tools_upgrade_policy
  notes                = local.build_description
  configuration_parameters = {
    "devices.hotplug" = "FALSE"
  }

  // Removable Media Settings
  iso_url      = var.iso_url
  iso_paths    = local.iso_paths
  iso_checksum = local.iso_checksum
  cd_content   = local.data_source_content
  cd_label     = var.cd_label

  // Boot and Provisioning Settings
  boot_order       = var.vm_boot_order
  boot_wait        = var.vm_boot_wait
  boot_command     = var.vm_boot_command
  ip_wait_timeout  = var.common_ip_wait_timeout
  shutdown_command = var.vm_shutdown_command
  shutdown_timeout = var.common_shutdown_timeout

  // Communicator Settings and Credentials
  communicator              = "ssh"
  ssh_username              = var.build_username
  ssh_private_key_file      = local.ssh_private_key_file
  ssh_clear_authorized_keys = var.build_remove_keys
  ssh_port                  = var.communicator_port
  ssh_timeout               = var.communicator_timeout

  // Snapshot Settings
  create_snapshot = var.common_snapshot_creation
  snapshot_name   = var.common_snapshot_name

  // Template and Content Library Settings
  convert_to_template = var.common_template_conversion
  dynamic "content_library_destination" {
    for_each = var.common_content_library_name != null ? [1] : []
    content {
      library     = var.common_content_library_name
      description = local.build_description
      ovf         = var.common_content_library_ovf
      destroy     = var.common_content_library_destroy
      skip_import = var.common_content_library_skip_export
    }
  }

  // OVF Export Settings
  dynamic "export" {
    for_each = var.common_ovf_export_enabled == true ? [1] : []
    content {
      name  = var.vm_name
      force = var.common_ovf_export_overwrite
      options = [
        "extraconfig"
      ]
      output_directory = "${var.common_ovf_export_path}/${var.vm_name}"
    }
  }
}
build block
This block brings everything together and executes the build. It calls the source.vsphere-iso.ubuntu-k8s block defined above, and also ties in a file provisioner and a few shell provisioners. file provisioners are used to copy files (like SSL CA certificates) into the VM, while the shell provisioners run commands and execute scripts. Those will be handy for the post-deployment configuration tasks, like updating and installing packages.
// BLOCK: build
// Defines the builders to run, provisioners, and post-processors.
build {
  sources = [
    "source.vsphere-iso.ubuntu-k8s"
  ]

  provisioner "file" {
    source      = "certs"
    destination = "/tmp"
  }

  provisioner "shell" {
    execute_command   = "export KUBEVERSION=${var.k8s_version}; bash {{ .Path }}"
    expect_disconnect = true
    environment_vars = [
      "KUBEVERSION=${var.k8s_version}"
    ]
    scripts = var.post_install_scripts
  }

  provisioner "shell" {
    execute_command   = "bash {{ .Path }}"
    expect_disconnect = true
    scripts           = var.pre_final_scripts
  }
}
So you can see that the ubuntu-k8s.pkr.hcl file primarily focuses on the structure and form of the build, and it's written in such a way that it can be fairly easily adapted for building other types of VMs. Very few things in this file would have to be changed since so many of the properties are derived from the variables.
You can view the full file here.
variables.pkr.hcl
Before looking at the build-specific variable values, let's take a quick look at the declarations that tell Packer which variables I intend to use. After all, Packer requires that variables be declared before they can be used.
Most of these carry descriptions with them so I won't restate them outside of the code block here:
/*
  DESCRIPTION:
  Ubuntu Server 20.04 LTS variables using the Packer Builder for VMware vSphere (vsphere-iso).
*/

// BLOCK: variable
// Defines the input variables.

// vSphere Credentials
variable "vsphere_endpoint" {
  type        = string
  description = "The fully qualified domain name or IP address of the vCenter Server instance. ('vcenter.lab.local')"
}

variable "vsphere_username" {
  type        = string
  description = "The username to login to the vCenter Server instance. ('packer')"
  sensitive   = true
}

variable "vsphere_password" {
  type        = string
  description = "The password for the login to the vCenter Server instance."
  sensitive   = true
}

variable "vsphere_insecure_connection" {
  type        = bool
  description = "Do not validate vCenter Server TLS certificate."
  default     = true
}

// vSphere Settings
variable "vsphere_datacenter" {
  type        = string
  description = "The name of the target vSphere datacenter. ('Lab Datacenter')"
}

variable "vsphere_cluster" {
  type        = string
  description = "The name of the target vSphere cluster. ('cluster-01')"
}

variable "vsphere_datastore" {
  type        = string
  description = "The name of the target vSphere datastore. ('datastore-01')"
}

variable "vsphere_network" {
  type        = string
  description = "The name of the target vSphere network. ('network-192.168.1.0')"
}

variable "vsphere_folder" {
  type        = string
  description = "The name of the target vSphere folder. ('_Templates')"
}

// Virtual Machine Settings
variable "vm_name" {
  type        = string
  description = "Name of the new VM to create."
}

variable "vm_guest_os_language" {
  type        = string
  description = "The guest operating system language."
  default     = "en_US"
}

variable "vm_guest_os_keyboard" {
  type        = string
  description = "The guest operating system keyboard input."
  default     = "us"
}

variable "vm_guest_os_timezone" {
  type        = string
  description = "The guest operating system timezone."
  default     = "UTC"
}

variable "vm_guest_os_type" {
  type        = string
  description = "The guest operating system type. ('ubuntu64Guest')"
}

variable "vm_firmware" {
  type        = string
  description = "The virtual machine firmware. ('efi-secure', 'efi', or 'bios')"
  default     = "efi-secure"
}

variable "vm_cdrom_type" {
  type        = string
  description = "The virtual machine CD-ROM type. ('sata', or 'ide')"
  default     = "sata"
}

variable "vm_cpu_count" {
  type        = number
  description = "The number of virtual CPUs. ('2')"
}

variable "vm_cpu_cores" {
  type        = number
  description = "The number of virtual CPU cores per socket. ('1')"
}

variable "vm_cpu_hot_add" {
  type        = bool
  description = "Enable hot add CPU."
  default     = true
}

variable "vm_mem_size" {
  type        = number
  description = "The size for the virtual memory in MB. ('2048')"
}

variable "vm_mem_hot_add" {
  type        = bool
  description = "Enable hot add memory."
  default     = true
}

variable "vm_disk_size" {
  type        = number
  description = "The size for the virtual disk in MB. ('61440' = 60GB)"
  default     = 61440
}

variable "vm_disk_controller_type" {
  type        = list(string)
  description = "The virtual disk controller types in sequence. ('pvscsi')"
  default     = ["pvscsi"]
}

variable "vm_disk_thin_provisioned" {
  type        = bool
  description = "Thin provision the virtual disk."
  default     = true
}

variable "vm_disk_eagerly_scrub" {
  type        = bool
  description = "Enable VMDK eager scrubbing for VM."
  default     = false
}

variable "vm_network_card" {
  type        = string
  description = "The virtual network card type. ('vmxnet3' or 'e1000e')"
  default     = "vmxnet3"
}

variable "common_vm_version" {
  type        = number
  description = "The vSphere virtual hardware version. (e.g. '19')"
}

variable "common_tools_upgrade_policy" {
  type        = bool
  description = "Upgrade VMware Tools on reboot."
  default     = true
}

variable "common_remove_cdrom" {
  type        = bool
  description = "Remove the virtual CD-ROM(s)."
  default     = true
}

// Template and Content Library Settings
variable "common_template_conversion" {
  type        = bool
  description = "Convert the virtual machine to template. Must be 'false' for content library."
  default     = false
}

variable "common_content_library_name" {
  type        = string
  description = "The name of the target vSphere content library, if used. ('Lab-CL')"
  default     = null
}

variable "common_content_library_ovf" {
  type        = bool
  description = "Export to content library as an OVF template."
  default     = false
}

variable "common_content_library_destroy" {
  type        = bool
  description = "Delete the virtual machine after exporting to the content library."
  default     = true
}

variable "common_content_library_skip_export" {
  type        = bool
  description = "Skip exporting the virtual machine to the content library. Option allows for testing / debugging without saving the machine image."
  default     = false
}

// Snapshot Settings
variable "common_snapshot_creation" {
  type        = bool
  description = "Create a snapshot for Linked Clones."
  default     = false
}

variable "common_snapshot_name" {
  type        = string
  description = "Name of the snapshot to be created if create_snapshot is true."
  default     = "Created By Packer"
}

// OVF Export Settings
variable "common_ovf_export_enabled" {
  type        = bool
  description = "Enable OVF artifact export."
  default     = false
}

variable "common_ovf_export_overwrite" {
  type        = bool
  description = "Overwrite existing OVF artifact."
  default     = true
}

variable "common_ovf_export_path" {
  type        = string
  description = "Folder path for the OVF export."
}

// Removable Media Settings
variable "common_iso_datastore" {
  type        = string
  description = "The name of the source vSphere datastore for ISO images. ('datastore-iso-01')"
}

variable "iso_url" {
  type        = string
  description = "The URL source of the ISO image. ('https://releases.ubuntu.com/20.04.5/ubuntu-20.04.5-live-server-amd64.iso')"
}

variable "iso_path" {
  type        = string
  description = "The path on the source vSphere datastore for ISO image. ('ISOs/Linux')"
}

variable "iso_file" {
  type        = string
  description = "The file name of the ISO image used by the vendor. ('ubuntu-20.04.5-live-server-amd64.iso')"
}

variable "iso_checksum_type" {
  type        = string
  description = "The checksum algorithm used by the vendor. ('sha256')"
}

variable "iso_checksum_value" {
  type        = string
  description = "The checksum value provided by the vendor."
}

variable "cd_label" {
  type        = string
  description = "CD Label"
  default     = "cidata"
}

// Boot Settings
variable "vm_boot_order" {
  type        = string
  description = "The boot order for virtual machines devices. ('disk,cdrom')"
  default     = "disk,cdrom"
}

variable "vm_boot_wait" {
  type        = string
  description = "The time to wait before boot."
}

variable "vm_boot_command" {
  type        = list(string)
  description = "The virtual machine boot command."
  default     = []
}

variable "vm_shutdown_command" {
  type        = string
  description = "Command(s) for guest operating system shutdown."
  default     = null
}

variable "common_ip_wait_timeout" {
  type        = string
  description = "Time to wait for guest operating system IP address response."
}

variable "common_shutdown_timeout" {
  type        = string
  description = "Time to wait for guest operating system shutdown."
}

// Communicator Settings and Credentials
variable "build_username" {
  type        = string
  description = "The username to login to the guest operating system. ('admin')"
}

variable "build_password" {
  type        = string
  description = "The password to login to the guest operating system."
  sensitive   = true
}

variable "build_password_encrypted" {
  type        = string
  description = "The encrypted password to login the guest operating system."
  sensitive   = true
  default     = null
}

variable "ssh_keys" {
  type        = list(string)
  description = "List of public keys to be added to ~/.ssh/authorized_keys."
  sensitive   = true
  default     = []
}

variable "build_remove_keys" {
  type        = bool
  description = "If true, Packer will attempt to remove its temporary key from ~/.ssh/authorized_keys and /root/.ssh/authorized_keys"
  default     = true
}

// Communicator Settings
variable "communicator_port" {
  type        = string
  description = "The port for the communicator protocol."
}

variable "communicator_timeout" {
  type        = string
  description = "The timeout for the communicator protocol."
}

variable "communicator_insecure" {
  type        = bool
  description = "If true, do not check server certificate chain and host name"
  default     = true
}

variable "communicator_ssl" {
  type        = bool
  description = "If true, use SSL"
  default     = true
}

// Provisioner Settings
variable "cloud_init_apt_packages" {
  type        = list(string)
  description = "A list of apt packages to install during the subiquity cloud-init installer."
  default     = []
}

variable "cloud_init_apt_mirror" {
  type        = string
  description = "Sets the default apt mirror during the subiquity cloud-init installer."
  default     = ""
}

variable "post_install_scripts" {
  type        = list(string)
  description = "A list of scripts and their relative paths to transfer and run after OS install."
  default     = []
}

variable "pre_final_scripts" {
  type        = list(string)
  description = "A list of scripts and their relative paths to transfer and run before finalization."
  default     = []
}

// Kubernetes Settings
variable "k8s_version" {
  type        = string
  description = "Kubernetes version to be installed. Latest stable is listed at https://dl.k8s.io/release/stable.txt"
  default     = "1.25.3"
}
The full variables.pkr.hcl can be viewed here.
ubuntu-k8s.auto.pkrvars.hcl
Packer automatically knows to load variables defined in files ending in *.auto.pkrvars.hcl. Storing the variable values separately from the declarations in variables.pkr.hcl makes it easier to protect sensitive values.
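As an aside, any value that I'd rather not write to disk at all (like those passwords) can be supplied through the environment instead, since Packer also reads values for declared variables from environment variables prefixed with PKR_VAR_. A quick sketch of that approach using this build's variable names:

# Hypothetical alternative to storing secrets in the pkrvars file;
# Packer maps PKR_VAR_<name> to the variable of the same name
export PKR_VAR_vsphere_password='SuperSecretPassword'
export PKR_VAR_build_password='SuperSecretPassword'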
So I'll start by telling Packer what credentials to use for connecting to vSphere, and what vSphere resources to deploy to:
/*
  DESCRIPTION:
  Ubuntu Server 20.04 LTS Kubernetes node variables used by the Packer Plugin for VMware vSphere (vsphere-iso).
*/

// vSphere Credentials
vsphere_endpoint            = "vcsa.lab.bowdre.net"
vsphere_username            = "packer"
vsphere_password            = "VMware1!"
vsphere_insecure_connection = true

// vSphere Settings
vsphere_datacenter = "NUC Site"
vsphere_cluster    = "nuc-cluster"
vsphere_datastore  = "nuchost-local"
vsphere_network    = "MGT-Home 192.168.1.0"
vsphere_folder     = "_Templates"
I'll then describe the properties of the VM itself:
// Guest Operating System Settings
vm_guest_os_language = "en_US"
vm_guest_os_keyboard = "us"
vm_guest_os_timezone = "America/Chicago"
vm_guest_os_type     = "ubuntu64Guest"

// Virtual Machine Hardware Settings
vm_name                     = "k8s-u2004"
vm_firmware                 = "efi-secure"
vm_cdrom_type               = "sata"
vm_cpu_count                = 2
vm_cpu_cores                = 1
vm_cpu_hot_add              = true
vm_mem_size                 = 2048
vm_mem_hot_add              = true
vm_disk_size                = 30720
vm_disk_controller_type     = ["pvscsi"]
vm_disk_thin_provisioned    = true
vm_network_card             = "vmxnet3"
common_vm_version           = 19
common_tools_upgrade_policy = true
common_remove_cdrom         = true
Then I'll configure Packer to convert the VM to a template once the build is finished:
// Template and Content Library Settings
common_template_conversion         = true
common_content_library_name        = null
common_content_library_ovf         = false
common_content_library_destroy     = true
common_content_library_skip_export = true

// OVF Export Settings
common_ovf_export_enabled   = false
common_ovf_export_overwrite = true
common_ovf_export_path      = ""
Next, I'll tell it where to find the Ubuntu 20.04 ISO I downloaded and placed on a datastore, along with the SHA256 checksum to confirm its integrity:
// Removable Media Settings
common_iso_datastore = "nuchost-local"
iso_url              = null
iso_path             = "_ISO"
iso_file             = "ubuntu-20.04.5-live-server-amd64.iso"
iso_checksum_type    = "sha256"
iso_checksum_value   = "5035be37a7e9abbdc09f0d257f3e33416c1a0fb322ba860d42d74aa75c3468d4"
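It's easy to double-check that value against the local copy of the ISO before uploading it, just to be sure nothing got mangled in transit:

# Should match the iso_checksum_value above (and Ubuntu's published SHA256SUMS)
sha256sum ubuntu-20.04.5-live-server-amd64.iso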
And then I'll specify the VM's boot device order, as well as the boot command that will be used for loading the cloud-init configuration into the Ubuntu installer:
// Boot Settings
vm_boot_order = "disk,cdrom"
vm_boot_wait  = "4s"
vm_boot_command = [
  "<esc><wait>",
  "linux /casper/vmlinuz --- autoinstall ds=\"nocloud\"",
  "<enter><wait>",
  "initrd /casper/initrd",
  "<enter><wait>",
  "boot",
  "<enter>"
]
Once the installer is booted and running, Packer will wait until the VM is available via SSH and then use these credentials to log in. (How will it be able to log in with those creds? We'll take a look at the cloud-init configuration in just a minute...)
// Communicator Settings
communicator_port       = 22
communicator_timeout    = "20m"
common_ip_wait_timeout  = "20m"
common_shutdown_timeout = "15m"
vm_shutdown_command     = "sudo /usr/sbin/shutdown -P now"
build_remove_keys       = false
build_username          = "admin"
build_password          = "VMware1!"
ssh_keys = [
  "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOpLvpxilPjpCahAQxs4RQgv+Lb5xObULXtwEoimEBpA builder"
]
Finally, I'll create two lists of scripts that will be run on the VM once the OS install is complete. The post_install_scripts will be run immediately after the operating system installation. The update-packages.sh script will cause a reboot, and then the set of pre_final_scripts will do some cleanup and prepare the VM to be converted to a template.
The last bit of this file also designates the desired version of Kubernetes to be installed.
// Provisioner Settings
post_install_scripts = [
  "scripts/wait-for-cloud-init.sh",
  "scripts/cleanup-subiquity.sh",
  "scripts/install-ca-certs.sh",
  "scripts/disable-multipathd.sh",
  "scripts/disable-release-upgrade-motd.sh",
  "scripts/persist-cloud-init-net.sh",
  "scripts/configure-sshd.sh",
  "scripts/install-k8s.sh",
  "scripts/update-packages.sh"
]

pre_final_scripts = [
  "scripts/cleanup-cloud-init.sh",
  "scripts/enable-vmware-customization.sh",
  "scripts/zero-disk.sh",
  "scripts/generalize.sh"
]

// Kubernetes Settings
k8s_version = "1.25.3"
You can find a full example of this file here.
user-data.pkrtpl.hcl
Okay, so we've covered the Packer framework that creates the VM; now let's take a quick look at the cloud-init configuration that will allow the OS installation to proceed unattended.
See the bits that look ${ like_this }? Those placeholders will take input from the locals block of ubuntu-k8s.pkr.hcl mentioned above. So that's how all the OS properties will get set, including the hostname, locale, LVM partition layout, username, password, and SSH keys.
#cloud-config
autoinstall:
  version: 1
  early-commands:
    - sudo systemctl stop ssh
  locale: ${ vm_guest_os_language }
  keyboard:
    layout: ${ vm_guest_os_keyboard }
  network:
    network:
      version: 2
      ethernets:
        mainif:
          match:
            name: e*
          critical: true
          dhcp4: true
          dhcp-identifier: mac
  ssh:
    install-server: true
    allow-pw: true
%{ if length( apt_mirror ) > 0 ~}
  apt:
    primary:
      - arches: [default]
        uri: "${ apt_mirror }"
%{ endif ~}
%{ if length( apt_packages ) > 0 ~}
  packages:
%{ for package in apt_packages ~}
    - ${ package }
%{ endfor ~}
%{ endif ~}
  storage:
    config:
      - ptable: gpt
        path: /dev/sda
        wipe: superblock
        type: disk
        id: disk-sda
      - device: disk-sda
        size: 1024M
        wipe: superblock
        flag: boot
        number: 1
        grub_device: true
        type: partition
        id: partition-0
      - fstype: fat32
        volume: partition-0
        label: EFIFS
        type: format
        id: format-efi
      - device: disk-sda
        size: 1024M
        wipe: superblock
        number: 2
        type: partition
        id: partition-1
      - fstype: xfs
        volume: partition-1
        label: BOOTFS
        type: format
        id: format-boot
      - device: disk-sda
        size: -1
        wipe: superblock
        number: 3
        type: partition
        id: partition-2
      - name: sysvg
        devices:
          - partition-2
        type: lvm_volgroup
        id: lvm_volgroup-0
      - name: home
        volgroup: lvm_volgroup-0
        size: 4096M
        wipe: superblock
        type: lvm_partition
        id: lvm_partition-home
      - fstype: xfs
        volume: lvm_partition-home
        type: format
        label: HOMEFS
        id: format-home
      - name: tmp
        volgroup: lvm_volgroup-0
        size: 3072M
        wipe: superblock
        type: lvm_partition
        id: lvm_partition-tmp
      - fstype: xfs
        volume: lvm_partition-tmp
        type: format
        label: TMPFS
        id: format-tmp
      - name: var
        volgroup: lvm_volgroup-0
        size: 4096M
        wipe: superblock
        type: lvm_partition
        id: lvm_partition-var
      - fstype: xfs
        volume: lvm_partition-var
        type: format
        label: VARFS
        id: format-var
      - name: log
        volgroup: lvm_volgroup-0
        size: 4096M
        wipe: superblock
        type: lvm_partition
        id: lvm_partition-log
      - fstype: xfs
        volume: lvm_partition-log
        type: format
        label: LOGFS
        id: format-log
      - name: audit
        volgroup: lvm_volgroup-0
        size: 4096M
        wipe: superblock
        type: lvm_partition
        id: lvm_partition-audit
      - fstype: xfs
        volume: lvm_partition-audit
        type: format
        label: AUDITFS
        id: format-audit
      - name: root
        volgroup: lvm_volgroup-0
        size: -1
        wipe: superblock
        type: lvm_partition
        id: lvm_partition-root
      - fstype: xfs
        volume: lvm_partition-root
        type: format
        label: ROOTFS
        id: format-root
      - path: /
        device: format-root
        type: mount
        id: mount-root
      - path: /boot
        device: format-boot
        type: mount
        id: mount-boot
      - path: /boot/efi
        device: format-efi
        type: mount
        id: mount-efi
      - path: /home
        device: format-home
        type: mount
        id: mount-home
      - path: /tmp
        device: format-tmp
        type: mount
        id: mount-tmp
      - path: /var
        device: format-var
        type: mount
        id: mount-var
      - path: /var/log
        device: format-log
        type: mount
        id: mount-log
      - path: /var/log/audit
        device: format-audit
        type: mount
        id: mount-audit
  user-data:
    package_upgrade: true
    disable_root: true
    timezone: ${ vm_guest_os_timezone }
    hostname: ${ vm_guest_os_hostname }
    users:
      - name: ${ build_username }
        passwd: "${ build_password }"
        groups: [adm, cdrom, dip, plugdev, lxd, sudo]
        lock-passwd: false
        sudo: ALL=(ALL) NOPASSWD:ALL
        shell: /bin/bash
%{ if length( ssh_keys ) > 0 ~}
        ssh_authorized_keys:
%{ for ssh_key in ssh_keys ~}
          - ${ ssh_key }
%{ endfor ~}
%{ endif ~}
View the full file here. (The meta-data file is empty, by the way.)
post_install_scripts
After the OS install is completed, the shell provisioner will connect to the VM through SSH and run through some tasks. Remember how I keep talking about this build being modular? That goes down to the scripts, too, so I can use individual pieces in other builds without needing to do a lot of tweaking.
You can find all of the scripts here.
wait-for-cloud-init.sh
This simply holds up the process until the /var/lib/cloud/instance/boot-finished file has been created, signifying the completion of the cloud-init process:
#!/bin/bash -eu
echo '>> Waiting for cloud-init...'
while [ ! -f /var/lib/cloud/instance/boot-finished ]; do
  sleep 1
done
cleanup-subiquity.sh
Next I clean up any network configs that may have been created during the install process:
#!/bin/bash -eu
if [ -f /etc/cloud/cloud.cfg.d/99-installer.cfg ]; then
  sudo rm /etc/cloud/cloud.cfg.d/99-installer.cfg
  echo 'Deleting subiquity cloud-init config'
fi

if [ -f /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg ]; then
  sudo rm /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg
  echo 'Deleting subiquity cloud-init network config'
fi
install-ca-certs.sh
The file provisioner mentioned above helpfully copied my custom CA certs to the /tmp/certs/ folder on the VM; this script will install them into the certificate store:
#!/bin/bash -eu
echo '>> Installing custom certificates...'
sudo cp /tmp/certs/* /usr/local/share/ca-certificates/
cd /usr/local/share/ca-certificates/
for file in *.cer; do
  sudo mv -- "$file" "${file%.cer}.crt"
done
sudo /usr/sbin/update-ca-certificates
disable-multipathd.sh
This disables multipathd.
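There's not much to this one; a minimal sketch (assuming the stock multipathd service that ships with Ubuntu 20.04) looks like this:

#!/bin/bash -eu
# Minimal sketch: keep the multipath daemon from running on clones of this template
echo '>> Disabling multipathd...'
sudo systemctl disable multipathd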
disable-release-upgrade-motd.sh
And this one disables the release upgrade notices that would otherwise be displayed upon each login:
#!/bin/bash -eu
echo '>> Disabling release update MOTD...'
sudo chmod -x /etc/update-motd.d/91-release-upgrade
persist-cloud-init-net.sh
I want to make sure that this VM keeps the same IP address following the reboot that will come in a few minutes, so I'll set a quick cloud-init option to help make sure that happens:
#!/bin/sh -eu
echo '>> Preserving network settings...'
echo 'manual_cache_clean: True' | sudo tee -a /etc/cloud/cloud.cfg
configure-sshd.sh
Then I just set a few options for the sshd configuration, like disabling root login:
#!/bin/bash -eu
echo '>> Configuring SSH'
sudo sed -i 's/.*PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo sed -i 's/.*PubkeyAuthentication.*/PubkeyAuthentication yes/' /etc/ssh/sshd_config
sudo sed -i 's/.*PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
install-k8s.sh
This script is a little longer and takes care of all the Kubernetes-specific settings and packages that will need to be installed on the VM.
First I enable the required overlay and br_netfilter modules:
#!/bin/bash -eu
echo ">> Installing Kubernetes components..."

# Configure and enable kernel modules
echo ".. configure kernel modules"
cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter
Then I'll make some networking tweaks to enable forwarding and bridging:
# Configure networking
echo ".. configure networking"
cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system
Next, set up containerd as the container runtime:
# Setup containerd
echo ".. setup containerd"
sudo apt-get update && sudo apt-get install -y containerd apt-transport-https jq
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
Then disable swap:
# Disable swap
echo ".. disable swap"
sudo sed -i '/[[:space:]]swap[[:space:]]/ s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
Next I'll install the Kubernetes components and (crucially) apt-mark hold them so they won't be automatically upgraded without it being a coordinated change:
# Install Kubernetes
echo ".. install kubernetes version ${KUBEVERSION}"
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -y kubelet="${KUBEVERSION}"-00 kubeadm="${KUBEVERSION}"-00 kubectl="${KUBEVERSION}"-00
sudo apt-mark hold kubelet kubeadm kubectl
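As a quick sanity check (easy to run interactively on a clone of the finished template), apt-mark can confirm that the holds stuck:

# Should list kubeadm, kubectl, and kubelet
apt-mark showhold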
update-packages.sh
Lastly, I'll be sure to update all installed packages (excepting the Kubernetes ones, of course), and then perform a reboot to make sure that any new kernel modules get loaded:
#!/bin/bash -eu
echo '>> Checking for and installing updates...'
sudo apt-get update && sudo apt-get -y upgrade
echo '>> Rebooting!'
sudo reboot
pre_final_scripts
After the reboot, all that's left are some cleanup tasks to get the VM ready to be converted to a template and subsequently cloned and customized.
cleanup-cloud-init.sh
I'll start with cleaning up the cloud-init state.
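A minimal sketch of that script, leaning on cloud-init's built-in clean subcommand (the -l flag wipes its logs as well):

#!/bin/bash -eu
# Minimal sketch: clear cloud-init's cached state and logs so cloned VMs start fresh
echo '>> Cleaning up cloud-init...'
sudo cloud-init clean -l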
enable-vmware-customization.sh
And then I'll (re)enable the ability for VMware to customize the guest successfully:
#!/bin/bash -eu
echo '>> Enabling legacy VMware Guest Customization...'
echo 'disable_vmware_customization: true' | sudo tee -a /etc/cloud/cloud.cfg
sudo vmware-toolbox-cmd config set deployPkg enable-custom-scripts true
zero-disk.sh
I'll also execute this handy script to free up unused space on the virtual disk. It works by creating a file which completely fills up the disk, and then deleting that file:
#!/bin/bash -eu
echo '>> Zeroing free space to reduce disk size'
sudo sh -c 'dd if=/dev/zero of=/EMPTY bs=1M || true; sync; sleep 1; sync'
sudo sh -c 'rm -f /EMPTY; sync; sleep 1; sync'
generalize.sh
Lastly, let's do a final run of cleaning up logs, temporary files, and unique identifiers that don't need to exist in a template. This script will also remove the SSH key with the packer_key identifier since that won't be needed anymore.
#!/bin/bash -eu
# Prepare a VM to become a template.

echo '>> Clearing audit logs...'
sudo sh -c 'if [ -f /var/log/audit/audit.log ]; then
  cat /dev/null > /var/log/audit/audit.log
  fi'
sudo sh -c 'if [ -f /var/log/wtmp ]; then
  cat /dev/null > /var/log/wtmp
  fi'
sudo sh -c 'if [ -f /var/log/lastlog ]; then
  cat /dev/null > /var/log/lastlog
  fi'
sudo sh -c 'if [ -f /etc/logrotate.conf ]; then
  logrotate -f /etc/logrotate.conf 2>/dev/null
  fi'
sudo rm -rf /var/log/journal/*
sudo rm -f /var/lib/dhcp/*
sudo find /var/log -type f -delete

echo '>> Clearing persistent udev rules...'
sudo sh -c 'if [ -f /etc/udev/rules.d/70-persistent-net.rules ]; then
  rm /etc/udev/rules.d/70-persistent-net.rules
  fi'

echo '>> Clearing temp dirs...'
sudo rm -rf /tmp/*
sudo rm -rf /var/tmp/*

echo '>> Clearing host keys...'
sudo rm -f /etc/ssh/ssh_host_*

echo '>> Removing Packer SSH key...'
sed -i '/packer_key/d' ~/.ssh/authorized_keys

echo '>> Clearing machine-id...'
sudo truncate -s 0 /etc/machine-id
if [ -f /var/lib/dbus/machine-id ]; then
  sudo rm -f /var/lib/dbus/machine-id
  sudo ln -s /etc/machine-id /var/lib/dbus/machine-id
fi

echo '>> Clearing shell history...'
unset HISTFILE
history -cw
echo > ~/.bash_history
sudo rm -f /root/.bash_history
Kick out the jams (or at least the build)
Now that all the ducks are nicely lined up, let's give them some marching orders and see what happens. All I have to do is open a terminal session to the folder containing the .pkr.hcl files, and then run the Packer build command:
packer build -on-error=abort -force .
Flags
The -on-error=abort option makes sure that the build will abort if any steps in the build fail, and -force tells Packer to delete any existing VMs/templates with the same name as the one I'm attempting to build.
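It can also be worth letting Packer sanity-check the templates and variable files before committing to a real build, which the built-in validate command handles:

# Checks the configuration and variables without building anything
packer validate .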
And off we go! Packer will output details as it goes, which makes it easy to troubleshoot if anything goes wrong.
In this case, though, everything works just fine, and I'm met with a happy "success" message!
And I can pop over to vSphere to confirm that everything looks right.
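For a CLI-based check, assuming govc is still configured as sketched earlier, querying the new template by name works too:

# Reports the template's configuration and confirms it exists
govc vm.info k8s-u2004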
Next steps
My brand new k8s-u2004 template is ready for use! In the next post, I'll walk through the process of manually cloning this template to create my Kubernetes nodes, initializing the cluster, and installing the vSphere integrations. After that process is sorted out nicely, we'll take a look at how to use Terraform to do it all automagically. Stay tuned!