This week, VMware posted VMSA-2024-0001, which details a critical (9.9/10) vulnerability in vRealize Aria Automation. While working to get our environment patched, I ran into an interesting error on our Aria Lifecycle appliance:
Error Code: LCMVRAVACONFIG590024 VMware Aria Automation hostname is not valid or unable to run the product specific commands via SSH on the host. Check if VMware Aria Automation is up and running.
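Since the error boils down to LCM not being able to reach Aria Automation over SSH, it's worth ruling out basic connectivity before digging into the appliance itself. Here's a minimal Python sketch of that sanity check; the hostname is a hypothetical placeholder for your Aria Automation appliance's FQDN:

```python
import socket

# Hypothetical placeholder; swap in your Aria Automation appliance's FQDN
HOST = "vra.example.com"

# The LCM error complains about SSH, but checking HTTPS too helps confirm
# whether the appliance is up at all
for port, service in [(22, "SSH"), (443, "HTTPS")]:
    try:
        with socket.create_connection((HOST, port), timeout=5):
            print(f"{service} ({port}/tcp) on {HOST}: reachable")
    except OSError as err:
        print(f"{service} ({port}/tcp) on {HOST}: unreachable ({err})")
```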
VMware has released a fix for this problem in the form of ESXi 7.0 Update 3k:
If you already face the issue, after patching the host to ESXi 7.0 Update 3k, just power on the affected Windows Server 2022 VMs. After you patch a host to ESXi 7.0 Update 3k, you can migrate a running Windows Server 2022 VM from a host of version earlier than ESXi 7.0 Update 3k.
I've been leveraging the open-source Tanzu Community Edition Kubernetes distribution for a little while now, both in my home lab and at work, so I was disappointed to learn that VMware was abandoning the project. TCE had been a pretty good fit for my needs, and now I needed to search for a replacement. VMware is offering a free version of Tanzu Kubernetes Grid as a replacement, but its license permits only non-commercial use, so I wouldn't be able to use it at work.
You may have heard that there's a new vSphere release out in the wild - vSphere 8, which just reached Initial Availability this week. Upgrading the vCenter in my single-host homelab is a very straightforward task, and using the included Lifecycle Manager would make quick work of patching a cluster of hosts... but things get a little trickier with a single host. I could write the installer ISO to a USB drive, boot the host off of that, and go through the install interactively, but what if physical access to the host is kind of inconvenient?
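One way to sidestep the USB dance entirely is to drive the upgrade over SSH with esxcli and an offline depot uploaded to a datastore. Here's a rough Python sketch of that idea using paramiko; the hostname, depot path, and profile name are all hypothetical placeholders (the real profile name ships in the ESXi 8 offline bundle), and this assumes SSH is enabled on the host:

```python
import paramiko

# Lab-only: auto-accept the host key rather than verifying it
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("esxi.example.com", username="root", password="***")

# Placeholder depot/profile names; pull the real ones from the offline
# bundle. The host should be in maintenance mode before updating.
cmd = ("esxcli software profile update "
       "-d /vmfs/volumes/datastore1/VMware-ESXi-8.0-depot.zip "
       "-p ESXi-8.0.0-standard")
stdin, stdout, stderr = client.exec_command(cmd)
print(stdout.read().decode())
print(stderr.read().decode())
client.close()
```

The host still needs a reboot afterward, but no USB drive or console access is required.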
VMware vCenter does wonders for abstracting away the layers of complexity involved in managing a large virtual infrastructure, but when something goes wrong it can be challenging to find exactly where the problem lies. And it can be even harder to proactively address potential issues before they occur.
Fortunately, there's a super-handy utility which can make diagnosing vCenter significantly easier, and it comes in the form of the vSphere Diagnostic Tool Fling.
Way back in 2020, VMware released vSphere 7 Update 1 and introduced the new vSphere Clustering Services (vCLS) to improve how cluster services like the Distributed Resource Scheduler (DRS) operate. vCLS deploys lightweight agent VMs directly on the cluster being managed, and those VMs provide a decoupled and distributed control plane to offload some of the management responsibilities from the vCenter server.
That's very cool, particularly in large continent-spanning environments or those which reach into multiple clouds, but it may not make sense to add those additional workloads in resource-constrained homelabs.
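The documented way to opt a cluster out is "Retreat Mode" (see VMware KB 80472), which amounts to flipping a per-cluster advanced setting on the vCenter server. That's easy enough in the UI, but here's a minimal pyVmomi sketch of the same change; the vCenter address, credentials, and the cluster's domain ID are hypothetical placeholders you'd look up in your own environment:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skip cert validation
si = SmartConnect(host="vcsa.example.com",
                  user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    option_manager = si.RetrieveContent().setting  # vCenter advanced settings
    # Setting the per-cluster vCLS flag to "false" enables Retreat Mode,
    # which removes the vCLS agent VMs from that cluster (KB 80472).
    # "domain-c8" is a placeholder for your cluster's actual domain ID.
    option_manager.UpdateOptions(changedValue=[
        vim.option.OptionValue(
            key="config.vcls.clusters.domain-c8.enabled",
            value="false")
    ])
finally:
    Disconnect(si)
```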
I've been doing a bit of work lately to make my vRealize Automation setup more flexible and dynamic and less dependent upon hardcoded values. To that end, I thought it was probably about time to learn how to interact with the vRA REST API. I wrote this post to share what I've learned and give a quick crash course on how to start doing things with the API.
Exploration Toolkit: Swagger
It can be difficult to figure out where to start when learning a new API.
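Once Swagger points you at the login endpoints, the vRA 8 authentication flow is a two-step token exchange. Here's a rough Python sketch of it, assuming the documented /csp/gateway and /iaas/api endpoints; the hostname and credentials are placeholders, and verify=False is only for lab-grade certificates:

```python
import requests

VRA = "https://vra.example.com"  # placeholder vRA appliance FQDN

# Step 1: exchange credentials for a refresh token
refresh = requests.post(
    f"{VRA}/csp/gateway/am/api/login?access_token",
    json={"username": "configmgr", "password": "***"},
    verify=False,
).json()["refresh_token"]

# Step 2: exchange the refresh token for a short-lived bearer token
token = requests.post(
    f"{VRA}/iaas/api/login",
    json={"refreshToken": refresh},
    verify=False,
).json()["token"]

# Use the bearer token on subsequent API calls, e.g. listing projects
headers = {"Authorization": f"Bearer {token}"}
projects = requests.get(f"{VRA}/iaas/api/projects", headers=headers,
                        verify=False).json()
print([p["name"] for p in projects.get("content", [])])
```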
ESXi-ARM Fling v1.10 Update
On July 20, 2022, VMware released a major update for the ESXi-ARM Fling. Among other fixes and improvements, this version enables in-place ESXi upgrades and adds support for the Quartz64's on-board NIC. To update, I:
1. Wrote the new ISO installer to another USB drive.
2. Attached the installer drive to the USB hub, next to the existing ESXi drive.
3. Booted the installer and selected to upgrade ESXi on the existing device.
Not long ago, I deployed a Tanzu Community Edition Kubernetes cluster in my homelab, and then I fumbled through figuring out how to log into it from a device other than the one I'd used to deploy the cluster with the tanzu CLI. That setup works great for playing with Kubernetes in my homelab, but I'd love to do some Kubernetes with my team at work, and for that I really need the ability to authenticate multiple users with domain credentials.
When I set up my Tanzu Community Edition environment, I did so from a Linux VM since the containerized Linux environment on my Chromebook doesn't support the kind bootstrap cluster used for the deployment. But now that the Kubernetes cluster is up and running, I'd like to be able to connect to it directly without the aid of a jumpbox. How do I get the appropriate cluster configuration over to my Chromebook?
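Once the kubeconfig has been copied over (however you choose to transfer it), a quick way to prove it works from the new device is to point the official kubernetes Python client at it and list the cluster's nodes. Here's a minimal sketch, assuming the client is installed (pip install kubernetes) and using a placeholder path for the exported config:

```python
from kubernetes import client, config

# Placeholder path to the kubeconfig exported from the deployment VM
config.load_kube_config(config_file="~/.kube/tce-cluster-config")

# Listing nodes confirms both connectivity and credentials work
v1 = client.CoreV1Api()
for node in v1.list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```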