When I set up my Tanzu Community Edition environment, I did so from a Linux VM, since the containerized Linux environment on my Chromebook doesn't support the `kind` bootstrap cluster used for the deployment. But now that the Kubernetes cluster is up and running, I'd like to be able to connect to it directly without the aid of a jumpbox. How do I get the appropriate cluster configuration over to my Chromebook?
The Tanzu CLI actually makes that pretty easy - once I figured out the appropriate incantation. I just needed to use the `tanzu management-cluster kubeconfig get` command on my Linux VM to export the kubeconfig of my management (`tce-mgmt`) cluster to a file:
```shell
tanzu management-cluster kubeconfig get --admin --export-file tce-mgmt-kubeconfig.yaml
```
I then used `scp` to pull the file from the VM into my local Linux environment, and proceeded to install `kubectl` and the `tanzu` CLI (making sure to also enable shell auto-completion along the way!).
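The copy step looks something like this - a sketch only, since the VM hostname here is an assumption rather than something from my actual setup:

```shell
# Hypothetical VM hostname - adjust for your own environment.
vm_host="tce-bootstrap-vm"
dest="$HOME/projects/tanzu-homelab/tanzu-setup"

# Make sure the local destination exists, then pull the exported file down.
mkdir -p "$dest"
# Uncomment on your own machine (requires SSH access to the VM):
# scp "${vm_host}:tce-mgmt-kubeconfig.yaml" "$dest/"
```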
Now I'm ready to import the configuration locally with `tanzu login` on my Chromebook:
```shell
❯ tanzu login --kubeconfig ~/projects/tanzu-homelab/tanzu-setup/tce-mgmt-kubeconfig.yaml --context tce-mgmt-admin@tce-mgmt --name tce-mgmt
✔  successfully logged in to management cluster using the kubeconfig tce-mgmt
```
> **Use the absolute path**
> Pass in the full path to the exported kubeconfig file. This helps the Tanzu CLI load the correct config in future terminal sessions.
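To see why the absolute path matters, here's a quick generic shell illustration (nothing Tanzu-specific): a relative path only resolves from the directory where the command was originally run, so it breaks as soon as a later session starts somewhere else.

```shell
# Create a scratch directory holding a stand-in kubeconfig file.
demo_dir="$(mktemp -d)"
cd "$demo_dir"
echo 'apiVersion: v1' > kubeconfig.yaml

relative="kubeconfig.yaml"            # only valid from inside $demo_dir
absolute="$demo_dir/kubeconfig.yaml"  # valid from anywhere

cd /    # simulate a fresh shell session starting elsewhere
[ -f "$relative" ] || echo "relative path no longer resolves"
[ -f "$absolute" ] && echo "absolute path still resolves"
```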
Even though that only imports the management cluster, it actually grants access to both the management and workload clusters:
```shell
❯ tanzu cluster list
  NAME      NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES   PLAN
  tce-work  default    running  1/1           1/1      v1.21.2+vmware.1  <none>  dev

❯ tanzu cluster get tce-work
  NAME      NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES
  tce-work  default    running  1/1           1/1      v1.21.2+vmware.1  <none>
ℹ

Details:

NAME                                                         READY  SEVERITY  REASON  SINCE  MESSAGE
/tce-work                                                    True                     24h
├─ClusterInfrastructure - VSphereCluster/tce-work            True                     24h
├─ControlPlane - KubeadmControlPlane/tce-work-control-plane  True                     24h
│ └─Machine/tce-work-control-plane-vc2pb                     True                     24h
└─Workers
  └─MachineDeployment/tce-work-md-0
    └─Machine/tce-work-md-0-687444b744-crc9q                 True                     24h

❯ tanzu management-cluster get
  NAME      NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES
  tce-mgmt  tkg-system  running  1/1           1/1      v1.21.2+vmware.1  management


Details:

NAME                                                         READY  SEVERITY  REASON  SINCE  MESSAGE
/tce-mgmt                                                    True                     23h
├─ClusterInfrastructure - VSphereCluster/tce-mgmt            True                     23h
├─ControlPlane - KubeadmControlPlane/tce-mgmt-control-plane  True                     23h
│ └─Machine/tce-mgmt-control-plane-7pwz7                     True                     23h
└─Workers
  └─MachineDeployment/tce-mgmt-md-0
    └─Machine/tce-mgmt-md-0-745b858d44-5llk5                 True                     23h


Providers:

  NAMESPACE                          NAME                    TYPE                    PROVIDERNAME  VERSION  WATCHNAMESPACE
  capi-kubeadm-bootstrap-system      bootstrap-kubeadm       BootstrapProvider       kubeadm       v0.3.23
  capi-kubeadm-control-plane-system  control-plane-kubeadm   ControlPlaneProvider    kubeadm       v0.3.23
  capi-system                        cluster-api             CoreProvider            cluster-api   v0.3.23
  capv-system                        infrastructure-vsphere  InfrastructureProvider  vsphere       v0.7.10
```
And I can then tell `kubectl` about the two clusters:
```shell
❯ tanzu management-cluster kubeconfig get tce-mgmt --admin
Credentials of cluster 'tce-mgmt' have been saved
You can now access the cluster by running 'kubectl config use-context tce-mgmt-admin@tce-mgmt'

❯ tanzu cluster kubeconfig get tce-work --admin
Credentials of cluster 'tce-work' have been saved
You can now access the cluster by running 'kubectl config use-context tce-work-admin@tce-work'
```
And sure enough, there are my contexts:
```shell
❯ kubectl config get-contexts
CURRENT   NAME                      CLUSTER    AUTHINFO         NAMESPACE
          tce-mgmt-admin@tce-mgmt   tce-mgmt   tce-mgmt-admin
*         tce-work-admin@tce-work   tce-work   tce-work-admin

❯ kubectl get nodes -o wide
NAME                             STATUS   ROLES                  AGE   VERSION            INTERNAL-IP     EXTERNAL-IP     OS-IMAGE                 KERNEL-VERSION   CONTAINER-RUNTIME
tce-work-control-plane-vc2pb     Ready    control-plane,master   23h   v1.21.2+vmware.1   192.168.1.132   192.168.1.132   VMware Photon OS/Linux   4.19.198-1.ph3   containerd://1.4.6
tce-work-md-0-687444b744-crc9q   Ready    <none>                 23h   v1.21.2+vmware.1   192.168.1.133   192.168.1.133   VMware Photon OS/Linux   4.19.198-1.ph3   containerd://1.4.6
```
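As an aside, both sets of credentials landed in the default `~/.kube/config`, so switching clusters is just a `kubectl config use-context` away. If you'd rather keep each cluster in its own file instead, `kubectl` merges every file on the colon-separated `KUBECONFIG` path list - a sketch with hypothetical filenames:

```shell
# Hypothetical per-cluster kubeconfig files - kubectl builds its view of
# clusters and contexts by merging every file on this colon-separated list.
export KUBECONFIG="$HOME/.kube/tce-mgmt.yaml:$HOME/.kube/tce-work.yaml"

# 'kubectl config get-contexts' would then list contexts from both files,
# and 'kubectl config use-context tce-work-admin@tce-work' switches between them.
```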
Perfect, now I can get back to Tanzuing from my Chromebook without having to jump through a VM. (And, thanks to Tailscale, I can even access my TCE resources remotely!)