vSphere with Tanzu using NSX-T Automated Lab Deployment


Description

Similar to other "VMware Automated Lab Deployment Scripts" (such as here, here and here), this script makes it very easy for anyone with VMware Cloud Foundation 4 (for vSphere 7.0 deployments) or VMware Tanzu (for vSphere 7.0U1 deployments) licensing to deploy vSphere with Kubernetes/Tanzu in a Nested Lab environment for learning and educational purposes. All required VMware components (ESXi, vCenter Server, NSX Unified Appliance and Edge) are automatically deployed and configured to allow enablement of vSphere with Kubernetes. For more details about vSphere with Kubernetes, please refer to the official VMware documentation here.

Below is a diagram of what is deployed as part of the solution and you simply need to have an existing vSphere environment running that is managed by vCenter Server and with enough resources (CPU, Memory and Storage) to deploy this "Nested" lab. For a complete end-to-end example including workload management enablement (post-deployment operation) and the deployment of a Tanzu Kubernetes Grid (TKG) Cluster, please have a look at the Sample Execution section below.

You are now ready to get your K8s on! 😁

Changelog

  • 02/09/2023

    • Allow additional NSX-T Edge nodes
  • 02/03/2023

    • Fix issue #29
  • 03/08/2021

    • Changes to better support vSphere 7.0 Update 1 & NSX-T 3.1.x
    • Added TKG Content Library
    • Minor misc. revisions
  • 02/21/2021

    • Verified support for vSphere 7.0 Update 1 & NSX-T 3.1
    • Fix T0 Interface creation due to API changes with NSX-T
  • 04/27/2020

    • Enable minimum vSphere with K8s Deployment. Please see this blog post for more details.
  • 04/13/2020

    • Initial Release

Requirements

  • vCenter Server running vSphere 6.7 or later

    • If your physical storage is vSAN, please ensure you've applied the following setting as mentioned here
  • Resource Requirements

    • Compute

      • Ability to provision VMs with up to 8 vCPU
      • Ability to provision up to 116-140 GB of memory
      • DRS-enabled Cluster (not required, but without DRS, vApp creation will not be possible)
    • Network

      • Single Standard or Distributed Portgroup (Native VLAN) used to deploy all VMs
        • 6 x IP Addresses for VCSA, ESXi, NSX-T UA and Edge VM
        • 5 x Consecutive IP Addresses for Kubernetes Control Plane VMs
        • 1 x IP Address for T0 Static Route
        • 32 x IP Addresses (/27) for Egress CIDR range is the minimum (must not overlap with Ingress CIDR)
        • 32 x IP Addresses (/27) for Ingress CIDR range is the minimum (must not overlap with Egress CIDR)
        • All IP Addresses should be able to communicate with each other
    • Storage

      • Ability to provision up to 1TB of storage

      Note: For detailed requirements, please refer to the official document here

  • VMware Cloud Foundation Licenses

  • Desktop (Windows, Mac or Linux) with the latest PowerShell Core and PowerCLI 12.0 Core installed. See instructions here for more details; a minimal install sketch follows this list

  • vSphere 7 & NSX-T OVAs:
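
For the desktop requirement above, here is a minimal sketch of installing PowerCLI from the PowerShell Gallery (the linked instructions remain the authoritative steps; the certificate setting is optional and only relevant for labs using self-signed certificates):

Install-Module -Name VMware.PowerCLI -Scope CurrentUser
# Optional: ignore the self-signed certificates commonly used in nested labs
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false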

FAQ

  1. What if I do not have a VMware Cloud Foundation 4 License?

  2. Can I reduce the default CPU, Memory and Storage resources?

    • You can, but it is highly recommended to leave the current defaults for the best working experience. For non-vSphere with Kubernetes usage, you can certainly tune down the resources (see the sketch below). For vSphere Pod usage, it is possible to deploy the NSX-T Edge with just 4 vCPU; however, if you are going to deploy TKG Clusters, you will need 8 vCPUs on the NSX-T Edge for proper functionality. For memory resources, you can reduce the ESXi VM memory to 16GB, but if you intend to deploy K8s applications/workloads, you will want to keep the default. For NSX-T memory, I have seen cases where the system becomes unresponsive; although you can probably tune it down a bit more, I would strongly suggest you keep the defaults unless you plan to do exhaustive testing to ensure there is no negative impact.

    UPDATE (04/27/20): Please see this blog post for more details.
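
    As an illustration only (values taken from the guidance above; assumes vSphere Pod usage without TKG Clusters or heavy K8s workloads), the corresponding variables in the Configuration section could be tuned down as follows:

      $NestedESXivMEM = "16" #GB, reduced from the default of 24
      $NSXTEdgevCPU = "4" #sufficient for vSphere Pods only; TKG Clusters require 8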

  3. Can I just deploy vSphere (VCSA, ESXi) and vSAN without NSX-T and vSphere with Kubernetes?

    • Yes, simply search for the following variables and change their values to 0 so the script does not deploy the NSX-T components or run through their configuration:

      $setupTanzuStoragePolicy = 0 # skip creating the Tanzu VM Storage Policy
      $deployNSXManager = 0 # skip deploying the NSX-T Manager
      $deployNSXEdge = 0 # skip deploying the NSX-T Edge
      $postDeployNSXConfig = 0 # skip the post-deployment NSX-T configuration
      $setupTanzu = 0 # skip the vSphere with Kubernetes/Tanzu configuration
      
  4. Can I just deploy vSphere (VCSA, ESXi), vSAN and NSX-T but not configure it for vSphere with Kubernetes?

    • Yes, but the NSX-T automation will still contain some configurations related to vSphere with Kubernetes. They do not affect the usage of NSX-T, so you can simply ignore or delete those settings. Search for the following variable and change its value to 0 to skip the vSphere with Kubernetes configurations:

      $setupTanzu = 0
      
  5. Can the script deploy two NSX-T Edges?

    • Yes, simply append an additional Edge entry to the configuration and it will be brought into the Edge Cluster during configuration, as shown below. The limit is 10 Edge Nodes per Cluster.
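
    For example, appending a hypothetical second Edge (the name and IP address below are placeholders for illustration) to the $NSXTEdgeHostnameToIPs variable described in the Configuration section:

      $NSXTEdgeHostnameToIPs = @{
          "tanzu-nsx-edge-3a" = "172.17.31.116"
          "tanzu-nsx-edge-3b" = "172.17.31.117" # hypothetical additional Edge node
      }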
  6. How do I enable vSphere with Kubernetes after the script has completed?

    • Please refer to the official VMware documentation here for the instructions
  7. How do I troubleshoot enabling or consuming vSphere with Kubernetes?

  8. Is there a way to automate the enablement of Workload Management to a vSphere Cluster?

Configuration

Before you can run the script, you will need to edit it and update a number of variables to match your deployment environment. Details on each section are described below, including the actual values used in my home lab environment.

This section describes the credentials to your physical vCenter Server to which the Tanzu lab environment will be deployed:

$VIServer = "mgmt-vcsa-01.cpbu.corp"
$VIUsername = "administrator@vsphere.local"
$VIPassword = "VMware1!"

This section describes the location of the files required for deployment.

$NestedESXiApplianceOVA = "C:\Users\william\Desktop\Tanzu\Nested_ESXi7.0_Appliance_Template_v1.ova"
$VCSAInstallerPath = "C:\Users\william\Desktop\Tanzu\VMware-VCSA-all-7.0.0-15952498"
$NSXTManagerOVA = "C:\Users\william\Desktop\Tanzu\nsx-unified-appliance-3.0.0.0.0.15946738.ova"
$NSXTEdgeOVA = "C:\Users\william\Desktop\Tanzu\nsx-edge-3.0.0.0.0.15946738.ova"

Note: The path to the VCSA Installer must be the extracted contents of the ISO

This section defines the number of Nested ESXi VMs to deploy along with their associated IP Address(es). The names are merely the display names of the VMs when deployed. At a minimum, you should deploy at least three hosts, but you can always add additional hosts and the script will automatically take care of provisioning them correctly (see the example below the code block).

$NestedESXiHostnameToIPs = @{
    "tanzu-esxi-7" = "172.17.31.113"
    "tanzu-esxi-8" = "172.17.31.114"
    "tanzu-esxi-9" = "172.17.31.115"
}
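
For example, a fourth host could be added with one more entry in the hashtable (the hostname and IP address below are placeholders for illustration):

$NestedESXiHostnameToIPs = @{
    "tanzu-esxi-7" = "172.17.31.113"
    "tanzu-esxi-8" = "172.17.31.114"
    "tanzu-esxi-9" = "172.17.31.115"
    "tanzu-esxi-10" = "172.17.31.117" # hypothetical additional host
}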

This section describes the resources allocated to each of the Nested ESXi VM(s). Depending on your usage, you may need to increase the resources. For Memory and Disk configuration, the unit is in GB.

$NestedESXivCPU = "4"
$NestedESXivMEM = "24" #GB
$NestedESXiCachingvDisk = "8" #GB
$NestedESXiCapacityvDisk = "100" #GB

This section describes the VCSA deployment configuration such as the VCSA deployment size, Networking & SSO configurations. If you have ever used the VCSA CLI Installer, these options should look familiar.

$VCSADeploymentSize = "tiny"
$VCSADisplayName = "tanzu-vcsa-3"
$VCSAIPAddress = "172.17.31.112"
$VCSAHostname = "tanzu-vcsa-3.cpbu.corp" #Change to IP if you don't have valid DNS
$VCSAPrefix = "24"
$VCSASSODomainName = "vsphere.local"
$VCSASSOPassword = "VMware1!"
$VCSARootPassword = "VMware1!"
$VCSASSHEnable = "true"

This section describes the location as well as the generic networking settings applied to the Nested ESXi, VCSA & NSX VMs.

$VMDatacenter = "San Jose"
$VMCluster = "Cluster-01"
$VMNetwork = "SJC-CORP-MGMT"
$VMDatastore = "vsanDatastore"
$VMNetmask = "255.255.255.0"
$VMGateway = "172.17.31.253"
$VMDNS = "172.17.31.5"
$VMNTP = "pool.ntp.org"
$VMPassword = "VMware1!"
$VMDomain = "cpbu.corp"
$VMSyslog = "172.17.31.112"
$VMFolder = "Tanzu"
# Applicable to Nested ESXi only
$VMSSH = "true"
$VMVMFS = "false"

This section describes the configuration of the new vCenter Server from the deployed VCSA. Default values are sufficient.

$NewVCDatacenterName = "Tanzu-Datacenter"
$NewVCVSANClusterName = "Workload-Cluster"
$NewVCVDSName = "Tanzu-VDS"
$NewVCDVPGName = "DVPG-Management Network"

This section describes the Tanzu Configurations. Default values are sufficient.

# Tanzu Configuration
$StoragePolicyName = "tanzu-gold-storage-policy"
$StoragePolicyTagCategory = "tanzu-demo-tag-category"
$StoragePolicyTagName = "tanzu-demo-storage"
$DevOpsUsername = "devops"
$DevOpsPassword = "VMware1!"

This section describes the NSX-T configurations. The default values are sufficient except for the following variables, which must be defined by the user (the rest can be left as defaults): $NSXLicenseKey, $NSXVTEPNetwork, $T0GatewayInterfaceAddress, $T0GatewayInterfaceStaticRouteAddress and the NSX-T Manager and Edge sections.

# NSX-T Configuration
$NSXLicenseKey = "NSX-LICENSE-KEY"
$NSXRootPassword = "VMware1!VMware1!"
$NSXAdminUsername = "admin"
$NSXAdminPassword = "VMware1!VMware1!"
$NSXAuditUsername = "audit"
$NSXAuditPassword = "VMware1!VMware1!"
$NSXSSHEnable = "true"
$NSXEnableRootLogin = "true"
$NSXVTEPNetwork = "Tanzu-VTEP" # This portgroup needs to be created before running the script

# Transport Node Profile
$TransportNodeProfileName = "Tanzu-Host-Transport-Node-Profile"

# Transport Zones
$TunnelEndpointName = "TEP-IP-Pool"
$TunnelEndpointDescription = "Tunnel Endpoint for Transport Nodes"
$TunnelEndpointIPRangeStart = "172.30.1.10"
$TunnelEndpointIPRangeEnd = "172.30.1.20"
$TunnelEndpointCIDR = "172.30.1.0/24"
$TunnelEndpointGateway = "172.30.1.1"

$OverlayTransportZoneName = "TZ-Overlay"
$OverlayTransportZoneHostSwitchName = "nsxswitch"
$VlanTransportZoneName = "TZ-VLAN"
$VlanTransportZoneNameHostSwitchName = "edgeswitch"

# Network Segment
$NetworkSegmentName = "Tanzu-Segment"
$NetworkSegmentVlan = "0"

# T0 Gateway
$T0GatewayName = "Tanzu-T0-Gateway"
$T0GatewayInterfaceAddress = "172.17.31.119" # should be a routable address
$T0GatewayInterfacePrefix = "24"
$T0GatewayInterfaceStaticRouteName = "Tanzu-Static-Route"
$T0GatewayInterfaceStaticRouteNetwork = "0.0.0.0/0"
$T0GatewayInterfaceStaticRouteAddress = "172.17.31.253"

# Uplink Profiles
$ESXiUplinkProfileName = "ESXi-Host-Uplink-Profile"
$ESXiUplinkProfilePolicy = "FAILOVER_ORDER"
$ESXiUplinkName = "uplink1"

$EdgeUplinkProfileName = "Edge-Uplink-Profile"
$EdgeUplinkProfilePolicy = "FAILOVER_ORDER"
$EdgeOverlayUplinkName = "uplink1"
$EdgeOverlayUplinkProfileActivepNIC = "fp-eth1"
$EdgeUplinkName = "tep-uplink"
$EdgeUplinkProfileActivepNIC = "fp-eth2"
$EdgeUplinkProfileTransportVLAN = "0"
$EdgeUplinkProfileMTU = "1600"

# Edge Cluster
$EdgeClusterName = "Edge-Cluster-01"

# NSX-T Manager Configurations
$NSXTMgrDeploymentSize = "small"
$NSXTMgrvCPU = "6" #override default size
$NSXTMgrvMEM = "24" #override default size
$NSXTMgrDisplayName = "tanzu-nsx-3"
$NSXTMgrHostname = "tanzu-nsx-3.cpbu.corp"
$NSXTMgrIPAddress = "172.17.31.118"

# NSX-T Edge Configuration
$NSXTEdgeDeploymentSize = "medium"
$NSXTEdgevCPU = "8" #override default size
$NSXTEdgevMEM = "32" #override default size
$NSXTEdgeHostnameToIPs = @{
    "tanzu-nsx-edge-3a" = "172.17.31.116"
}

Once you have saved your changes, you can now run the PowerCLI script as you normally would.
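
For example, from a PowerShell Core session (the script file name below is an assumption inferred from the log file name in the Logging section; use the actual file name from the repository):

./vsphere-with-tanzu-nsxt-lab-deployment.ps1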

Logging

Additional verbose logging is written to a log file in your current working directory named vsphere-with-tanzu-nsxt-lab-deployment.log
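
To follow the log while the deployment is running, you can tail it from another PowerShell session, for example:

Get-Content -Path .\vsphere-with-tanzu-nsxt-lab-deployment.log -Tail 20 -Wait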

Sample Execution

In the example below, I will be using a single /24 native VLAN (172.17.31.0/24) to which all the VMs provisioned by the automation script will be connected. It is expected that you will have a similar configuration, which is the most basic configuration for POC and testing purposes.

Hostname                   IP Address                       Function
tanzu-vcsa-3.cpbu.corp     172.17.31.112                    vCenter Server
tanzu-esxi-7.cpbu.corp     172.17.31.113                    ESXi
tanzu-esxi-8.cpbu.corp     172.17.31.114                    ESXi
tanzu-esxi-9.cpbu.corp     172.17.31.115                    ESXi
tanzu-nsx-edge.cpbu.corp   172.17.31.116                    NSX-T Edge
tanzu-nsx-ua.cpbu.corp     172.17.31.118                    NSX-T Unified Appliance
n/a                        172.17.31.119                    T0 Static Route Address
n/a                        172.17.31.120 to 172.17.31.125   K8s Master Control Plane VMs
n/a                        172.17.31.140/27                 Ingress CIDR Range
n/a                        172.17.31.160/27                 Egress CIDR Range

Note: Make sure the Ingress/Egress CIDR ranges do NOT overlap and that the IP Addresses within those blocks are not being used. This is important as the Egress CIDR will consume at least 15 IP Addresses for the SNAT of each namespace within the Supervisor Cluster.
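
If you want to sanity-check your own ranges before enabling Workload Management, here is a minimal PowerShell sketch (not part of the deployment script) that verifies two CIDR blocks do not overlap:

function Test-CidrOverlap {
    param([string]$CidrA, [string]$CidrB)

    $parse = {
        param([string]$Cidr)
        $ip, $prefix = $Cidr.Split('/')
        $bytes = ([System.Net.IPAddress]::Parse($ip)).GetAddressBytes()
        [Array]::Reverse($bytes)                      # little-endian order for BitConverter
        $addr  = [BitConverter]::ToUInt32($bytes, 0)
        $size  = [math]::Pow(2, 32 - [int]$prefix)
        $start = [math]::Floor($addr / $size) * $size # align to the network boundary
        [pscustomobject]@{ Start = $start; End = $start + $size - 1 }
    }

    $a = & $parse $CidrA
    $b = & $parse $CidrB
    ($a.Start -le $b.End) -and ($b.Start -le $a.End)
}

# Ingress (172.17.31.128-159) vs Egress (172.17.31.160-191) from the table above: returns False
Test-CidrOverlap "172.17.31.140/27" "172.17.31.160/27"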

Lab Deployment Script

Here is a screenshot of running the script if all basic pre-reqs have been met and the confirmation message before starting the deployment:

Here is an example output of a complete deployment:

Note: Deployment time will vary based on underlying physical infrastructure resources. In my lab, this took ~40min to complete.

Once completed, you will end up with your deployed vSphere with Kubernetes Lab, which is placed into a vApp.

Enable Workload Management

To consume the vSphere with Kubernetes capability in vSphere 7, you must enable workload management on a specific vSphere Cluster, which is currently not part of the automation script. The instructions below outline the steps and configuration values used in my example. For more details, please refer to the official VMware documentation here.

Step 1 - Login to vSphere UI and click on Menu->Workload Management and click on the Enable button

Step 2 - Select the Workload-Cluster vSphere Cluster, which should automatically show up in the Compatible list. If it does not, then something has gone wrong with either the selected configuration or there was an error during deployment that you may have missed.

Step 3 - Select the Kubernetes Control Plane Size; Tiny is sufficient for this lab

Step 4 - Configure the Management Network by selecting the DVPG-Management-Network distributed portgroup, which is automatically created for you as part of the automation. Fill out the rest of the network configuration based on your environment

Step 5 - Configure the Workload Network by selecting the Tanzu-VDS distributed virtual switch, which is automatically created for you as part of the automation. After selecting a valid VDS, the Edge Cluster option should automatically populate with our NSX-T Edge Cluster called Edge-Cluster-01. Next, fill in your DNS server along with both the Ingress and Egress CIDR values (a /27 network is the minimum; you can go larger)

Step 6 - Configure the Storage policies by selecting the tanzu-gold-storage-policy VM Storage Policy which is automatically created for you as part of the automation or any other VM Storage Policy you wish to use.

Step 7 - Finally, review workload management configuration and click Finish to begin the deployment.

This will take some time depending on your environment, and you will see various errors on the screen; that is expected. In my example, it took ~26 minutes to complete. You will know it is completely done when you refresh the Workload Management UI and see a Running status along with an accessible Control Plane Node IP Address, in my case 172.17.31.129

Note: In the future, I may look into automating this portion of the configuration to further accelerate the deployment. For now, it is recommended to get familiar with the concepts of vSphere with Kubernetes by going through the workflow manually so you understand what is happening.

Create Namespace

Before we can deploy a workload into the Supervisor Cluster using vSphere Pods, we need to first create a vSphere Namespace and assign a user and VM Storage Policy.

Step 1 - Under the Namespaces tab within the workload management UI, select the Supervisor Cluster (aka vSphere Cluster enabled with workload management) and provide a name.

Step 2 - Click on Add Permissions to assign both the user administrator@vsphere.local and devops@vsphere.local (which was automatically created by the automation), or any other valid user within vSphere, to be able to deploy workloads. Then click on Edit Storage to assign the VM Storage Policy tanzu-gold-storage-policy or any other valid VM Storage Policy.

Step 3 - Finally click on the Open URL under the Namespace Status tile to download kubectl and vSphere plugin and extract that onto your desktop.

Deploy Sample K8s Application

Step 1 - Login to Control Plane IP Address:

./kubectl vsphere login --server=172.17.31.129 -u administrator@vsphere.local --insecure-skip-tls-verify

Step 2 - Change context into our yelb namespace:

./kubectl config use-context yelb

Switched to context "yelb".

Step 3 - Create a file called enable-all-policy.yaml with the following content:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
spec:
  podSelector: {}
  ingress:
  - {}
  egress:
  - {}
  policyTypes:
  - Ingress
  - Egress

Apply the policy by running the following:

./kubectl apply -f enable-all-policy.yaml

networkpolicy.networking.k8s.io/allow-all created

Step 4 - Deploy our K8s Application called Yelb:

./kubectl apply -f https://raw.githubusercontent.com/lamw/vmware-k8s-app-demo/master/yelb-lb.yaml

service/redis-server created
service/yelb-db created
service/yelb-appserver created
service/yelb-ui created
deployment.apps/yelb-ui created
deployment.apps/redis-server created
deployment.apps/yelb-db created
deployment.apps/yelb-appserver created

Step 5 - Access the Yelb UI by retrieving the External Load Balancer IP Address provisioned by NSX-T and then opening a web browser to that IP Address:

./kubectl get service

NAME             TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
redis-server     ClusterIP      10.96.0.69    <none>          6379/TCP       43s
yelb-appserver   ClusterIP      10.96.0.48    <none>          4567/TCP       42s
yelb-db          ClusterIP      10.96.0.181   <none>          5432/TCP       43s
yelb-ui          LoadBalancer   10.96.0.75    172.17.31.130   80:31924/TCP   42s

Deploy Tanzu Kubernetes Cluster

Step 1 - Create a new subscribed vSphere Content Library pointing to https://wp-content.vmware.com/v2/latest/lib.json, which contains the VMware Tanzu Kubernetes Grid (TKG) Images that must be synced before you can deploy a TKG Cluster.
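
If you prefer to script this step rather than use the vSphere UI, here is a sketch using the PowerCLI New-ContentLibrary cmdlet (run it while connected to the newly deployed vCenter Server; the library name and datastore below are placeholders you should adjust to your environment):

New-ContentLibrary -Name "TKG-Content-Library" -Datastore (Get-Datastore -Name "vsanDatastore") -AutomaticSync -SubscriptionUrl "https://wp-content.vmware.com/v2/latest/lib.json"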

Step 2 - Navigate to the Workload-Cluster and under Namespaces->General click on Add Library to associate the vSphere Content Library we had just created in the previous step.

Step 3 - Create a file called tkg-cluster.yaml with the following content:

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-1
  namespace: yelb
spec:
  distribution:
    version: v1.16.8
  topology:
    controlPlane:
      class: best-effort-xsmall
      count: 1
      storageClass: tanzu-gold-storage-policy
    workers:
      class: best-effort-xsmall
      count: 3
      storageClass: tanzu-gold-storage-policy
  settings:
    network:
      cni:
        name: calico
      services:
        cidrBlocks: ["198.51.100.0/12"]
      pods:
        cidrBlocks: ["192.0.2.0/16"]
    storage:
      defaultClass: tanzu-gold-storage-policy

Step 4 - Create TKG Cluster by running the following:

./kubectl apply -f tkg-cluster.yaml

tanzukubernetescluster.run.tanzu.vmware.com/tkg-cluster-1 created

Step 5 - Log in to the TKG Cluster by running the following:

./kubectl vsphere login --server=172.17.31.129 -u administrator@vsphere.local --insecure-skip-tls-verify --tanzu-kubernetes-cluster-name tkg-cluster-1 --tanzu-kubernetes-cluster-namespace yelb

Step 6 - Verify the TKG Cluster is ready before use by running the following command:

./kubectl get machine

NAME                                           PROVIDERID                                       PHASE
tkg-cluster-1-control-plane-2lnfb              vsphere://421465e7-bded-c92d-43ba-55e0a862b828   running
tkg-cluster-1-workers-p98cj-644dd658fd-4vtjj   vsphere://4214d30f-5fd8-eae5-7b1e-f28b8576f38e   provisioned
tkg-cluster-1-workers-p98cj-644dd658fd-bjmj5   vsphere://42141954-ecaf-dc15-544e-a7ef2b30b7e9   provisioned
tkg-cluster-1-workers-p98cj-644dd658fd-g6zxh   vsphere://4214d101-4ed0-97d3-aebc-0d0c3a7843cb   provisioned

Step 7 - Change context into tkg-cluster-1 and you are now ready to deploy K8s apps into a TKG Cluster provisioned by vSphere with Kubernetes!

./kubectl config use-context tkg-cluster-1

Network Topology

Here is a view of what the networking looks like (Network Topology tab in the NSX-T UI) once this is fully configured and workloads are deployed. You can see where the T0 Static Route Address is being used to connect both vSphere Pods (icons on the left) and Tanzu Kubernetes Grid (TKG) Clusters (icons on the right).
