Upgrade vRealize Operations Tenant App to 8.6 Using the Offline Upgrade Image

In this blog post, I am going to review the process of upgrading the vRealize Operations Tenant App for VMware Cloud Director virtual appliance from version 2.6 to version 8.6.

  1. First, download the vROps Tenant App 8.6 offline upgrade ISO from the VMware Marketplace. The upgrade bundle is available at the following URL > Resource & Support > ‘Offline Upgrade: vRealize Operations TA 8.6’

    https://marketplace.cloud.vmware.com/services/details/vrealize-operations-tenant-app-for-vmware-cloud-director-2-61211?slug=true
  2. Take a snapshot of the Tenant App appliance.
  3. Connect the ISO to the Tenant App appliance. You can use ‘Client Device’, ‘Host Device’, ‘Datastore ISO File’ or ‘Content Library ISO File’ to mount the ISO.
  4. In a web browser, log in to the virtual appliance management interface (VAMI). The URL for the VAMI is https://tenantapp_appliance_address:5480
  5. Click the Update tab.
  6. Click Settings, select Use CDROM Updates, and click Save Settings.
  7. Click Status and click Check Updates. The appliance version appears in the list of available updates.

  8. Click Install Updates and click OK. It will take a while for the upgrade to complete.
  9. After the updates install, click the System tab and click Reboot.
  10. After the reboot, confirm the Tenant App version is 8.6.

Upgrade vRealize Operations Management Pack for vCloud Director from 5.5 to 8.6

I’ve recently upgraded vRealize Operations Manager from 8.4 to 8.6. The installed version of the vROps Management Pack for VCD was 5.5, which is incompatible with vROps 8.6 and VMware Cloud Director 10.3.1. To make it compatible, I had to upgrade the vROps Management Pack for VCD to 8.6.


Please find the steps below to upgrade the Management Pack.

  1. Download the following vRealize Operations Management Pack for vCloud Director 8.6 from VMware Marketplace.
    • vmware-vcd-mp-8-1634219770748.pak
  2. Once downloaded, log in to the vRealize Operations Manager 8.6 UI – https://<vROps FQDN/IP>/ui
  3. Navigate to Data Sources > Integrations > Repository.
  4. From ‘Installed Integrations‘, locate ‘Management Pack for VMware Cloud Director’.

5. Click the More Options menu and select Upgrade.

6. Select the ‘Install the PAK file even if it is already installed’ check box.

This selection reloads the PAK file (Management Pack) but retains the custom preferences of the user. Also, this selection does not overwrite or update the solution alerts, symptoms, recommendations, and policies.

7. Select the ‘Reset Default Content, overwriting to a newer version provided by this update’ check box.

This selection reloads the PAK file and overwrites the existing solution alerts, symptoms, recommendations, and policies with newer versions provided with the current PAK file.

WARNING: User modifications to DEFAULT Alert Definitions, Symptoms, Recommendations, Policy Definitions, Views, Dashboards, Widgets and Reports supplied by the current version of Management Pack will be overwritten. To save your modifications to default content, clone or backup the content before you proceed.

8. Click Upload.

9. Click Next.
10. Read and accept the EULA and click Next. The install might take several minutes to complete.
11. Click Finish once the installation is complete.

12. Confirm the upgrade is complete by checking the version of the Management Pack under ‘More Options‘ > About.

13. Check and confirm that the ‘Cloud Director Adapter‘ is collecting data from VCD. The status of the Cloud Director Adapter should be OK.

  • Navigate to Data Sources > Integrations > Accounts > Cloud Director Adapter.

How to enable TKG in Container Service Extension (CSE) 3.1.1?

Step 1: Download the TKG OVA

Starting with CSE 3.1.1, providers can import Ubuntu 20.04-based VMware Tanzu Kubernetes Grid OVAs into VCD via the CSE server CLI. The following link provides the different TKG templates. Download the template with the Kubernetes version you need.
Note: Ubuntu 20.04 Kubernetes OVAs from VMware Tanzu Kubernetes Grid versions 1.4.0, 1.3.1, and 1.3.0 are supported.
Kubernetes OVAs for VMware Tanzu Kubernetes Grid 1.4.0 are available here


I’ve downloaded the Ubuntu 2004 Kubernetes v1.21.2 OVA since that’s the latest available version.
File Name : ubuntu-2004-kube-v1.21.2+vmware.1-tkg.1-7832907791984498322.ova

Step 2: Import TKG OVA to VCD Catalog

Upload the downloaded OVA to the CSE server. Use the following command to import the OVA into the catalog.

# cse template import -c encrypted-config.yaml -F ubuntu-2004-kube-v1.21.2+vmware.1-tkg.1-7832907791984498322.ova

Required Python version: >= 3.7.3
Installed Python version: 3.7.12 (default, Nov 23 2021, 15:49:55)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
Password for config file decryption:
Decrypting 'encrypted-config.yaml'
Validating config file 'encrypted-config.yaml'
InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised.
Connected to vCloud Director (vcd.lab.com:443)
Connected to vCenter Server 'demovc.local' as '[email protected]' (demovc.local)
Config file 'encrypted-config.yaml' is valid
Uploading 'ubuntu-2004-kube-v1.21.2+vmware.1-tkg.1-7832907791984498322' to catalog 'cse-site1-k8s'
Uploaded 'ubuntu-2004-kube-v1.21.2+vmware.1-tkg.1-7832907791984498322' to catalog 'cse-site1-k8s'
Writing metadata onto catalog item ubuntu-2004-kube-v1.21.2+vmware.1-tkg.1-7832907791984498322.
Successfully imported TKGm OVA.

Step 3: Restart the CSE service

I assume you’ve configured CSE to run as a service. If yes, restart the service as shown below.
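
A minimal sketch, assuming CSE was registered as a systemd unit named cse.service (adjust the unit name to whatever you used when creating the service):

# Restart the CSE server service and confirm it is running
systemctl restart cse.service
systemctl status cse.service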

Step 4: Confirm TKG is available as an option for Kubernetes Runtime

Login to the tenant portal and navigate to More > Kubernetes Container Clusters.

Click New. TKG should now be listed as a Kubernetes runtime option.

How to delete the failed TKGm or Native k8s cluster in CSE?

In CSE 3.1.1, a delete operation on a cluster (native or TKG) that is in an error state (RDE.state = RESOLUTION_ERROR, or status.phase ending in :FAILED) may fail with Bad Request (400), or the delete may get stuck in the ‘DELETE:IN_PROGRESS’ state. The steps below resolve the issue.

Step 1: Assign the API Explorer privilege to the CSE Service Account

Login to the VCD provider portal as administrator.

Navigate to Administration > Provider Access Control > Roles > CSE Service Role and edit the role to add the API Explorer privilege.

In the tenant portal, check whether there are any stale vApp entries for the failed clusters. If so, delete them.

Step 2: Get the failed cluster ID through vcd-cli

# vcd cse cluster list
Name        Org             Owner     VDC            K8s Runtime    K8s Version            Status
----------  --------------  --------  -------------  -------------  ---------------------  ------------------
test-tkg-1  CSE-Site1-Test  orgadmin  CSE-TEST-OVDC  TKGm           TKGm v1.21.2+vmware.1  DELETE:IN_PROGRESS
tkg         CSE-Site1-Test  orgadmin  CSE-TEST-OVDC  TKGm           TKGm v1.21.2+vmware.1  DELETE:IN_PROGRESS
tkgtest     CSE-Site1-Test  orgadmin  CSE-TEST-OVDC  TKGm           TKGm v1.21.2+vmware.1  DELETE:IN_PROGRESS
tkg-test    CSE-Site1-Test  orgadmin  CSE-TEST-OVDC  TKGm           TKGm v1.21.2+vmware.1  DELETE:IN_PROGRESS
tkg-test3   CSE-Site1-Test  orgadmin  CSE-TEST-OVDC  TKGm           TKGm v1.21.2+vmware.1  DELETE:IN_PROGRESS

# vcd cse cluster info tkg-test3 | grep uid
  uid: urn:vcloud:entity:cse:nativeCluster:9364bf18-0faa-49ce-8be7-7e92af692d1b

Step 3: Run the GET call to check the status

Login to the VCD provider portal with the CSE service account which has the CSE Service Role assigned.
Open the API Explorer.
Click GET in the definedEntity section.

Click Try it out.
Provide the cluster UID from the last step in the id parameter.
In the output we can see the state as PRE_CREATED. (A curl alternative is sketched below.)
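
If you prefer the command line over the API Explorer, the same GET call can be made with curl. This is only a sketch: the hostname vcd.lab.com, API version 36.0 and the $VCD_TOKEN variable (holding a valid provider session bearer token) are assumptions you need to adapt to your environment.

# The failed cluster ID captured in Step 2
export CLUSTER_ID='urn:vcloud:entity:cse:nativeCluster:9364bf18-0faa-49ce-8be7-7e92af692d1b'
curl -sk "https://vcd.lab.com/cloudapi/1.0.0/entities/${CLUSTER_ID}" \
  -H "Accept: application/json;version=36.0" \
  -H "Authorization: Bearer ${VCD_TOKEN}"
# The JSON response is the RDE; look for "state": "PRE_CREATED"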

Step 4: Run the POST resolve call

Select the following POST call from the definedEntity section.

/1.0.0/entities/{id}/resolve
Validates the defined entity against the entity type schema.

Provide the cluster ID and run the call. The state will be changed to RESOLVED.


Step 5: Run the DELETE call to delete the RDE

Provide the cluster ID and ‘false’ as the value for invokeHooks (see the curl sketch below).
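
The resolve and delete calls can also be scripted with curl instead of the API Explorer; a sketch reusing the $VCD_TOKEN and $CLUSTER_ID assumptions from the earlier example:

# Resolve the defined entity; the state should change to RESOLVED
curl -sk -X POST "https://vcd.lab.com/cloudapi/1.0.0/entities/${CLUSTER_ID}/resolve" \
  -H "Accept: application/json;version=36.0" \
  -H "Authorization: Bearer ${VCD_TOKEN}"

# Delete the RDE without invoking hooks
curl -sk -X DELETE "https://vcd.lab.com/cloudapi/1.0.0/entities/${CLUSTER_ID}?invokeHooks=false" \
  -H "Accept: application/json;version=36.0" \
  -H "Authorization: Bearer ${VCD_TOKEN}"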

Check and confirm that the failed cluster has now been deleted.

# vcd cse cluster list
InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised.
Name      Org             Owner     VDC            K8s Runtime    K8s Version            Status
--------  --------------  --------  -------------  -------------  ---------------------  ------------------
tkg       CSE-Site1-Test  orgadmin  CSE-TEST-OVDC  TKGm           TKGm v1.21.2+vmware.1  DELETE:IN_PROGRESS
tkgtest   CSE-Site1-Test  orgadmin  CSE-TEST-OVDC  TKGm           TKGm v1.21.2+vmware.1  DELETE:IN_PROGRESS
tkg-test  CSE-Site1-Test  orgadmin  CSE-TEST-OVDC  TKGm           TKGm v1.21.2+vmware.1  DELETE:IN_PROGRESS


How to deploy Container Service Extension (CSE) 3.1.1?

Please find the steps to deploy Container Service Extension 3.1.1.

Step 1: Deploy CentOS 7 VM

I selected CentOS 7 as the operating system for the CSE server; CentOS 7 has a later EOL date than CentOS 8. You can find the installation steps for CentOS 7 here.

Please find more details on CentOS releases below.

Kindly ensure the following configurations are done on the CSE VM.

  • Configure DNS.
  • Configure NTP.
  • Configure SSH.
  • Enable SSH access for the root user.

Please note that the following network connections are required for the rest of the configuration; a quick verification sketch follows this list.

  • Access to the VCD URL (HTTPS) from the CSE server.
  • Internet access from the CSE server.
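
A quick verification sketch for the points above; the VCD hostname (vcd.lab.com) is only a placeholder for your own environment:

# DNS, NTP and root SSH configuration checks
cat /etc/resolv.conf
timedatectl status
sshd -T | grep -i permitrootlogin

# Network reachability checks
curl -kI https://vcd.lab.com
curl -I https://raw.githubusercontent.com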

Step 2: Take a snapshot of CSE server VM.

It’s recommended (though optional) to take a snapshot of the CSE server before continuing with the Python installation.

Step 3: Install Python 3.7.3 or greater

Install Python 3.7.3 or greater in the 3.7.x series. Please note that Python 3.8.0 and above is not supported (ref: CSE docs).
The built-in Python version in CentOS 7 is 2.7, so we have to install the latest release in the 3.7.x series, which at the moment is 3.7.12. Please follow the steps below to install Python.

yum update -y
yum install -y yum-utils
yum groupinstall -y development
yum install -y gcc openssl-devel bzip2-devel libffi-devel zlib-devel xz-devel

#Install sqlite3

cd /tmp/
curl -O https://www.sqlite.org/2020/sqlite-autoconf-3310100.tar.gz
tar xvf sqlite-autoconf-3310100.tar.gz
cd sqlite-autoconf-3310100/
./configure
make install

# Install Python

cd /tmp/
curl -O https://www.python.org/ftp/python/3.7.12/Python-3.7.12.tgz
tar -xvf Python-3.7.12.tgz
cd Python-3.7.12
./configure --enable-optimizations
make altinstall
alternatives --install /usr/bin/python3 python3 /usr/local/bin/python3.7 1
alternatives --install /usr/bin/pip3 pip3 /usr/local/bin/pip3.7 1
alternatives --list

# Check Python and pip3 versions

python3 --version
pip3 --version


Step 4: Install vcd-cli

# Install and verify vcd-cli
pip3 install vcd-cli
vcd version
     vcd-cli, VMware vCloud Director Command Line Interface, 24.0.1

Step 5: Install CSE

# Install and verify cse
pip3 install container-service-extension
cse version
CSE, Container Service Extension for VMware vCloud Director, version 3.1.1

Step 6: Enable CSE client

# Create ~/.vcd-cli directory
mkdir ~/.vcd-cli

# Create profiles.yaml
cat > ~/.vcd-cli/profiles.yaml << EOF
extensions: 
- container_service_extension.client.cse
EOF
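
To confirm that vcd-cli picks up the CSE client extension from profiles.yaml, a quick sanity check (the exact output depends on the client and server versions installed):

# Verify the CSE client extension is loaded
vcd cse version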

Step 7: Create CSE Service Role for CSE server management

[root@test ~]# cse create-service-role <vcd fqdn> -s
Username for System Administrator: administrator
Password for administrator:
Connecting to vCD:  <vcd fqdn>
Connected to vCD as system administrator: administrator
Creating CSE Service Role...
Successfully created CSE Service Role

Step 8: Create service account for CSE in VCD

Create a service account in VCD with the role ‘CSE Service Role’.

Step 9: Create service account for CSE in vCenter

Create a new role in vCenter with Power User + Guest Operations privileges, and assign the role to the service account for CSE.

  • Clone the ‘Virtual Machine Power User (sample)’ role.
  • Edit the role.
  • Select Virtual machine > Guest operations.

Step 10: Create a sample CSE config file and update it.

cse sample -o config.yaml

# vi config.yaml

mqtt:
  verify_ssl: false

vcd:
  host: vcd.vmware.com
  log: true
  password: my_secret_password
  port: 443
  username: administrator
  verify: false

vcs:
- name: vc1
  password: my_secret_password
  username: [email protected]
  verify: false

service:
  enforce_authorization: false
  legacy_mode: false
  log_wire: false
  no_vc_communication_mode: false
  processors: 15
  telemetry:
    enable: true

broker:
  catalog: cse
  ip_allocation_mode: pool
  network: my_network
  org: my_org
  remote_template_cookbook_url: https://raw.githubusercontent.com/vmware/container-service-extension-templates/master/template_v2.yaml
  storage_profile: '*'

Step 11: Encrypt the Configuration file

cse encrypt config.yaml --output encrypted-config.yaml

Step 12: Validate the Configuration file

chmod 600 encrypted-config.yaml
cse check encrypted-config.yaml

Step 13: Install CSE

cse install -c encrypted-config.yaml --skip-template-creation

Step 14: Validate CSE Installation

cse check encrypted-config.yaml --check-install

Step 15: List the available templates

# cse template list -c encrypted-config.yaml
Password for config file decryption:
Decrypting 'encrypted-config.yaml'
Retrieved config from 'encrypted-config.yaml'
name                                revision  local    remote      cpu    memory  description                                                        deprecated
--------------------------------  ----------  -------  --------  -----  --------  -----------------------------------------------------------------  ------------
ubuntu-16.04_k8-1.21_weave-2.8.1           1  No       Yes           2      2048  Ubuntu 16.04, Docker-ce 20.10.7, Kubernetes 1.21.2, weave 2.8.1    No
ubuntu-16.04_k8-1.20_weave-2.6.5           2  No       Yes           2      2048  Ubuntu 16.04, Docker-ce 19.03.15, Kubernetes 1.20.6, weave 2.6.5   No
ubuntu-16.04_k8-1.19_weave-2.6.5           2  No       Yes           2      2048  Ubuntu 16.04, Docker-ce 19.03.12, Kubernetes 1.19.3, weave 2.6.5   No
ubuntu-16.04_k8-1.18_weave-2.6.5           2  No       Yes           2      2048  Ubuntu 16.04, Docker-ce 19.03.12, Kubernetes 1.18.6, weave 2.6.5   No
photon-v2_k8-1.14_weave-2.5.2              4  No       Yes           2      2048  PhotonOS v2, Docker-ce 18.06.2-6, Kubernetes 1.14.10, weave 2.5.2  Yes

Step 16: Import the latest K8S template

cse template install TEMPLATE_NAME TEMPLATE_REVISION

# cse template install ubuntu-16.04_k8-1.21_weave-2.8.1 1 -c encrypted-config.yaml

It will take a while to download the template, so be patient.

Downloading file from 'https://cloud-images.ubuntu.com/releases/xenial/release-20180418/ubuntu-16.04-server-cloudimg-amd64.ova' to 'cse_cache/ubuntu-16.04-server-cloudimg-amd64.ova'...
Download complete
Uploading 'ubuntu-16.04-server-cloudimg-amd64.ova' to catalog 'cse-site1-k8s'
Uploaded 'ubuntu-16.04-server-cloudimg-amd64.ova' to catalog 'cse-site1-k8s'
Deleting temporary vApp 'ubuntu-16.04_k8-1.21_weave-2.8.1_temp'
Creating vApp 'ubuntu-16.04_k8-1.21_weave-2.8.1_temp'
Found data file: /root/.cse_scripts/2.0.0/ubuntu-16.04_k8-1.21_weave-2.8.1_rev1/init.sh
Created vApp 'ubuntu-16.04_k8-1.21_weave-2.8.1_temp'
Customizing vApp 'ubuntu-16.04_k8-1.21_weave-2.8.1_temp', vm 'ubuntu-1604-k8s1212-weave281-vm'
Found data file: /root/.cse_scripts/2.0.0/ubuntu-16.04_k8-1.21_weave-2.8.1_rev1/cust.sh
Waiting for guest tools, status: "vm='vim.VirtualMachine:vm-2296', status=guestToolsNotRunning
Waiting for guest tools, status: "vm='vim.VirtualMachine:vm-2296', status=guestToolsNotRunning
Waiting for guest tools, status: "vm='vim.VirtualMachine:vm-2296', status=guestToolsNotRunning
Waiting for guest tools, status: "vm='vim.VirtualMachine:vm-2296', status=guestToolsRunning
.....
......
......

waiting for process 1611 on vm 'vim.VirtualMachine:vm-2296' to finish (1)
waiting for process 1611 on vm 'vim.VirtualMachine:vm-2296' to finish (2)
waiting for process 1611 on vm 'vim.VirtualMachine:vm-2296' to finish (3)
waiting for process 1611 on vm 'vim.VirtualMachine:vm-2296' to finish (4)
waiting for process 1611 on vm 'vim.VirtualMachine:vm-2296' to finish (5)
waiting for process 1611 on vm 'vim.VirtualMachine:vm-2296' to finish (6)
waiting for process 1611 on vm 'vim.VirtualMachine:vm-2296' to finish (7)
waiting for process 1611 on vm 'vim.VirtualMachine:vm-2296' to finish (8)
...
...
..
/etc/kernel/postinst.d/x-grub-legacy-ec2:
Searching for GRUB installation directory ... found: /boot/grub
Searching for default file ... found: /boot/grub/default
Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst
Searching for splash image ... none found, skipping ...
Found kernel: /boot/vmlinuz-4.4.0-119-generic
Found kernel: /boot/vmlinuz-4.4.0-210-generic
Found kernel: /boot/vmlinuz-4.4.0-119-generic
Updating /boot/grub/menu.lst ... done

/etc/kernel/postinst.d/zz-update-grub:
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.4.0-210-generic
Found initrd image: /boot/initrd.img-4.4.0-210-generic
Found linux image: /boot/vmlinuz-4.4.0-119-generic
Found initrd image: /boot/initrd.img-4.4.0-119-generic
done
customization completed

Customized vApp 'ubuntu-16.04_k8-1.21_weave-2.8.1_temp', vm 'ubuntu-1604-k8s1212-weave281-vm'
Creating K8 template 'ubuntu-16.04_k8-1.21_weave-2.8.1_rev1' from vApp 'ubuntu-16.04_k8-1.21_weave-2.8.1_temp'
Shutting down vApp 'ubuntu-16.04_k8-1.21_weave-2.8.1_temp'
Successfully shut down vApp 'ubuntu-16.04_k8-1.21_weave-2.8.1_temp'
Capturing template 'ubuntu-16.04_k8-1.21_weave-2.8.1_rev1' from vApp 'ubuntu-16.04_k8-1.21_weave-2.8.1_temp'
Created K8 template 'ubuntu-16.04_k8-1.21_weave-2.8.1_rev1' from vApp 'ubuntu-16.04_k8-1.21_weave-2.8.1_temp'
Successfully tagged template ubuntu-16.04_k8-1.21_weave-2.8.1_rev1 with placement policy native.
Deleting temporary vApp 'ubuntu-16.04_k8-1.21_weave-2.8.1_temp'
Deleted temporary vApp 'ubuntu-16.04_k8-1.21_weave-2.8.1_temp'

Step 17: Confirm the template is available in the CSE catalog

Login to the VCD tenant portal.
Navigate to Libraries > Catalogs > vApp Templates. We can see the newly created K8S upstream template. (You can also verify from the CSE server CLI, as shown below.)
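
Rerunning the template list command from Step 15 on the CSE server should now report the imported template as locally present (local = Yes):

# cse template list -c encrypted-config.yaml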

Step 18: Enable Organization VDCs for native deployments

The provider must explicitly enable organizational virtual datacenter(s) to host native deployments, by running the command: vcd cse ovdc enable.

vcd login <vcd> system administrator -i
InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised.
Password:
administrator logged in, org: 'system', vdc: ''

# vcd cse ovdc enable <orgvdc> -n -o <organization>
# vcd cse ovdc enable TEST-OVDC -n -o Site1-Test

InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised.
OVDC Update: Updating OVDC placement policies
task: 10e70b37-5aa6-4cf9-b437-ef478bd9f06a, Operation success, result: success

Step 19: Check Create New Native Cluster is available now

Login to the VCD Tenant portal and navigate to More > Kubernetes Container Clusters.
Click on New.

We can see the option to ‘Create New Native Cluster’.

Step 20: Publish the Rights Bundle ‘cse:nativeCluster Entitlement’

  • The following article has details on the differences between rights bundles and roles.
  • Login to VCD as provider and navigate to Administration > Rights Bundles.
  • Select ‘cse:nativeCluster Entitlement’.
  • Create a backup of the rights bundle by cloning it:
    • Select the rights bundle cse:nativeCluster Entitlement.
    • Select Clone.
    • Keep the auto-generated name ‘Clone:cse:nativeCluster Entitlement’.
  • Edit the rights bundle cse:nativeCluster Entitlement.
  • Select the following rights.
  • Select ‘PUBLISH’.
  • Select the specific tenants from the list.

Step 21: Add CSE rights to Global Role ‘Organization Administrator’

  • Login as provider and edit the Global Role ‘Organization Administrator’.
  • Select the same rights we selected in the last step.

How to update Photon OS 3.x Root Password History?


Sometimes it’s annoying when Photon OS-based appliances don’t allow you to reuse a previously used password for the root user. You may see the error ‘Password has been already used. Choose another‘ when you try to set a password that was used earlier.

root@test [ ~ ]# passwd
New password:
Retype new password:
Password has been already used. Choose another.

By default, Photon OS remembers the last five passwords. You can see the setting ‘remember=5’ in /etc/pam.d/system-password.

root@test [ ~ ]# cat /etc/pam.d/system-password
# Begin /etc/pam.d/system-password
password    requisite   pam_cracklib.so     minlen=8 minclass=4 difok=4 maxsequence=0 retry=3 enforce_for_root
password    requisite   pam_pwhistory.so    retry=3 remember=5 enforce_for_root
password    required    pam_unix.so         sha512 shadow use_authtok
# End /etc/pam.d/system-password

By changing ‘remember’ from 5 to 0, we can disable the password history check and then reset the root password, as sketched below.
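
A minimal sketch of the change, assuming the default pam_pwhistory line shown above; revert the value afterwards if you want the history check back:

# Disable the password history check, then reset the root password
sed -i 's/remember=5/remember=0/' /etc/pam.d/system-password
passwd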

Upgrade VMware Cloud Director App Launchpad from 2.0 to 2.1

Please find the steps to upgrade VMware Cloud Director App Launchpad from version 2.0 to 2.1.

  1. Download VMware Cloud Director App Launchpad 2.1 RPM package from here.
  2. Upload it to the App Launchpad VM.
  3. Open an SSH connection to the App Launchpad VM and log in as root.
  4. Upgrade the RPM package.
[root@test ~]# rpm -U vmware-alp-2.1.0-18834930.x86_64.rpm
warning: vmware-alp-2.1.0-18834930.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID 001e5cc9: NOKEY
Upgrading...

Execute 'alp upgrade' to upgrade ...

  Append the excute permission to the existing logs...

5. Run the following command to upgrade App Launchpad.

[root@test ~]# alp upgrade --admin-user administrator@system --admin-pass 'passwd'
Upgraded the plugin of App Launchpad successfully.

Upgraded the management service successfully.
  [Upgrade Task]
     CREATE_ENTITY_TYPE_CATALOG_INFO : true
     MIGRATE_CATALOGS : true
     CREATE_ENTITY_TYPE_SIZING_TEMPLATE : true
     MIGRATE_LEGACY_SIZING_TEMPLATES : true
     CREATE_ENTITY_TYPE_MARKETPLACE_BANNER : true
     CREATE_ENTITY_TYPE_ORG_METRICS : true
     UPGRADE_SERVICE_ROLE : true

6. Restart the alp service and confirm it's running.

[root@test~]# systemctl restart alp
[root@test ~]# systemctl status alp
● alp.service - VMware ALP Management Service
   Loaded: loaded (/usr/lib/systemd/system/alp.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-11-18 11:46:14 +01; 14s ago
 Main PID: 29334 (java)
   CGroup: /system.slice/alp.service
           └─29334 java -jar /opt/vmware/alp/alp.jar --logging.path=log

Nov 18 11:46:14 bd1-srp-al01.acs.local systemd[1]: Stopped VMware ALP Management Service.
Nov 18 11:46:14 bd1-srp-al01.acs.local systemd[1]: Started VMware ALP Management Service.

7. Diagnose deployment errors by running the /opt/vmware/alp/bin/diagnose executable file.

The diagnose tool verifies that the services are up and running and that all configuration
requirements are met.

[root@test ~]# /opt/vmware/alp/bin/diagnose

Step 1: System diagnose
--------------------------------------------------------------------------------
- App Launchpad service is initialized.


Step 2: Cloud Director diagnose
--------------------------------------------------------------------------------
- Service Account for App Launchpad is good.
- App Launchpad's extension is ready.


Step 3: MQTT diagnose
--------------------------------------------------------------------------------
- Cloud Director MQTT for extensibility is ready.


Step 4: Integration diagnose
--------------------------------------------------------------------------------
- App Launchpad API is up, and version is 2.1.0-18834930.


Step 5: App Launchpad diagnose
--------------------------------------------------------------------------------
- App Launchpad service is listening on port 8086.


8. Confirm the ALP version.

[root@test ~]# alp
NAME:
        alp - The Cloud Director App Launchpad
        (ALP) Command-line tool

USAGE:
        alp <subcommand> [flags]

VERSION:
        '2.1.0-18834930'

How to create AWS Lambda function with PowerCLI to access VMConAWS?

AWS Lambda in a nutshell

Lambda is an AWS offering for building serverless applications. It lets you run code without provisioning or managing servers. Lambda functions can be invoked directly through API calls or in response to events. AWS charges only for the compute time consumed by the function, so there is no need to pay for idle time. You can learn more about Lambda here.

AWS Lambda, PowerShell and PowerCLI

The code you run on AWS Lambda is uploaded as a ‘Lambda function’. AWS Lambda natively supports PowerShell as a scripting language, which lets us write Lambda functions in PowerShell that include commands from PowerCLI modules.

Let's walk through the steps to create a PowerShell-based Lambda function that gets the list of VMs from a VMware Cloud on AWS SDDC. As of now, the AWS code editor doesn't support writing or editing PowerShell-based Lambda functions, so the steps below create the Lambda function offline and deploy it to AWS Lambda.

Step 1 : Install PowerShell Core.

Lambda functions in PowerShell require PowerShell Core 6.0; Windows PowerShell isn't supported. If you have PowerShell Core 6.0 or above already installed, skip to step 2. The automatic variable $PSVersionTable will help you find the PowerShell version and edition.

I’ve used PowerShell Core v6.2.1, which can be downloaded from the PowerShell GitHub repo.

1.1 Go to https://github.com/PowerShell/PowerShell/releases/tag/v6.2.1 > Assets and download the package suitable for your OS. Mine is Windows 10, and the bundle ‘PowerShell-6.2.1-win-x64.msi’ worked fine.

1.2 Once downloaded, double-click the installer and follow the prompts.

Step 2 : Install .NET Core 2.1 SDK.

Because PowerShell Core is built on top of .NET Core, the Lambda support for PowerShell uses the same .NET Core 2.1 runtime for both .NET Core and PowerShell Lambda functions. The .NET Core 2.1 SDK is used by the Lambda PowerShell publishing cmdlets to create the Lambda deployment package. The .NET Core 2.1 SDK is available at .NET downloads on the Microsoft website. Be sure to install the SDK and not the runtime installation.

Step 3 : Install the PowerShell module ‘AWSLambdaPSCore’

Open PowerShell Core and run the following command to install ‘AWSLambdaPSCore’ module.

Install-Module AWSLambdaPSCore -Scope CurrentUser

The following are the commands available in module ‘AWSLambdaPSCore’

Step 4 : Install PowerCLI

If you already have PowerCLI modules installed in Powershell Core, skip this step.

Open PowerShell Core and run the following command

Install-Module VMware.PowerCLI

Step 5 : Create script from PowerShell Lambda Templates.

The AWSLambdaPSCore module provides some script templates. Get-AWSPowerShellLambdaTemplate will list the available templates.

We will use the ‘Basic’ template to create the script ‘VMC-GetVM.ps1’ for getting the VM list from the VMC SDDC.

Step 6 : Modify the script to get the VMs from the vCenter in the VMC on AWS SDDC.

If you are new to PowerShell Lambda, it's good to go through this article to understand input objects, returning data, additional modules, and logging.

Open the script VMC-GetVM.ps1 in an editor (I use VS Code) and replace its content with the following script.

Note: Please ensure the versions of the modules referenced in the #Requires statements match the versions of the modules loaded in PowerShell Core. If they differ, update the script with the version details of the corresponding modules that are installed. The following command will help you find the versions of the required modules.

Get-InstalledModule VMware.*.Sdk,VMware.*.common,VMware.vim,VMware.*.Cis.Core,VMware.*.core | select Name,Version

The values for the properties (vCenter, vCenterUser, etc.) in the $LambdaInput object will be passed when we execute the function.

# PowerShell script file to be executed as a AWS Lambda function. 
# 
# When executing in Lambda the following variables will be predefined.
#   $LambdaInput - A PSObject that contains the Lambda function input data.
#   $LambdaContext - An Amazon.Lambda.Core.ILambdaContext object that contains information about the currently running Lambda environment.
#
# The last item in the PowerShell pipeline will be returned as the result of the Lambda function.
#
# To include PowerShell modules with your Lambda function, like the AWSPowerShell.NetCore module, add a "#Requires" statement 
# indicating the module and version.
#Requires -Modules @{ModuleName='VMware.VimAutomation.Sdk';ModuleVersion='11.3.0.13964823'}
#Requires -Modules @{ModuleName='VMware.VimAutomation.Common';ModuleVersion='11.3.0.13964816'}
#Requires -Modules @{ModuleName='VMware.Vim';ModuleVersion='6.7.0.13964812'}
#Requires -Modules @{ModuleName='VMware.VimAutomation.Cis.Core';ModuleVersion='11.3.0.13964830'}
#Requires -Modules @{ModuleName='VMware.VimAutomation.Core';ModuleVersion='11.3.0.13964826'}

# Uncomment to send the input event to CloudWatch Logs
#Write-Host (ConvertTo-Json -InputObject $LambdaInput -Compress -Depth 5)

$vCenter = $lambdainput.vCenter
$vCenterUser = $lambdainput.vCenterUser
$vCenterPassword = $lambdainput.vCenterpassword
Connect-VIServer $vCenter -User $vCenterUser -Password $vCenterPassword
$vmlist = get-vm
Write-Host $vmlist.Name

Save the script.

Step 7 : Reduce the size of the package

In the next step we will publish the Lambda function. During publishing, a deployment package is created that contains our PowerShell script ‘VMC-GetVM.ps1’ and all modules declared with the #Requires statements. The deployment may fail because the package with the listed PowerCLI modules exceeds Lambda's hard limit on package size, i.e. 69905067 bytes. In that situation the following error will be thrown.

To avoid that, as a workaround, we have to reduce the package size by cutting down the size of the PowerCLI modules. When I checked, ‘VMware.VimAutomation.Core’ was the largest module, due to the VMware Remote Console files included in it.

Browse to the following path and move the folder ‘VMware Remote Console’ to Documents.

C:\Users\<user>\Documents\PowerShell\Modules\VMware.VimAutomation.Core\11.3.0.13964826\net45

Step 8 : Create IAM role to access CloudWatch Log and to execute Lambda.

Login to the AWS Console and navigate to IAM. Create a new role ‘lambda_basic_excution’ with the policy ‘CloudWatchLogsFullAccess’.

Step 9 : Publish to Lambda

To publish our new PowerShell-based Lambda function, execute the following command from PowerShell Core.

Publish-AWSPowerShellLambda -ScriptPath <path>\VMC-GetVM.ps1 -Name VMC-GetVM -Region <aws region> -IAMRoleArn lambda_basic_excution

It will take a while to create the package and deploy to AWS Lambda.

Step 10 : Configure environment variable.

Once the function is deployed, log in to the AWS Console and navigate to Lambda. Select the newly created function ‘VMC-GetVM’.

Set the environment variable HOME to /tmp.

Step 11 : Install AWSPowerShell module.

To execute the newly created function from PowerShell Core, we need the ‘AWSPowerShell’ module. Run the following command to install it.

Install-Module AWSPowerShell

Step 12 : Execute the function

From the editor (VS Code), create a new file LambdaExecute.ps1 and copy the following code.

$payload = @{
    vCenter = '<FQDN of vCenter in VMConAWS>'
    vCenterUser = '<vCenter User>'
    vCenterpassword = '<vCenter Password>'
} | ConvertTo-Json

Invoke-LMFunction -FunctionName VMC-GetVM -Payload $payload

Once the execution has completed, you can see the list of VMs in CloudWatch Logs.

From the AWS Console, go to CloudWatch > Log Groups, select ‘/aws/lambda/VMC-GetVM’, and click the latest log stream.

You can see the VM list in the message!

PowerCLI to create a report with VM Tag, Category, Tools version and VM HW Version

Recently one of the community members had a requirement to generate a report with the following details in .csv format.

– VM Name
– VMware Tools Version
– VM Hardware Version
– Category Names as columns and Tag names as values.

The following PowerCLI script will help to achieve this.


<#
.SYNOPSIS
Create .csv report with Virtual Machine Tag, Category, VMware Tools version and VM Hardware details.
.NOTES
Author: Sreejesh Damodaran
Site: www.pingforinfo.com
.EXAMPLE
PS> get-vmtagandcatefory.ps1
#>

# Connect to the vCenter
Connect-VIServer vCenter1 -user user1 -Password "password"

#Create vmInfo object
$vmInfo = @()
$vmInfoTemp = New-Object "PSCustomObject"
$vmInfoTemp | Add-Member -MemberType NoteProperty -Name VMName -Value ""
$vmInfoTemp | Add-Member -MemberType NoteProperty -Name ToolsVersion -Value ""
$vmInfoTemp | Add-Member -MemberType NoteProperty -Name HWVersion -Value ""
$vmCategories = Get-TagCategory
$vmCategories | %{$vmInfoTemp | Add-Member -MemberType NoteProperty -Name $_.Name -Value "" }
$vmInfo += $vmInfoTemp

get-vm | %{
    $vmInfoTemp = New-Object "PSCustomObject"
    $toolsVersion = Get-VMGuest $_ | select -ExpandProperty ToolsVersion
    $vmInfoTemp | Add-Member -MemberType NoteProperty -Name VMName -Value $_.Name
    $vmInfoTemp | Add-Member -MemberType NoteProperty -Name ToolsVersion -Value $toolsVersion
    $vmInfoTemp | Add-Member -MemberType NoteProperty -Name HWVersion -Value $_.Version
    $vmtags = ""
    $vmtags = Get-TagAssignment -Entity $_
    if($vmtags){
        # Add one column per tag category; use the assigned tag name where present
        $vmCategories | %{
            $tempVMtag = ""
            $tempCategroy = $_.Name
            $tempVMtag = $vmtags | Where-Object {$_.tag.category.name -match $tempCategroy}
            if($tempVMtag){
                $vmInfoTemp | Add-Member -MemberType NoteProperty -Name $tempCategroy -Value $tempVMtag.tag.name
            }else{
                $vmInfoTemp | Add-Member -MemberType NoteProperty -Name $tempCategroy -Value ""
            }
        }
    }else{
        # No tags assigned to this VM; keep the category columns empty
        $vmCategories | %{
            $vmInfoTemp | Add-Member -MemberType NoteProperty -Name $_.name -Value ""
        }
    }
    $vmInfo += $vmInfoTemp
}

$vmInfo | select * -Skip 1 | Export-Csv c:\temp\tags.csv -NoTypeInformation -UseCulture

CSV Output:

PowerCLI to find vCPU to pCPU ratio and vRAM to pRAM ratio

I was in search of a script to generate a report on the vCPU to pCPU ratio and the vRAM to pRAM ratio at cluster level in a vCenter. I found a couple of interesting community threads which address part of the requirements, so I consolidated (or extracted :) ) the code and created the following. The report is generated as a CSV file.

$outputFile = "C:\CPU-Memory-Ratio.csv"
$VC = "vCenter Name"

##Connect to the vCenter
Connect-VIServer $VC -User "test" -Password "test"

$Output = @()

Get-Cluster | %{
    $hypCluster = $_

    ## get the GenericMeasureInfo for the desired properties for this cluster's hosts
    $infoCPUMEM = Get-View -ViewType HostSystem -Property Hardware.CpuInfo,Hardware.MemorySize -SearchRoot $hypCluster.Id |
        Select @{n="NumCpuSockets"; e={$_.Hardware.CpuInfo.NumCpuPackages}}, @{n="NumCpuCores"; e={$_.Hardware.CpuInfo.NumCpuCores}}, @{n="NumCpuThreads"; e={$_.Hardware.CpuInfo.NumCpuThreads}}, @{n="PhysicalMem"; e={""+[math]::round($_.Hardware.MemorySize / 1GB, 0)}} |
        Measure-Object -Sum NumCpuSockets,NumCpuCores,NumCpuThreads,PhysicalMem

    ## return an object with info about VMHosts' CPU characteristics
    $temp = New-Object psobject
    $datacenter = Get-Datacenter -Cluster $hypCluster.Name
    $NumVMHosts = if ($infoCPUMEM) {$infoCPUMEM[0].Count} else {0}
    $NumCpuSockets = ($infoCPUMEM | ?{$_.Property -eq "NumCpuSockets"}).Sum
    $NumCpuCores = ($infoCPUMEM | ?{$_.Property -eq "NumCpuCores"}).Sum
    $vmdetails = Get-VM -Location $hypCluster
    $NumvCPU = ($vmdetails | Measure-Object NumCpu -Sum).Sum
    $VirtualMem = [Math]::Round(($vmdetails | Measure-Object MemoryGB -Sum).Sum, 2)
    $PhysicalMem = ($infoCPUMEM | ?{$_.Property -eq "PhysicalMem"}).Sum

    ##Calculating the vCPU to pCPU ratio AND vRAM to pRAM ratio.
    if ($NumvCPU -ne "0") {$cpuRatio = "$("{0:N2}" -f ($NumvCPU/$NumCpuCores))" + ":1"}
    if ($VirtualMem -ne "0") {$memRatio = "$("{0:N2}" -f ($VirtualMem/$PhysicalMem))" + ":1"}

    $temp | Add-Member -MemberType Noteproperty "Datacenter" -Value $datacenter
    $temp | Add-Member -MemberType Noteproperty "ClusterName" -Value $hypCluster.Name
    $temp | Add-Member -MemberType Noteproperty "NumVMHosts" -Value $NumVMHosts
    $temp | Add-Member -MemberType Noteproperty "NumPCPUSockets" -Value $NumCpuSockets
    $temp | Add-Member -MemberType Noteproperty "NumPCPUCores" -Value $NumCpuCores
    $temp | Add-Member -MemberType Noteproperty "NumvCPU" -Value $NumvCPU
    $temp | Add-Member -MemberType Noteproperty "vCPU-pCPUCoreRatio" -Value $cpuRatio
    $temp | Add-Member -MemberType Noteproperty "PhysicalMem(GB)" -Value $PhysicalMem
    $temp | Add-Member -MemberType Noteproperty "VirtualMem(GB)" -Value $VirtualMem
    $temp | Add-Member -MemberType Noteproperty "vRAM-pRAMRatio" -Value $memRatio

    $Output += $temp
}
$Output | Sort-Object Datacenter, ClusterName | Export-Csv -NoTypeInformation $outputFile


Output in table format :


Ref : https://communities.vmware.com/thread/456555?start=0&tstart=0