If you haven’t heard of VMware Skyline, you’re in for a treat. Skyline is free if you have Production or Premier support. You install a local collector, a tiny OVF appliance, which feeds data up to Skyline Advisor. That data helps alert you to potential issues before they cause an outage, and it also lets you easily upload logs and attach them to any open SR you may have with GSS. Skyline Advisor currently collects data from vSphere, NSX, vROps, and Horizon. I’ll update this article with step-by-step directions.
Someone has created a VMware Fling with a pre-built Ubuntu image that you can leverage to spin up a Linux desktop pool in your Horizon View environment. Just remember that you need either Horizon Linux or Horizon Enterprise if you want to serve up Linux desktops to your end users. Take a look at the Fling at https://labs.vmware.com/flings/horizon-ova-for-ubuntu and follow their setup directions.
I was recently helping a customer deploy a new Horizon View pod, and the customer had all of their AD user accounts in a child domain. They didn't want the parent domain name to show up for users when they logged into View, so we used the vdmadmin.exe tool to exclude the parent domain. The steps to use the tool are below, but one thing to keep in mind is that you can use this tool to hide the domain for an entire pod or just on specific connection servers.
First step is to launch a Command Prompt as an administrator.
Then CD into "c:\Program Files\VMware\VMware View\Server\tools\bin"
The path may be slightly different if you installed the connection server software in a different path or drive location.
Now you can run "vdmadmin.exe -N -domains -exclude -domain CONTOSO -add"
This tells View to add the CONTOSO domain to the exclusion list for the entire pod (the command-line help uses the term "cluster"). You can then check the View Admin site under Domains and see that the CONTOSO domain is now gone.
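If you need to run this against several connection servers, the command is easy to script. Here's a minimal Python sketch of the steps above; the vdmadmin.exe path and the -N/-domains/-exclude flags come from the directions above, while the helper names are my own illustration:

```python
import subprocess

# Default install path from the steps above; adjust if the connection
# server software was installed to a different drive or folder.
VDMADMIN = r"C:\Program Files\VMware\VMware View\Server\tools\bin\vdmadmin.exe"

def build_exclude_args(domain, exe=VDMADMIN):
    """Build the vdmadmin argument list that adds a NetBIOS domain
    to the pod-wide exclusion list."""
    return [exe, "-N", "-domains", "-exclude", "-domain", domain, "-add"]

def exclude_domain(domain):
    """Run vdmadmin on the connection server. As noted above, this
    must be run from an elevated Command Prompt."""
    subprocess.run(build_exclude_args(domain), check=True)
```

Using a list of arguments (rather than one shell string) avoids any quoting problems with the spaces in the Program Files path.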
While working with a customer to set up a stretched metro vSphere cluster, we found a bug with VMware’s soft affinity rules. The customer environment consisted of two data centers, each with three ESXi hosts, and 3PAR arrays set up with Peer Persistence. The idea was to create a DRS group for data center A and one for data center B. The VMs would be added to the DRS group for data center A with a "should run on" rule. They would also have a domain controller and DNS server on the data center B side, so that during a complete data center A outage the VMs could be powered up on the data center B side, with AD and DNS services running and able to serve the VMs being powered back on via HA.
When we did the actual HA testing, we noticed that the soft affinity rules were not being honored and VMs were being powered back up on the wrong side. This might not be a big issue for many customers, as most environments have DRS set to Fully Automated, which would have migrated the VM after HA powered it on, but this customer had an application extremely sensitive to any latency, and because of that they keep DRS set to either Manual or Partially Automated. After a lot of troubleshooting we opened a case with VMware support, and they discovered that this bug exists in vSphere 6.0 and vSphere 6.5. The only workaround is to add the VM to the DRS rules while the VM is powered off. If the VM is powered on when it is added to the soft affinity rule, the rule is ignored. The support engineer said this will be fixed in the vSphere 6.7 release, but they had no timeline and no way to promise that the bug would be addressed in 6.0 or 6.5 builds.
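Until a fixed build ships, it's worth checking power state before touching the DRS groups. Here's a minimal Python sketch of that guard logic; the VM names and power-state strings are illustrative (in a real environment you'd pull them via pyVmomi or PowerCLI):

```python
def partition_for_group_add(vms):
    """Split VMs into those safe to add to the DRS VM group now
    (powered off, so the 'should run on' rule will be honored) and
    those that must be powered off first, per the 6.0/6.5 bug above."""
    safe = [name for name, state in vms.items() if state == "poweredOff"]
    must_power_off = [name for name, state in vms.items() if state != "poweredOff"]
    return safe, must_power_off

# Illustrative inventory: VM name -> runtime power state.
inventory = {
    "dc-a-app01": "poweredOn",
    "dc-a-app02": "poweredOff",
}
safe, pending = partition_for_group_add(inventory)
# Power off the VMs in `pending`, add all of them to the VM group,
# then power them back on; only then will the soft rule be honored.
```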