Welp, vSphere 6.0 came out today, and the first thing I found out about was this little gem. It will download the entire suite of products based on your entitlement in My VMware. You of course have some say in the suite, version, and download location, but other than that there isn't much to configure.

I will just go through installing it and downloading something. Nothing super complicated, but it's pretty great since I can click one button to download the product set for a version update. I love that VMware has something for this now, but I wish it were just integrated into VUM instead of having to toss it somewhere.

 


So after being introduced to App Volumes at PEX I thought it was absolutely amazing. I was finally released from the ThinApp hell that I was oh so accustomed to on a regular basis. After talking to each of the people on the panel I started to think of some things that should be implemented in the product but aren't.

First off, we need to understand the actual underlying framework for the product. App Volumes is client-server based from everything that I have seen. You have to install the agent within the guest OS that you are either creating the AppStack on or provisioning the AppStack to, and that agent checks in to the App Volumes Manager. Pretty simple, right? That is the overall architecture from an extremely high level.

Applications are provisioned to either machine accounts or user accounts, so application assignment or streaming really isn't an issue with floating desktops. I think this is awesome, except that it's just a VMDK attached to the VM in question when it is either powered on or the user logs in. That VMDK contains all of the applications that you installed during the provisioning process.

So let’s walk through what happens when we provision applications to an AppStack.


There has always been a severe lack of port-querying tools on ESXi until nc was added to the builds, and now it looks like the VCSA finally gets something along the same lines. The utility is somewhat limited in what you can and cannot do, but it helps when you are in a pinch.

This script is called port-accessible.py and is located in

So if we run the python script we will observe the following:

So if I want to test against one of my hosts, say to make sure that we can communicate over 902, I would issue the following:

If we are using tcpdump at the same time we can see the request and then the response.
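For reference, the capture itself is nothing fancy; something like this is all I run on the appliance side (eth0 and the host placeholder are assumptions, substitute your own interface and ESXi address):

# Watch the 902 traffic between the VCSA and the target host
tcpdump -i eth0 host <esxi-host-ip> and port 902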

If we try it on a port that we know isn’t open on the host we get the following

The command will sit there and hang, and while running tcpdump against the interface we observe constant retries; the script will not terminate until there is a total of six failures.

Here is what is returned after the failure

I do know for a fact that the ESXi host has port 80 open, so I want to see what an HTTP check returns when I set the flag for it.

Verdict: it's not perfect, but I think it is a great start. I would love it if they could include nc on the VCSA, but I am sure they have their reasons. My main complaint is that you can't specify the protocol to use when sending the traffic.
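In the meantime, one hedged workaround for a raw TCP check is curl in telnet mode, which was present on my appliance (treat the availability of curl on your build, and the hostname below, as assumptions):

# See whether the TCP handshake to 902 completes at all
curl -v telnet://esxi01.lab.local:902 --connect-timeout 5

It won't speak the right protocol either, and it will sit on an open connection once it succeeds (Ctrl-C out), but it at least tells you whether the port answers.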

 

Digging through the appliance I found a large number of Python and shell scripts that have little to no documentation, so a lot of my time has been spent executing them manually and actually checking out the syntax and how the scripts were put together.

I stumbled upon this directory

In that directory there are several tools that I will touch on in later posts, but for right now we are going to check out pgtop.py.

If you execute this script you will see something like the following

[screenshot]

Based on everything that I have seen within the actual script, it looks like it lists all of the underlying threads for the VCDB along with their total I/O and status. This will come in handy when troubleshooting underlying vPostgres issues in the future. There is a severe lack of documentation for this utility, but I do like the fact that I can see the total number of transactions per second and the total amounts of reads / writes. I am just happy that we have additional insight into the built-in database.
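If you want to poke at the same numbers by hand, my assumption is that the script is just reading the standard pg_stat views, which you can hit directly with the bundled psql (the binary path, database name, and auth here are assumptions from my appliance; you may need to run it as the postgres OS user):

# Hedged sketch: pull commit and read/hit counters for the VCDB straight from pg_stat_database
/opt/vmware/vpostgres/current/bin/psql -U postgres -d VCDB \
  -c "SELECT datname, xact_commit, xact_rollback, blks_read, blks_hit FROM pg_stat_database WHERE datname = 'VCDB';"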

According to the script there is a help option; issuing -h gives you the following:

I don't believe this is the standard batch mode we have seen before with esxtop and vimtop. The idle command is actually kind of nice since it cleans up the output from what we saw previously to this below:

[screenshot]

The --batch option apparently just gives you a snapshot instead of giving you the option to output the stats to a file. Not sure if this is working as intended, but my guess would be that this will change in a later release to align better with vimtop / esxtop.

Is this supported? Probably not, so do what you want; I'm not your dad.

So it appears the fancy shell we got in 6.0 is not bash; it's some kind of beautiful candy wrapper for API troubleshooting. I haven't messed with it very much. I have been digging around in the pi shell and found that the files I would like to pull can't be downloaded.

Here is what I am getting via WinSCP

Error skipping startup message. Your shell is probably incompatible with the application (BASH is recommended)

This will still happen even if you manually set the shell to /bin/bash in WinSCP's advanced settings.

SSH to the VCSA and run pi shell

If we run the env command we will see the following:

Well, that gives us a lot of information, BUT I am paying specific attention to this entry.
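If you don't feel like scrolling the whole env dump, my assumption is that the entry in question is the SHELL variable, so this narrows it down:

# Show just the login shell variable (expecting it to point at the appliance shell wrapper)
env | grep ^SHELL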

This is what causes us to land in that super awesome wrapper when we log in, which we usually bypass. Let's change it to actual bash for root.
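A minimal sketch of the change, assuming the usual chsh route is what works on your build:

# Switch root's login shell to bash, then confirm the change took in /etc/passwd
chsh -s /bin/bash root
grep ^root: /etc/passwd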

Sweet! Let's log out and log back in via PuTTY.

[screenshot]

 

Open WinSCP and make sure you have /bin/bash set as the shell under the advanced settings.

 

[screenshot]

 

I have been looking at more ways to automate a lot of the work that we do within our testing environment at work. This is mainly because I want to keep our testing set in a standardized format while removing the dependency on an SME resource to monitor the tests / set them up.

A majority of the time we end up using full clones for all testing because we want to test de-dupe along with performance. This means that all scripts I post moving forward will reference that model.

Upon further reading I found out that View PowerCLI is different from traditional PowerCLI in that it can only be executed locally on the Connection Server as opposed to remotely.

First thing we need to do is figure out a standard script that will serve multiple purposes. Here is the one that I use.

This script does the following:

  • Loads the View PowerCLI snapin
  • Asks you what you want the PoolID to be
  • Asks you what you want the Pool Display Name to be
  • Asks you what cluster you would like the VMs to be placed in
  • Asks you how many desktops you want to create

For this to work in your environment there are a few things that you need to change.

Lab is the name of the datacenter, which has a hidden subfolder called vm; this is going to be the same in all environments. The subfolder below that is Testing, and that is where I am dumping these desktops. Here is how you would have to change it if the folder structure were as follows:

[screenshot]

The same goes for the portion of the script where we are defining where the resource pools are

There are two folders in this line that aren't visible but actually exist: host and Resources.

The variable $CObject is the name of the cluster where you want to put the VMs, so you really only need to change these lines if you are putting the VMs in a resource pool below the cluster object.

Template path is pretty straightforward. This is just the folder structure below the datacenter object, and it too has a hidden vm folder after the object.

Datastore paths are somewhat interesting. I have a standard naming convention of VDI_LUNID for all of my desktops, but it might be different for your environment depending on how you do it. These datastores have to be present in the cluster object where you are putting the desktops.

In the example above I only show one datastore but you can specify multiples by just doing the following

I haven’t really tested how well it works but it shouldn’t be any different than specifying multiple datastores within the pool settings.

Since we found that the script works with what I pasted above, we should enable the Connection Server to accept remote PowerShell tasks from other domain machines.

Open an administrative PowerShell window and run the following:

So now that that works, we can just save the script at the root of C:\ as FULL.ps1, since the next script I will be referencing is a launcher menu that executes it based on that name and location.

Now here is the current menu that I am using. It is all PowerShell and I found it somewhere online; if it is yours or you know who made it, please let me know so I can give you credit.

So we can see that when executed this will prompt us to choose VMware View or Citrix.

[screenshot]

This top level menu is based off of these lines here

When selecting VMware View you will get the following submenu.

[screenshot]

This sub menu is based off of this code here

I will select Create a pool, which brings me here.

[screenshot]

This is based off of this section here

So since I only have the option for Full Clone, it will call the function named FULL. This function is really what authenticates against the CS and runs the script. I should mention that the user you specify in the Invoke-Command task needs to be an administrative user in the View Admin UI. If the user isn't, then you will get a generic Not Authorized error that will piss you off as much as it did me.

Here is the function as it appears in the script above.

This is literally the only way I have found this to actually work. If you don't have a pre-defined testing set that you are using, I believe you can specify your commands directly within the { } of the script block, but it would be interesting since you would have to load the snapin prior to execution on the destination.

The one thing that I still need to do is have it provide some sort of feedback when it creates the desktops by doing some form of Get from the CS, but I am still learning this. This entire thing isn't done, but I have seen people talking about it lately, so I just wanted to provide something that I would have found helpful as I was sifting through documentation trying to figure this out on my own. If you have any ways that you think this could be done better, let me know! You can always contact me on Twitter @kalenarndt and I will adjust it accordingly. I always love to see how things can be done more efficiently.

This week it has been my goal to learn vCAC, and during this time I have been running into old remnants of past labs that have been causing me problems. I was finally getting around to installing my IaaS VM and the VM kept BSODing during the install, and instantly my host would PSOD. Since I don't have a KVM I had to just hard power it and wait for it to come back up.

Checking in /var/core I find the following

[screenshot]

Well, that isn't great, but at least we have a core, and with a core we can (usually) find out what happened. After talking to my cousin Dillon I found out there is a tool that we can use:

esxcfg-dumppart -L dumpname

This will output the results in the current working directory where you run the command. In my case it was in /var/core

[screenshot]

If we look at the log we can see the following

[screenshot]

After a bit of talking to Dillon and Googling, we find http://kb.vmware.com/kb/2059053, which turns out to be a known issue where the only fix is to upgrade.

Turns out I haven’t updated this host in a while.

[screenshot]

Since I am currently only running one host in this environment, I am using the method from http://www.v-front.de/2014/03/how-to-update-your-standalone-host-to.html with the new image profile name for 5.5 U2:

esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-5.5.0-20140902001-standard
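One hedged pre-flight note before running that: check the build you are on, and make sure the host can actually reach the online depot (the httpClient firewall ruleset is usually disabled out of the box):

# Confirm the current version/build before and after the update
vmware -vl
# Allow outbound HTTP/HTTPS so the host can pull the image profile from VMware's depot
esxcli network firewall ruleset set -e true -r httpClient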

Lesson learned... keep my lab hosts up to date before I start new side project goodness.

 

I have this thing where, when I deploy things in my lab at home, I do it manually and without reading documentation. I find that by doing this I run into the common problems that happen during deployments. The first time I installed vCAC at home, just completing the pre-reqs was a pain and took a ton of time, but I learned a lot more about the roles and features that are required than I wanted to.

This is something that I think everyone should do, but my second install is based on documentation. I found this post from the PowerCLI blog that saved me a ton of time, and if I could find the person who did this I would buy them a beer and a plastic Barbie mobile.

 

http://blogs.vmware.com/PowerCLI/2014/09/vcac-6-1-pre-req-automation-script-released.html

 

I would hope that in future releases this would be part of the install, but I'm glad that someone found an issue and wrote something to solve it.

I have to change storage settings all the time and I have gotten extremely tired of it, and since I don't know PowerCLI yet, why not do shell scripts? These do have lines that assume you are using UCS, and I haven't put any logic in them to actually find out whether you are or aren't. They are based on the array vendor's best practice guides that I was given.

What I do is upload these scripts to a datastore and use clusterssh to execute them on all of the ESXi hosts. Reboot for all settings to take effect.

Copy the scripts below, paste them into Notepad, and save them as .sh files, or download them at the bottom.
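The full scripts are in the post; purely as a hedged illustration of the kind of esxcli calls they make (the SATP/PSP choice and the values here are placeholders, not your array vendor's actual recommendations):

#!/bin/sh
# Placeholder examples only - substitute the settings from your array vendor's best practice guide
# Default new ALUA-claimed devices to Round Robin pathing
esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR
# Example of an advanced system setting a vendor guide might call for
esxcli system settings advanced set -o /NFS/MaxVolumes -i 256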


This problem pissed me off for a while last night when I was trying to rebuild my lab after being on the road for a while.

I almost always deploy a VCSA, configure it with the defaults in the wizard, and then customize the IP, hostname, time, etc. later. This finally came back to bite me, and it was only with vCAC.

I know to regenerate the certificates after setting the IP and hostname but I didn’t realize that there are entries on the VCSA that don’t update.

When trying to join the SSO instance on the appliance from the vCAC appliance I was greeted with the following extremely generic message.

[screenshot]

I have had this happen before, and I almost always assume it's time or DNS, so I checked the consoles.

[screenshot]

Welp... I started going through the vCAC logs, but I wasn't able to find anything since it wasn't really configured yet.

With a bit of Googling I found this:

http://brianragazzi.wordpress.com/2014/09/09/vcac-6-1-sso-configuration-gotcha/

I decided to hit the SAML Metadata URL to see if something was broken; I have seen it not match before.

https://192.168.1.220:7444/websso/SAML2/Metadata/vsphere.local

[screenshot]

That is not my hostname; it is the old DHCP address.

So I started to dive into the VCSA and check out the logs

Under /var/log/vmware/sso I was going through the vmware-identity-sts.log file and found the following messages

[screenshot]

That appears to be the original IP and not the 192.168.1.220 that I have in DNS. After verifying that the IP was correct and that forward and reverse lookups were working, I had to start looking at the VCSA itself.

Since the error was in the vmware-identity log, I decided to look in /etc/, and there was a subdirectory for vmware-identity. Going through that I found these two files:

[screenshot]

And when you cat out these files you find the old entries.

[screenshots]

You need to stop vpxd and idmd first because they will just keep rewriting these files over and over, which I didn't notice for a good two minutes.
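From memory, the init scripts on the 5.5 appliance look roughly like this (the exact service names are an assumption on my part, so check /etc/init.d first):

# Confirm the service names on your appliance before trusting the ones below
ls /etc/init.d/ | grep -i vmware
service vmware-vpxd stop
service vmware-sts-idmd stop
# (run the same two with 'start' once the files are edited)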

[screenshots]

Now go ahead and edit the files. I didn't want to go through and do it by hand, so I just used sed:

sed -i s/192.168.1.18/192.168.1.220/g sso.properties

For the hostname file you actually want your FQDN instead of the IP:

sed -i s/192.168.1.18/vc5.lab.local/g hostname.txt

That will go through and replace 192.168.1.18 with 192.168.1.220 in the first file and with the FQDN in the second. Just change the values to your own IPs / hostname and you are good to go.

Go ahead and start the services again

[screenshot]

Let’s check the SAML metadata by heading to https://192.168.1.220:7444/websso/SAML2/Metadata/vsphere.local

[screenshot]

Looks like the updates took place!

Try to join vCAC to SSO again

[screenshot]

 

Huzzah! I can’t wait for something else to break.