Over the past weekend, I noticed via alerts that there were issues connecting to my vCenter environment. I have seen false positives before with the alerts and vCenter connectivity, so I wasn’t too concerned at first. However, a quick sanity check attempting to connect to vCenter returned the error:
Call "ServiceInstance.RetrieveContent" for object "ServiceInstance" on Server "myserver" failed.
Call "ServiceInstance.RetrieveContent" for object "ServiceInstance" on Server "myserver" failed.
The console connection to the VCSA appliance seemed responsive and everything appeared to work, so I decided to do a quick bounce of the VM since it had been up and running for quite a while. I also used this as an opportunity to add more memory to the VM, since I had started out with a “tiny” install and had since grown past those limits. After a quick reboot, which seemed to go fine, everything was once again happy. The alerting had stopped, I could get to the interface, and life seemed good at that point.
The next night, however, around 2 a.m., the alerts picked back up about being unable to connect to vCenter. Another quick test revealed the same issue and the same errors – another reboot later, and vCenter was again responsive.
Troubleshooting VMware VCSA 6 disk space issues
Digging deeper into this issue, however, revealed the true culprit behind the connectivity problems – disk space. Let’s take a look at troubleshooting VMware VCSA 6 disk space issues. A good overview of the health of your VCSA appliance is available by logging in to the administrative interface on port 5480. Under the Summary section you will see the health status.
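For reference, the administrative interface (VAMI) is reached over HTTPS on port 5480, so assuming a hypothetical appliance address of vcsa.lab.local, the URL would look like:
https://vcsa.lab.local:5480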
You want your vCenter Health to look like this:
I didn’t get a chance to grab a screen clip, but upon logging in I had a Critical status in the Overall Health section. Under the Health Messages I saw the following error:
The /storage/core filesystem is low on disk space or inodes
SSH into the VCSA Appliance
So to troubleshoot this further, we need access to the Linux file system to take a deeper look. SSH into your VCSA appliance. You will need to enable the shell to interact with the OS. To do that, run the following command:
shell.set --enabled true
Then type shell
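Put together, the sequence right after SSH login looks roughly like this (the Command> prompt shown is the appliance shell’s default prompt, as I recall):
Command> shell.set --enabled true
Command> shell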
This will drop you at a bash prompt, and we can start looking at the filesystem. To get started, I simply issued the Linux command to look at disk space:
df -h
You should see something similar to the following image. Keep in mind, I am writing this after the fact and have already corrected the issue in this particular environment, so nothing is out of place in the image below.
The particular issue started with core dumps filling up the /dev/mapper/core_vg-core device – when I first looked here, it was sitting at 100% used.
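If you only want to watch the two filesystems that most commonly fill up, you can point df directly at the /storage/core and /storage/log mount points:
df -h /storage/core /storage/log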
You can drill down into the /storage/core directory and issue the command to look at file sizes:
ls -l
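If it is not obvious which files are the offenders, sorting by size helps. A couple of standard options (nothing VCSA-specific here):
ls -lhS /storage/core | head
du -sh /storage/core/*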
I saw (2) core dumps in particular that were taking up all the drive space. These can be removed without any issue.
Stop your services, delete the core dumps, etc
Stop the VMware services on the VCSA appliance:
service vmware-vpxd stop
You can check status:
service vmware-vpxd status
Once your service is stopped, you can then clean up the core dumps:
rm -f yourcoredumpfilename
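As an example, with a hypothetical core dump name (your filenames will differ), the cleanup might look like:
rm -f /storage/core/core.vpxd-worker.12345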
Now start your service back:
service vmware-vpxd restart
After doing this in my environment, one thing I noticed was that the log drive was also full. In my case, what I did to get things up and running was simply to make the log drive bigger. This involved shutting down the VCSA appliance and adding space to the log disk. If you want to know which virtual disk maps to which mount point, check out William Lam’s post here. In this case the log drive is disk number 5. Since VCSA 6 now uses LVM, it will automatically extend once the VM is rebooted.
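As a side note, if the log filesystem does not grow on its own after the reboot, my understanding from the VMware KB on increasing VCSA 6.0 disk space is that the expansion can also be triggered from the shell with the LVM autogrow helper (verify against your exact version), then confirmed with df:
vpxd_servicecfg storage lvm autogrow
df -h /storage/log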
Logging tweaks
Also, just a nugget to pass along: there is a good KB on how to shrink the maximum size of the logs as well as the maximum number of log backups to keep. I went through the steps provided and it is making a difference in log space usage in the environment.
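To see which services are consuming the most log space before and after making those tweaks, a simple per-directory summary under the log mount point works (assuming the standard /storage/log/vmware layout on the appliance):
du -sh /storage/log/vmware/*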
Final Thoughts
If you have an issue where drive space fills up on your vCenter VCSA appliance, hopefully this walkthrough of troubleshooting VMware VCSA 6 disk space issues has been helpful. It might be intimidating at first to play around with the disks and files on the VCSA appliance; however, with the right KBs and tools it is not much of an issue.