Collecting esxtop data for long intervals

While troubleshooting performance issues with VMware support, a constant ask is to collect esxtop data while the issue is occurring.

This can be an operational nightmare: performance issues are unpredictable and can strike when no one is around to start a capture.

The best solution is to use vROps, as it can collect most of the data that esxtop can. Where vROps is not available, a simple shell script can help:


#!/bin/sh
# $1 = minutes to run (multiple of 15), $2 = UUID of the target VMFS volume
x=0
mkdir -p /vmfs/volumes/$2/esxtop_data
while [ $x -lt $1 ]
do
	x=$(( $x + 15 ))
	time=$(date +%s)
	esxtop -b -a -d 2 -n 450 | gzip -9c > /vmfs/volumes/$2/esxtop_data/esxtop_$time.csv.gz
done

How to use?

  1. Save the code in a file, say esxtop_collector.sh (any name works).
  2. Make the file executable: chmod +x esxtop_collector.sh
  3. Execute it:


./esxtop_collector.sh 15 56c125f1-f9b4b30a-31e6-e0db550bb0d6

In the above command, '15' is the number of minutes you would like the collection to run. Increment this number in multiples of 15: 30, 45, and so on.
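The 15-minute unit comes straight from the esxtop flags in the script: -d 2 sets a 2-second sampling interval and -n 450 requests 450 samples, so each loop iteration captures exactly 450 × 2 = 900 seconds. A quick sanity check of that arithmetic:

```shell
# 450 samples at a 2-second delay = one 15-minute capture per iteration
echo $(( 450 * 2 / 60 )) minutes
```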

The next argument, '56c125f1-f9b4b30a-31e6-e0db550bb0d6', is the UUID of the VMFS volume where the script will store its output.

The arguments are positional and cannot be swapped.

Just in case you were wondering why 15:

A 15-minute CSV file is the easiest to deal with when it comes to processing this data.

Anything above 15 and VMware will need to buy their support techs laptops with more memory.

I have personally killed my laptop a few times when dealing with CSV files containing 30 minutes to 1 hour of data.
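Before opening one of these captures in a spreadsheet, it helps to gauge how many rows it holds by counting them straight from the compressed file. The tiny sample file below is generated on the spot purely for illustration; point the same pipeline at a real esxtop_*.csv.gz instead:

```shell
# build a tiny stand-in capture (a real 15-minute file holds 450 data rows,
# each with thousands of columns)
printf 'ts,cpu\n1,10\n2,20\n' | gzip -9c > sample.csv.gz

# stream-decompress and count rows without extracting the file to disk
gzip -dc sample.csv.gz | wc -l
```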


The script will store its output under esxtop_data on the VMFS volume passed as the second argument.

The output file name looks like esxtop_1582604810.csv.gz, where 1582604810 is the epoch time at which that capture started.
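To turn that epoch timestamp back into a readable date, GNU date on a Linux workstation can parse it directly (the busybox date on the ESXi shell may not accept this form):

```shell
# convert the epoch timestamp from the file name to a UTC date
date -u -d @1582604810
```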


  1. Make sure the mkdir path in the script is absolute (/vmfs/volumes/$2/esxtop_data, with the leading '/'); otherwise the script must always be run from the root directory.
