Dynamically changing build numbers by branch in TeamCity

We use feature branches, and build all of them on our TeamCity CI setup. Every build can be deployed on our test servers, but it’s useful to be able to quickly distinguish which ones came from master and which came from other branches. I took a script from here and edited it to simply append -alpha to the end of the version number for any non-master build:

$branch = "%teamcity.build.branch%"

if ($branch.Contains("/")) {
  $branch = $branch.Substring($branch.LastIndexOf("/") + 1)
}

Write-Host "Building from $branch branch"

if ($branch -ne "master") {
  $buildNumber = "%build.number%-alpha"
  Write-Host "Appending alpha to build number"
  Write-Host "##teamcity[buildNumber '$buildNumber']"
} else {
  Write-Host "Leaving build number as-is"
}

This makes it really obvious when a build has come from a feature (or other) branch, to avoid accidentally deploying it etc. You could probably easily extend this to pull a version number from the branch name as well, or similar.
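As a rough sketch of that extension (shown in shell for illustration; the branch name release/1.4.2 is a made-up example, and in a real build step this logic would live in the PowerShell script above), you could take everything after the last / in a release branch name as the version:

```shell
# hypothetical branch name; in TeamCity this would come from %teamcity.build.branch%
branch="release/1.4.2"

# keep everything after the last '/' as the version number
version=${branch##*/}

echo "$version"   # prints 1.4.2
```

You could then feed that value into a buildNumber service message in the same way as the -alpha suffix above.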

To use this across multiple builds, I just set up a build template with the script stored in the first build step, which you can then apply to builds either individually or across a project.

MongoDB backup script

You can run this as a scheduled task: it will take a backup of all databases, zip it and copy it to a backup location, and automatically remove backups older than 7 days.

REM Create filename for output (the %DATE% slicing assumes a dd/mm/yyyy locale)
set filename=mongodb-backup-%DATE:~6,4%_%DATE:~3,2%_%DATE:~0,2%__%TIME:~0,2%_%TIME:~3,2%_%TIME:~6,2%

REM Export the databases
@echo Dumping databases to "%filename%"
"c:\Program Files\MongoDB\Server\3.4\bin\mongodump.exe" --username <username> --password <password> --out %filename%

REM Create backup zip file
@echo Creating backup zip file from "%filename%"
"c:\Program Files\7-Zip\7z.exe" a -tzip "%filename%.zip" "%filename%"

REM Delete the backup directory (leave the ZIP file). The /q switch makes sure we don't get prompted for confirmation
@echo Deleting original backup directory "%filename%"
rmdir "%filename%" /s /q

REM Move zip file to backup location
@echo Moving zip to backup location
move "%filename%.zip" E:\mongodb-backups

REM Delete files older than 7 days
@echo Deleting older backups
forfiles /P "E:\mongodb-backups" /S /M *.zip /D -7 /C "cmd /c del @PATH"


Diagnosing issues with Elasticsearch

We use logstash in our infrastructure, which ingests logs and outputs transformed data to a store of your choice, which for us is Elasticsearch. We then use Grafana to visualise some of this data. Recently we noticed that Grafana was struggling to perform the queries it needed, which led to investigating and fixing issues with Elasticsearch.

Elasticsearch exposes a comprehensive administration API over HTTP, which you can easily access using curl. The following all assume that you are running Elasticsearch locally on the default port 9200.

First things first: to get basic information about the Elasticsearch instance, run

curl http://localhost:9200/

which will give you the version for example, so that you can be sure you’re looking at the right version of any documentation. To get the status of the cluster, you can run

curl http://localhost:9200/_cluster/health?pretty

which should give you a response like this:

  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 95,
  "active_shards" : 95,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 98.95833333333334

Including the pretty parameter will return a more human-readable response. There are various bits of useful information here. Firstly, the status will give you a general idea of how your cluster is doing: red means there are indices that are not available to query, yellow means that there are some that are not replicated, and green means everything is good. When the cluster first loads, you will notice that the shards go through the initializing_shards phase as they are spinning up, until they become active_shards. For us, around a third of the shards would reach the active phase before the service would restart itself.
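If you want to check on the status from a script (for monitoring, say), you can pull the field out of the JSON with a little sed. This is just a sketch: it assumes the response shape shown above, and works on a saved copy of that response rather than a live call.

```shell
# abridged copy of the health response above; live, you would use:
# health=$(curl -s http://localhost:9200/_cluster/health)
health='{ "cluster_name" : "elasticsearch", "status" : "yellow", "timed_out" : false }'

# extract the value of the "status" field
status=$(echo "$health" | sed -n 's/.*"status" *: *"\([a-z]*\)".*/\1/p')

echo "$status"   # prints yellow
```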

You can find out information about the individual shards by running

curl http://localhost:9200/_cat/shards?v

which should give you an output something like this:

index               shard prirep state           docs   store ip        node
logstash-2018.02.27 0     p      STARTED      6219832   7.3gb SL-GvDk
logstash-2018.04.04 0     p      INITIALIZING        SL-GvDk
logstash-2018.04.04 0     r      UNASSIGNED
logstash-2018.02.28 0     p      STARTED      5765860   6.8gb SL-GvDk
logstash-2018.02.13 0     p      STARTED      6810856   7.9gb SL-GvDk

Using the v parameter will give you headers in your output for many queries. There will be a line per shard in your cluster. INITIALIZING shards are currently spinning up on a node, and will then move to STARTED when ready to query. UNASSIGNED shards are ones which are waiting to be assigned to a node. We are only running a single-node cluster, so the replicas will never be assigned, meaning we would never get a green status. You can force all your existing indices to not have any replicas by running
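On a cluster with a lot of shards that listing gets long, and one quick way to summarise it is to count the shards in each state (the fourth column). A sketch over the sample output above, rather than a live call:

```shell
# sample _cat/shards output (taken from above); live, you would pipe in:
# curl -s http://localhost:9200/_cat/shards
shards='logstash-2018.02.27 0 p STARTED 6219832 7.3gb SL-GvDk
logstash-2018.04.04 0 p INITIALIZING SL-GvDk
logstash-2018.04.04 0 r UNASSIGNED
logstash-2018.02.28 0 p STARTED 5765860 6.8gb SL-GvDk'

# column 4 is the shard state; count how many shards are in each state
echo "$shards" | awk '{print $4}' | sort | uniq -c
```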

curl -XPUT -H 'Content-Type: application/json' http://localhost:9200/_settings -d '{ "index.number_of_replicas": 0 }'

However, this would not affect any indices created in the future. To ensure that replicas are not created for those either, you need to edit the index template so that new indices are created with the correct settings.
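For logstash data the template is typically called logstash; a settings fragment along these lines would do it (the template name and pattern here are assumptions about your setup, and on Elasticsearch 6+ the key is index_patterns rather than template):

```json
{
  "template": "logstash-*",
  "settings": {
    "number_of_replicas": 0
  }
}
```

You can fetch the current template with curl http://localhost:9200/_template/logstash, merge this setting into it, and PUT it back; note that PUT replaces the whole template rather than patching it.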

This cleaned up all our replica shards which would never be assigned, but still didn’t solve the problem of the endless rebooting. Looking at the logs (stored at /var/log/elasticsearch by default) showed that operations were repeatedly failing with

java.lang.OutOfMemoryError: Java heap space

So I increased the heap space by updating the JVM options, which can be found at /etc/elasticsearch/jvm.options; there are two options which need to be updated and kept the same as each other:

-Xms2g
-Xmx2g

This sets the heap to 2GB, which for us was sufficient to get everything working again after restarting the service:

/etc/init.d/elasticsearch restart

Now that everything was running again, we could review whether we still needed all those indices; you can list all indices by running

curl http://localhost:9200/_cat/indices

and then close any you don’t want any more by running something like:

curl -X POST "http://localhost:9200/logstash-2018.04.*/_close"

where wildcards are accepted (quote the URL so that your shell doesn’t try to expand the *). However, this won’t delete the data on disk; it just frees up the memory and resources the indices were using. If you want to permanently delete the data from disk as well, you can run:

curl -X DELETE "http://localhost:9200/logstash-2018.04.*"
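Since logstash index names embed the date as YYYY.MM.DD, plain string comparison is enough to pick out old indices, which makes a cleanup loop easy to script. A sketch, with example index names, a hard-coded cutoff, and the actual delete left commented out:

```shell
# index names as returned by the _cat/indices API (sample values); live:
# indices=$(curl -s "http://localhost:9200/_cat/indices?h=index")
indices='logstash-2018.02.27
logstash-2018.04.04'

cutoff="2018.03.01"

for idx in $indices; do
  # strip the prefix to leave the date; YYYY.MM.DD compares correctly as a string
  day=${idx#logstash-}
  if [[ "$day" < "$cutoff" ]]; then
    echo "would delete $idx"
    # curl -X DELETE "http://localhost:9200/$idx"
  fi
done
```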