How to version DLLs using TeamCity

It’s useful to include a version in the DLLs that you deploy, to make it easy to check what code is actually being used where. We build our projects using TeamCity, and it’s possible to write build scripts that grab the version from an environment variable that TeamCity sets and insert it into your AssemblyInfo.cs files. However, if you have a fairly standard project setup, there’s an easier way: the AssemblyInfo Patcher build feature that TeamCity provides. This runs after all files have been checked out, automatically finding all AssemblyInfo.cs files and replacing the relevant versions (e.g. in the [assembly: AssemblyVersion("1.0.0.0")] attribute) with the current build number. The build then proceeds as normal, and the changes are reverted once it has finished.
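
For reference, these are the version attributes you’d typically find in a Properties/AssemblyInfo.cs file, all of which the patcher can rewrite (a representative sketch – your own file will contain other attributes too):

using System.Reflection;

// Version attributes patched by TeamCity for the duration of the build,
// then reverted once it has finished.
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
[assembly: AssemblyInformationalVersion("1.0.0.0")]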

To enable this feature, simply edit the configuration settings for the build you wish to add it to, click “Build Features” in the sidebar on the left, then click the “Add build feature” button and select “AssemblyInfo patcher” from the options. Configuration is fairly minimal, but allows you to customise the format of the version used for the various attributes if you don’t want to just use the default build number.

[Screenshot: TeamCity build features]

The AssemblyInfo Patcher is just a pre-configured version of the TeamCity File Content Replacer build feature, which you can use if you have more advanced needs.

Exporting and importing CSV data in RavenDB

In today’s episode of not-entirely-obvious RavenDB functionality, I was trying to use the CSV import/export functionality to transfer a collection from our live DB to our test DB, since at the time of writing the “export database” functionality seems to be broken if you try to use it to export only specific collections.

Exporting is obvious enough – simply navigate to a collection you’re interested in exporting, optionally select the documents you want to export by ticking the checkboxes, and then click the export CSV button as shown below.

[Screenshot: the export CSV button]

This will download a file called export.csv. You can then go to Tasks -> CSV Import to import the data into another database. However, if you just import the CSV as it came out of RavenDB, you’ll find yourself creating a new collection called “export”, with all your documents from the CSV imported with auto-generated GUIDs for the document IDs.

If, instead, you’d like to import the documents into a collection with the same name and IDs as the original, as I did, you’ll need to do the following:

  1. Rename the CSV file to match the desired collection name; for example, Cars.csv would import into a collection called Cars

  2. Column names in the CSV prefixed with an @ sign are ignored on import. If you want to preserve the document IDs, open the CSV and change the @id header to id, and the IDs will be included when importing (see the example below).
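
For instance, continuing the hypothetical Cars.csv example (the other column names here are made up and will depend on your documents), you would change the header row from:

@id,Make,Model

to:

id,Make,Model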

Please note, however, that RavenDB will not update the HiLo documents if you import data from a CSV, so you’ll need to make sure you update those yourself separately to avoid ID collisions when you try to create new documents in the same collection.
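
In the version we were using, the HiLo state lives in ordinary documents with IDs like Raven/Hilo/cars (this may differ in your version, so check your own system documents). A minimal sketch of the manual fix, assuming the highest imported document ID was cars/1042, is to edit that document so its Max is at least as high:

{
    "Max": 1042
}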

Working with time intervals in PostgreSQL and Excel

By default, PostgreSQL (PSQL to its friends) formats time intervals in a way that makes them much more human-readable, which is nice until you have intervals longer than a day and you want to manipulate the data further in something like Microsoft Excel.

To make the time intervals machine-readable again, you can use something like the following (which I found here):

SELECT date_part('epoch', time_interval_field) * INTERVAL '1 second';

This extracts the number of seconds from your interval, and then multiplies it by an interval in order to convert it back to the INTERVAL type. Seems silly, but this forces the formatting into hh:mm:ss, so if you have more than 24 hours, rather than seeing 1 day 12:00:35.12312, you’ll get 36:00:35.12312.
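
As a fuller sketch, assuming a hypothetical jobs table with started_at and finished_at timestamp columns:

-- Format each job's duration as hh:mm:ss (plus fractional seconds),
-- even when it exceeds 24 hours.
SELECT date_part('epoch', finished_at - started_at) * INTERVAL '1 second' AS duration
FROM jobs;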

Then all that remains is to use a custom format in Excel when loading your data:

[h]:mm:ss

The square brackets tell Excel to allow hour values of more than 24 (rather than showing hours modulo 24, as it would otherwise), and the rest is fairly self-explanatory. If you care about milliseconds, you can include them with [h]:mm:ss.000 (Excel only supports up to three decimal places on the seconds).

How to get user IDs in Slack

For those who haven’t yet used it, Slack is a powerful communication platform, based around a chatroom-style way of interacting. I won’t go into details here, except to say that it also has an API that provides various methods to interact with the system. Many of these require you to supply a user ID to act on – and user IDs are not visible in any of the admin UI sections.

Therefore the easiest way to get IDs for your users is to call the users.list method, which you can run from their test harness at https://api.slack.com/methods/users.list/test. This will give you a full list of users along with their IDs ready for you to use.
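
If you’d rather do this from the command line than the test harness, the same method can be called with curl – a sketch, assuming you have a valid API token to hand:

curl -H "Authorization: Bearer xoxp-your-token-here" https://slack.com/api/users.list

The JSON response contains a members array, with each user’s id field alongside their name.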

Copying/moving a Git repository including all branches to a new remote repository

Please note: this is written from the perspective of using the Git Bash shell.

  1. Clone the repository to be copied into a new location to ensure it is a fresh checkout
    git clone old-repo-url checkout-location
  2. cd into checkout-location, and run the following to checkout all branches from the remote:
    for remote in `git branch -r | grep -v HEAD`; do git checkout --track $remote; done
  3. If it doesn’t already exist, create the new remote repository e.g. new-repo
  4. Add a new remote that points to the new repository:
    git remote add new-repo new-repo-url
  5. Run the following to push all branches to the new remote (note that this doesn’t include tags – see below):
    git push --all new-repo
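
As noted, git push --all only pushes branches; if you also want to transfer tags to the new remote, follow it up with:

git push --tags new-repo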

Renaming a Git repository under Gitolite, keeping an alias for the old name

(If you haven’t yet seen it, Gitolite is a set of scripts for configuring a central Git server – see https://github.com/sitaramc/gitolite)

Renaming a repository is simple and fairly painless:

  1. Log onto your Git server (and any slaves), go to the repositories directory (by default it’s at ~/repositories), and rename your repository:

    mv OldRepoName.git NewRepoName.git

  2. Update your Gitolite configuration to use the new repository name, and push your changes

However, if after renaming the repository you also want to continue supporting the old name (e.g. for a transition period while users move over to the new name), then you can use the alias feature of Gitolite to do that:

  1. Log onto your Git server, and open the “rc” file, which is at ~/.gitolite.rc
  2. Look for the INPUT variable, and add (or uncomment) the 'Alias::input' line inside the array (ensuring it comes before the 'Mirroring::input' element), for example:
    INPUT                       =>
        [
            # 'CpuTime::input',
            # 'Shell::input',
            'Alias::input',
            'Mirroring::input',
        ],
  3. Add a new variable called REPO_ALIASES with the aliases you want to support:
    REPO_ALIASES    =>
        {
            'foo'   =>  'bar',
        }

Now if you try to interact with the old repository (e.g. git@server:foo) it will redirect to the new repo (git@server:bar), giving the user a warning:

WARNING: 'foo' is an alias for 'bar'

Using Web Platform Installer behind a non-transparent proxy

Unfortunately, we have a non-transparent proxy set up at work. Leaving discussions about that aside, we’ve recently been looking at automating the provisioning of our infrastructure with Chef. When installing IIS using the community cookbook, which relies on Microsoft’s Web Platform Installer (WebPI), the proxy causes the following error:

InternetOpenUrl returned 0x80072EFD: Unknown error 12029.

After reading around many blogs, including this one, I found a fix for the issue: update WebpiCmd.exe.config (located under C:\Program Files\Microsoft\Web Platform Installer or similar) to add the following inside the configuration element:

<system.net>
  <defaultProxy>
    <proxy proxyaddress="http://proxyserver:port" bypassonlocal="true" />
  </defaultProxy>
</system.net>

There’s potential for further configuration (e.g. for authentication) via the other items available under the system.net element. If you’re using the UI, you may need to update WebPlatformInstaller.exe.config as well.

This seemed at the time to be the “best” solution, since it makes the configuration explicit, but considering how much time it eventually took (we had to update the Chef cookbook to insert this configuration), it may be worth pursuing other options, such as setting the proxy in the registry instead.
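
One such option: the machine-wide WinHTTP proxy can be set from an elevated command prompt – a sketch, with placeholder server and port, and assuming WebPI honours this setting (I haven’t verified that it does):

netsh winhttp set proxy proxy-server="proxyserver:port" bypass-list="<local>"

This stores the proxy configuration in the registry, avoiding the need to patch individual .config files.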

Using multiple versions of the same namespace in the same Visual Studio project

I had an issue recently when using the NServiceBus.Testing library (version 2.6.0.1504, which is a bit outdated now, but the problem is still interesting). The NServiceBus.Testing library comes with the popular Rhino Mocks framework built in, so by adding a reference to NServiceBus.Testing you automatically end up with the Rhino.Mocks namespace being populated. Unfortunately, I was already using a later version of the Rhino Mocks framework in my project, so this caused a conflict. I couldn’t just remove the existing Rhino Mocks reference because I was relying on features that were not available in the version included with NServiceBus.Testing, and obviously I needed features from the NServiceBus.Testing library, so I had to find a way for the two to co-exist peacefully.

This is possible using reference aliases. By default, when you add a reference to a project, the namespaces in the reference are added with the global namespace alias. However, you can customise the alias by changing the properties of the reference, like so:

[Screenshot: setting the reference alias in the properties pane]
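
If you prefer editing the project file directly, the alias is stored as an Aliases element on the reference in the .csproj – a sketch, with the Include (and any HintPath) depending on your own project:

<Reference Include="NServiceBus.Testing">
  <!-- Removes this reference from the global alias; it must be imported via the NServiceBusTest alias instead -->
  <Aliases>NServiceBusTest</Aliases>
</Reference>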

This stops the reference being added to the global namespace alias, which in this case meant that references to the Rhino.Mocks namespace resolved (correctly) to the Rhino.Mocks reference instead of NServiceBus.Testing. To use namespaces from a custom-aliased reference in a class, you declare the alias with the extern keyword at the top of the file, and then prefix the relevant namespaces with the alias and a double colon (::) in your using statements, e.g.

extern alias NServiceBusTest;

using NServiceBusTest::NServiceBus.Testing;
using NUnit.Framework;
using Rhino.Mocks;

namespace Tests 
{
    [TestFixture]
    public class TestClass
    {
        private Saga<MySaga> _saga;
        private ISomeDependency _dependency;

        [SetUp]
        public void CreateSaga()
        {
            _dependency = MockRepository.GenerateMock<ISomeDependency>(); // resolves to the actual Rhino Mocks reference
            _saga = Test.Saga<MySaga>(); // comes from the NServiceBus testing framework
        }
    }
}

Job’s a good ‘un.

FxCop v10.0 download

For some reason (possibly to do with dependencies), Microsoft buried the latest version of FxCop (v10.0.30319.1) inside a Windows SDK, so you have to download a ~570MB file to get at a ~3MB installer for FxCop. For future convenience, I’ve uploaded the installer so you can grab it from the following link:

http://files.adrianlowdon.co.uk/FxCopSetup.exe

This file comes from the x64 ISO downloaded from the page here, following a tip from the blog here.

Obviously I don’t take any responsibility or credit for this installer/program – go talk to Microsoft 🙂

Reset sa password in SQL Server

By putting SQL Server into single-user mode, you can log in using any Windows administrator account – such accounts are granted sysadmin permissions in this mode, even if no equivalent SQL login exists. This is very useful when you’ve lost sysadmin access (a not-entirely-hypothetical example: someone set up a database, removed all sysadmin accounts except sa, and then couldn’t remember the sa password a few weeks down the line), since you can reset the sa password (or the password for any other account, for that matter). This post has more details, but essentially:

  1. Open the SQL Server Configuration Manager
  2. Click on the SQL Server 200{5,8} Services leaf, and stop the SQL Server instance you want to put into single-user mode
  3. Open the properties of the instance, go to the Advanced tab, and in the Startup Parameters option, add “;-m” (exactly as shown, without the quotes) to the end of the existing value
  4. Click OK, and restart the SQL Server instance
  5. You can now use sqlcmd with a Windows administrator login to execute SQL commands against the instance (see the example below)
  6. Once you’ve finished, don’t forget to remove the -m parameter and restart the SQL instance again to leave single-user mode 😉
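
For example, resetting the sa password might look like this (a sketch – the instance name and new password are placeholders, and the ENABLE line is only needed if the sa login has been disabled):

sqlcmd -S .\SQLEXPRESS -E
1> ALTER LOGIN sa WITH PASSWORD = 'NewStrongPassword1!';
2> ALTER LOGIN sa ENABLE;
3> GO

The -E flag uses Windows authentication, which is what gets you in while the instance is in single-user mode.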