Migrating to dovecot

I’ve been using cyrus imap for about 15 years; I’m probably its last user. Carnegie Mellon recently announced that they were abandoning cyrus-imap. I never tried to get any help from them anyway, so that isn’t a big deal in itself, but it did make clear that I was using an out-of-date product. I also knew the product to be fragile and brittle, with few repair tools available, and there were some nagging problems in my cyrus files. Overall, it was past time to move on.

I didn’t look far for a replacement; in fact I didn’t do much research at all. Dovecot seemed the obvious place to go. So after doing some reading I set about converting. My plan was to convert first on oregano, my local development machine, and get it working there; I get almost no mail there. Then, once I thought I knew what I was doing, I would convert one of the client websites I maintain, where again there is very little mail: only two or three accounts, mostly error notifications, nothing very important. Finally, after those two, I would convert the mail on tarragon, where there is some 13GB of mail for about a dozen users.

Adjusting the size of the tarragondata volume

At one point I was getting low on space on tarragondata, so I added an additional physical device to the btrfs filesystem containing tarragondata.

[root@tarragon backup_scripts]# btrfs fi show
Label: 'tarragon_data' uuid: d6e4b6fc-8745-4e6e-b6b4-8548142b5154
 Total devices 2 FS bytes used 92.04GiB
 devid 1 size 120.00GiB used 120.00GiB path /dev/xvdf1
 devid 2 size 30.00GiB used 30.00GiB path /dev/xvdg

This is fine, but there are a couple of problems. The main one is that EC2 snapshots operate on a single volume, so with the filesystem spanning two devices I could no longer take a consistent snapshot of tarragondata, and the nightly EC2 snapshot job I had been using had to be disabled.

But now I am about to create a new tarragon instance, and it would be really helpful to be able to snapshot tarragondata (Amazon snapshot, not btrfs snapshot) and then create a new Amazon volume with a consistent snapshot for testing.

So to do that, I am going to use the btrfs feature of adding and removing physical volumes to consolidate tarragondata on a single physical volume.

Steps are:

1) Create a new 150GB volume in the Amazon EC2 dashboard and attach it to tarragon.

2) On tarragon, add the new physical volume to the tarragondata btrfs filesystem.

 btrfs device add /dev/xvdh /mnt/tarragondata

3) Remove the older, smaller physical devices one at a time:

 btrfs device delete /dev/xvdg /mnt/tarragondata
 btrfs device delete /dev/xvdf1 /mnt/tarragondata

This successfully moved all the data and metadata onto the new 150GB volume, after which I could snapshot it in EC2.
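Before taking the snapshot it is worth confirming the consolidation actually finished. A quick check might look like this (the device and mount point names are the ones from this post; fsfreeze to quiesce the filesystem around the snapshot is my addition, not something the post describes):

```shell
# After the device deletes complete, the filesystem should report one device.
btrfs filesystem show /mnt/tarragondata    # expect only devid 1 ... /dev/xvdh
btrfs filesystem usage /mnt/tarragondata   # confirm data/metadata all on that device

# For a crash-consistent EBS snapshot, flush and pause writes around it:
sync
fsfreeze -f /mnt/tarragondata
# ... take the EBS snapshot of the xvdh volume in the AWS console or CLI ...
fsfreeze -u /mnt/tarragondata
```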

mdadm consistency checks

On ubuntu it seems there is an automatic mdadm array check provided in /etc/cron.d/mdadm, installed automatically with the mdadm package. It invokes a utility, /usr/share/mdadm/checkarray, and the cron job is arranged to run at 12:57am on the first Sunday of every month. Worse, it starts the check on all arrays at the same time.
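For reference, the stock Debian/Ubuntu cron entry looks approximately like this (reconstructed from memory, so the exact flags may vary by release):

```shell
# /etc/cron.d/mdadm (approximate stock contents)
# Fires at 00:57 every Sunday; the date test restricts it to the first
# Sunday of the month, and --all starts the check on every array at once.
57 0 * * 0 root if [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ]; then /usr/share/mdadm/checkarray --cron --all --idle --quiet; fi
```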

This is horrible! With 5 arrays totalling 25TB, when this sucker fires up it quickly saturates the I/O capacity of cinnamon, slows to a crawl, and settles in to run forever.

I’ve commented that out and added my own /etc/cron.d/dee_mdadm, which skips all the goofy shenanigans to ensure the thing runs on a Sunday (why?! because the guy who wrote it doesn’t work on Sundays?). Instead, my version simply runs on the first of the month at 12:57am, and each month it starts the consistency check on a different array. I have 5 arrays, so 3 are checked twice a year and 2 are checked thrice. Checking just one at a time means there is a good chance it will finish before morning, at least for the smaller arrays.

I don’t really think the whole consistency check idea is doing me much good, but at least this doesn’t unaccountably bring the system to its knees on the first Sunday of every month.