Atp's external memory

btrfs recovery

This article has been superseded by

I had high hopes for BTRFS. The brochure was very enticing. Checksums, snapshots, disk management... all good things for someone who fondly remembers Digital's Advanced File System (AdvFS). Unfortunately the brochure describes something that is, right now, a construction site.

Lately I've become really disenchanted with it. There are several reasons for that, but the general bugginess and instability is the main reason my enthusiasm is waning.

At home I have several filesystems across several hosts that all run BTRFS, and filesystem problems are a common occurrence under both light and heavy loads. Even with the latest Fedora (21) or the latest kernels (3.17.6 at the time of writing) on CentOS the situation has not improved much. It still feels like we're beta testing something that has been rushed to market.

For example, running a Mac Time Machine share via netatalk (for the missus' computer) is a rapid way to kill a BTRFS filesystem. I think it's something to do with sparse files. However, even a bog-standard rsync of normal files has been able to make a small 3-disk raid1 setup go read-only on me fairly rapidly.

Aha - you'll be thinking that this is some dodgy hardware problem and I'm pointing the finger unfairly at BTRFS. Well, that's not the case, as I've seen this happen on multiple machines, some running CentOS + EPEL kernels and some running Fedora 20/21. Plus it's acted up at work recently too.

Btrfs clearly has "the momentum" and "the mindshare" in Linux land. So I'm now in the awkward position of not trusting my data to the officially sanctioned future. The alternatives are very unappealing.

EXT4 is solid, but based on old technology.

ZFS brings too much baggage. It'll probably be what I use next, out of sheer desperation.

XFS is looking tired. It's a first-generation journal-based filesystem lacking some of the modern conveniences. Percona say it's faster for databases, and there's freeze and some other good stuff, but none of the killer features. It's basically just a better ext4.

Personally I think the decision to make XFS the default filesystem for Red Hat is very telling. BTRFS is not there yet; they also don't like the baggage of ZFS - plus it conflicts with their NIH mindset. XFS offers improvements over EXT4, but they're marginal.

Even Jolla only use btrfs for /home, and ext4 for the important partitions (as of u10, Vaarainjärvi).

It seems that everyone is waiting for BTRFS to grow up.

In the meantime, here are my recovery steps for a corrupted BTRFS filesystem.

I wonder how hard it would be to add checksums to advfs...

So, while I've been recovering my largest btrfs filesystem from yet another problem, I thought I'd write this guide to repairing btrfs. This time it was the little one pressing the power button at the wrong moment. The scrub is almost complete now; then it's time to rsync stuff to an ext4 filesystem for safety, reboot, and deal with those uncorrected errors.

Under no circumstances look at btrfsck until you've run out of other alternatives.

Generally, recovery for btrfs looks like this:

Step 1 - Glance longingly over the fence at the green grass of zfs - sigh.

Step 2 - Try a normal mount and look at dmesg.

mount -t btrfs /dev/sdc /export/btrfs

If it didn't mount, do you have a missing disk? If so, mount with the degraded option:

mount -o degraded /dev/sdc /export/btrfs

btrfs didn't deal well with a dead disk at work.

If you see a message like:

BTRFS: couldn't mount because of unsupported optional features (40).

in your dmesg, then you've accidentally booted into an older kernel. If you're running btrfs you'll be on a recent kernel anyway - assuming you've got a shred of sense.

If the filesystem has mounted at this point, move on to step 3 - the scrub.

If you see lots of transid messages in the dmesg log, panic mildly - it's likely that the next step will fix them.

If the filesystem is still not mounted, check the obvious things: the mount point exists, the kernel version is right, and the device name you're using actually exists and is part of the btrfs volume.

btrfs fi show should give you something like this:

$ btrfs fi show
Label: archive  uuid: da455304-d586-4213-97a1-beee2deac8bc
    Total devices 2 FS bytes used 424.80GiB
    devid    1 size 465.76GiB used 465.76GiB path /dev/sdb
    devid    2 size 465.76GiB used 465.76GiB path /dev/sdc

Btrfs v3.12

If that doesn't work, do a btrfs device scan:

$ btrfs device scan
Scanning for Btrfs filesystems

And try again. If the disks don't show, you're out of luck at this point. Check the hardware.

Assuming you have the disks visible and everything else checked out, try a recovery mount:

$ mount -t btrfs -o recovery,nospace_cache /dev/sdc /export/btrfs

If you still don't get the filesystem mounted, restore from backups if you have them, or dig elsewhere on the internet - there are plenty of other pages like this. In desperation, you may want to try the btrfsck --repair command in step 4.

Assuming you got the filesystem mounted by some combination of the above:

Step 3 - run a scrub.

btrfs scrub start /export/btrfs

You can monitor it with:

btrfs scrub status /export/btrfs

This should go through and clean up checksum errors and correct anything correctable.

I've found this deals with the majority of 'soft' errors that look so scary in dmesg.
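Rather than re-running the status command by hand, you can wrap it in a small wait loop - a sketch, assuming the volume is mounted at whatever path you pass in:

```shell
#!/bin/sh
# Poll the scrub until it stops reporting "running", then print the summary.
scrub_wait() {
    while btrfs scrub status "$1" | grep -q running; do
        sleep 60
    done
    btrfs scrub status "$1"
}

# e.g. scrub_wait /export/btrfs
```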

However, you may come across uncorrectable errors. They look like this:

scrub status for 66bb5f88-2c63-4d8a-83a4-e9b606571d1f
    scrub started at Sat Jan  3 11:19:13 2015, running for 17640 seconds
    total bytes scrubbed: 5.03TiB with 16 errors
    error details: csum=16
    corrected errors: 0, uncorrectable errors: 16, unverified errors: 0

with dmesg errors that may look like:

[12656.283626] BTRFS: bdev /dev/sdd errs: wr 0, rd 0, flush 0, corrupt 12, gen 0
[12656.284355] BTRFS: unable to fixup (regular) error at logical 9415517843456
on dev /dev/sdd
[12656.421257] BTRFS: checksum error at logical 9415517847552 on dev /dev/sdd,
sector 2587456608, root 5, inode 7383923, offset 4325376,
 length 4096, links 1 (path: mac_timemachine/
MacBook Pro.sparsebundle/bands/6262)
[12656.421273] BTRFS: bdev /dev/sdd errs: wr 0, rd 0, flush 0, corrupt 13, gen 0

(linebreaks added for readability)

After one of my "normal" btrfs failures I tend to get a lot of corrected errors in the scrub status. The uncorrectable ones are the problematic ones.
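To see exactly which files the uncorrectable errors landed in - so you can restore just those from backup rather than the whole volume - you can pull the (path: ...) field out of those dmesg lines. A sketch (the function name is my own, and it assumes the errors are on one physical line each, as dmesg prints them):

```shell
#!/bin/sh
# Extract the affected file paths from BTRFS checksum-error lines in dmesg.
extract_bad_paths() {
    grep 'BTRFS: checksum error' | sed -n 's/.*(path: \(.*\))$/\1/p' | sort -u
}

# e.g. dmesg | extract_bad_paths
```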

If you get these, try a recovery mount as listed above and a second scrub.

Failing that

Step 4 - run btrfsck --repair

At this point you've pretty much given up hope. It may work, or it may not. It worked for me once, but not all my files came back. I hope you have backups.

btrfsck --repair /dev/sdc

Before you try that command, read this page;

It lists all the things you should do first.

Here's what it looks like:

Fixed 0 roots.
checking extents
checking free space cache
checking fs roots
checking csums
checking root refs
enabling repair mode
Checking filesystem on /dev/sdc
UUID: 66bb5f88-2c63-4d8a-83a4-e9b606571d1f
cache and super generation don't match, space cache will be invalidated
found 1755560853731 bytes used err is 0
total csum bytes: 2955891292
total tree bytes: 13966262272
total fs tree bytes: 10553098240
total extent tree bytes: 266878976
btree space waste bytes: 1691246361
file data blocks allocated: 3055467118592
 referenced 3025937235968
Btrfs v3.17

So there you go.

I've transitioned the most problematic one to zfs. I considered whether md-raid and ext4 would make sense, but hope triumphed over experience. I'll keep one or two btrfs filesystems around, depending on how long I can afford to spend at the computer repairing filesystems rather than doing more productive things.

For reference, a partial list of other BTRFS annoyances:

  • free space calculation - no come on - really?
  • something like raid-z or working raid5/6. I tried btrfs raid6 once. It ate my array. Even in this world of multi terabyte devices I'd like to get past mirroring everything. 
  • using underlying devices to mount the fs. That's untidy. I've also found that occasionally btrfs won't let me use sdc (even though it's part of the array) but I have to use sdb or sdd.
  • a btrfsck program that does the above in sequence to make this blog post irrelevant.
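Until such a tool exists, the sequence in this post can at least be sketched as a shell function - hypothetical and lightly tested, and no substitute for actually reading dmesg between attempts:

```shell
#!/bin/sh
# The recovery sequence in order: normal mount, degraded mount,
# recovery mount, then a foreground scrub. btrfsck --repair is
# deliberately left to a human.
recover_btrfs() {
    dev="$1" mnt="$2"
    mount -t btrfs "$dev" "$mnt" 2>/dev/null \
        || mount -t btrfs -o degraded "$dev" "$mnt" 2>/dev/null \
        || mount -t btrfs -o recovery,nospace_cache "$dev" "$mnt" \
        || return 1
    btrfs scrub start -B "$mnt"    # -B: stay in the foreground until done
    btrfs scrub status "$mnt"
}

# e.g. recover_btrfs /dev/sdc /export/btrfs
```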

However, I'm sure people are far too busy implementing cool features for docker / today's tech fad to concentrate on anything so boring as usability or reliability.

I'm also keenly aware of the standard refrain of "if you don't like it, it's open source, so you can fix it". I'm not sure, in these days of corporate open source, that I'd meet the coding guidelines.

Grumbling aside, I'm actually quite grateful to the btrfs developers and wish them well. I'll be keeping tabs on where it's at.


Written by atp

Saturday 03 January 2015 at 3:50 pm

Posted in Default
