A low-tech way to apply ECC for disk storage
kragen at pobox.com
Mon Nov 21 03:37:02 EST 2005
Suppose you're in the not terribly unusual position of having a bunch
of big files (megabytes at least) on a bunch of unreliable disks.
Every once in a while, one of the disks fails, or user error deletes a
file. But you would prefer not to lose the data in any of these
files, because it's valuable. What do you do?
A straightforward and reliable solution is to keep multiple copies of
each file, say two or three, and a message-digest of each file (such
as MD5 or SHA-1) so you can detect silent corruption. When you notice
a disk has failed or a file is missing, you make another copy of one
of the remaining copies.
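For instance, computing such a digest in Python (a sketch using the standard hashlib module; the chunked read is just so that large files need not fit in memory):

```python
import hashlib

def file_digest(path, algorithm="sha1", chunk_size=1 << 16):
    """Hash a file in fixed-size chunks and return the hex digest."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()
```

Store the digest alongside each copy; rerunning file_digest() later and comparing detects silent corruption.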
This scheme is appealingly simple, but the copies are expensive, which
creates a temptation to skimp on the mirroring in some applications.
An error-correcting code transforms some number of bits N into a
larger number of bits M, and can do the reverse transformation
(usually very quickly) even if you flip or erase some of the M bits.
Typically you can erase up to any M-N bits or flip up to any
floor((M-N)/2) bits. In many codes, the first N of the M bits are
identical to the original N bits --- you just append M-N check bits to
the original bits. (These are called systematic codes.)
The simplest such code is the parity code: M=N+1, and you XOR your N
bits together to get your Mth bit. This can detect any single-bit
error, and if you know which bit was lost, it can correct it, too.
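A minimal sketch of the parity code in Python; the function names here are made up for illustration:

```python
def parity(bits):
    """XOR N bits together to get the single parity bit (M = N + 1)."""
    p = 0
    for b in bits:
        p ^= b
    return p

def recover(bits_with_gap, parity_bit, lost_index):
    """If we know WHICH bit was erased, XORing all the surviving bits
    with the parity bit reconstructs the missing one."""
    p = parity_bit
    for i, b in enumerate(bits_with_gap):
        if i != lost_index:
            p ^= b
    return p
```

So parity([1, 0, 1, 1]) gives 1, and if bit 2 is then erased, recover() rebuilds it from the other three bits plus the parity bit.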
So if you have N files of about the same size, you can read through
them in parallel and generate M-N "ECC files" containing those extra
error-correction bits. (For concreteness, we could talk about
RS(28,24), a Reed-Solomon code that takes 24 8-bit symbols at a time
and adds 4 more to get a total of 28 8-bit symbols.) When you get
past the ends of some of the files, you can pretend they continue with
zero bytes.
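For illustration, here is a sketch of that generation step using the simple parity code (M-N=1) rather than RS(28,24); a real tool would use a stronger code, but the chunked, zero-padded pass over the input files is the same idea:

```python
def write_parity_file(input_paths, out_path):
    """Byte-wise XOR of the input files; shorter files are treated as
    if they continued with zero bytes past their real end."""
    streams = [open(p, "rb") for p in input_paths]
    try:
        with open(out_path, "wb") as out:
            while True:
                chunks = [f.read(1 << 16) for f in streams]
                longest = max(len(c) for c in chunks)
                if longest == 0:  # all inputs exhausted
                    break
                block = bytearray(longest)  # implicit zero padding
                for c in chunks:
                    for i, byte in enumerate(c):
                        block[i] ^= byte
                out.write(block)
    finally:
        for f in streams:
            f.close()
```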
These ECC files contain enough information to replace any one of the
existing files, should it be lost or corrupted. Depending on which
code you use, you may be able to replace several lost files, as long
as the others aren't lost. (With RS(28,24), you could lose up to four
of the existing files, as long as you knew they were lost.)
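With the parity code, recovery is the same XOR pass run over the survivors plus the ECC file. A sketch; the original_length parameter, used to trim the zero padding back off, is assumed to come from metadata you stored somewhere:

```python
def recover_file(surviving_paths, parity_path, out_path, original_length):
    """XOR the surviving inputs with the parity file to rebuild the
    lost one, then stop at the recorded original length."""
    streams = [open(p, "rb") for p in surviving_paths + [parity_path]]
    try:
        with open(out_path, "wb") as out:
            written = 0
            while written < original_length:
                chunks = [f.read(1 << 16) for f in streams]
                longest = max(len(c) for c in chunks)
                if longest == 0:
                    break
                block = bytearray(longest)
                for c in chunks:
                    for i, byte in enumerate(c):
                        block[i] ^= byte
                take = min(len(block), original_length - written)
                out.write(block[:take])
                written += take
    finally:
        for f in streams:
            f.close()
```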
So, generate your ECC files from a group of files that are on separate
disks (ideally on separate machines, in separate racks, in separate
hosting centers, served by separate networks, on separate continents)
and store them in files on yet other disks (preferably as
separate as possible, as well). Then you can recover any of the files
if they get lost, at relatively low overhead cost.
Finding the original files
There's a practical difficulty if you have a bunch of ECC files full
of apparently random binary data, though, which is that they're only
useful for data reconstruction if you can find the original files they
were generated from (and the other ECC files as well). So I suggest
beginning the file with some kind of boilerplate header, saying "This
is an ECC file generated by ECC-file-mangler version 2.4, available at
http://www.example.com/eccfile; see the end of the file for more
details", and appending a list of metadata about the input files to
the end of the file, so that you can figure out which of your zillions
of not-lost files you need to read to reconstruct your lost files.
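A sketch of the trailer side of this, with a made-up format: append a JSON list of per-file metadata followed by a fixed-size length field, so a reader can locate the trailer by seeking backwards from the end of the file:

```python
import json
import os

def append_trailer(ecc_path, input_paths):
    """Append a metadata trailer naming the input files, then an 8-byte
    big-endian length, so the trailer can be found from the file's end."""
    meta = [{"name": os.path.basename(p), "bytes": os.path.getsize(p)}
            for p in input_paths]
    blob = json.dumps(meta).encode()
    with open(ecc_path, "ab") as f:
        f.write(blob)
        f.write(len(blob).to_bytes(8, "big"))

def read_trailer(ecc_path):
    """Seek backwards: the last 8 bytes give the trailer's size."""
    with open(ecc_path, "rb") as f:
        f.seek(-8, os.SEEK_END)
        n = int.from_bytes(f.read(8), "big")
        f.seek(-8 - n, os.SEEK_END)
        return json.loads(f.read(n))
```

The fixed-size length field at the very end is what makes the trailer findable without scanning the whole file.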
Under the title "filesystem metadata indexing, yet again", I published
a program that will run through your filesystem and compute various
metadata about all your files. Its output contains, among other
things, each file's original name and the number of bytes in it.
Metadata like this, appended to the end of the ECC file, would enable
you to figure out which among your thousands of big files the ECC file
pertains to, and perhaps what to call the reconstructed file.
More information about the Kragen-tol mailing list