... rather than the current limit of 32.
Note that some backends have much lower limits, both before and after:
- jerasure_rs_cauchy caps out at 16 fragments
- the flat XOR codes are generally much more prescriptive about the
  number of data and parity fragments
Those limits are still in place, but the other publicly accessible
backends all seem to support up to the new limit.
Change-Id: Icbc7b2e505442e5b3eb2b844637d5270be6de4d1
Signed-off-by: Tim Burke <tim.burke@gmail.com>
This is similar to the isa_l_rs_vand_inv backend, but takes an additional
parameter l, the number of local parities. The first g := m - l parities
are "global" parities and computed exactly the same as for isa_l_rs_vand_inv.
The last l parities are "local" parities, whose coefficients may be combined
into one additional global parity. As a result, decoding will succeed for
all possible sets of k + l - 1 fragments. Each local parity is grouped
with a set of data fragments such that any one of them may be reconstructed
from the others, rather than requiring a full k fragments. For example,
given a scheme like k = 8, m = 4, l = 2, then
* fragments 0 through 7 are data fragments
* fragments 8 and 9 are global parities
* fragments 10 and 11 are local parities
* any set of 9 unique fragments will be able to decode the original data
* any set of 4 unique fragments from 0, 1, 2, 3, and 10 will be able to
  reconstruct the missing fragment from that group
* similarly, any set of 4 unique fragments from 4, 5, 6, 7, and 11 will
  be able to reconstruct the missing fragment
If k is not evenly divisible by l, groups are sized to be within one
fragment of each other, with the larger groups having earlier data
fragments. For example, given a scheme like k = 15, m = 5, l = 4, then
the local reconstruction groups are
* fragments {0, 1, 2, 3, 16}
* fragments {4, 5, 6, 7, 17}
* fragments {8, 9, 10, 11, 18}
* fragments {12, 13, 14, 19}
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Signed-off-by: Tim Burke <tim.burke@gmail.com>
Change-Id: I2884cda24ba72d4025b175f4357ccd7ffbb48c63
This function should return either 0 or -EINSUFFFRAGS and can be used
as a quick check ahead of actually attempting reconstruction. If not
provided, default to checking for at least k fragments.
Implement the new function for flat xor codes so they can reconstruct
in more cases.
Change-Id: Ifa373e4ef8ef3caedb709c40c2b2bfd6fdf6ff7e
Signed-off-by: Tim Burke <tim.burke@gmail.com>
Use a flag in the backend's declared ops instead -- then if any new
backends get added that also need to go through the backend's decode
routine even when provided with nothing but "data" fragments, we don't
have to keep updating erasurecode.c
Change-Id: I10c9a369572ded43046e33f37bb9403b82f1b830
Signed-off-by: Tim Burke <tim.burke@gmail.com>
If a header file references erasurecode_backend.h structures, it ought
to include the header file.
Note that the already-present
#ifndef _ERASURECODE_BACKEND_H_
#define _ERASURECODE_BACKEND_H_
...
#endif // _ERASURECODE_BACKEND_H_
guards ensure we don't get multiple definitions.
Change-Id: I5b8b63452b5751295cf89236693f98378f949e18
When parity is higher than 5, the rs_vand decoding matrix is not
invertible for some combinations of missing data and parity fragments.
Add a new backend with a modified generator matrix that is suited to
parity >= 5.
Use rs_cauchy or the new modified matrix when parity >= 5.
Related-Bug: #1639691
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Change-Id: I09abfc619893da7fd3d0740fed3586fdd46791d9
Some distributions (such as Ubuntu, and presumably Debian) provide only
libJerasure.so.2.0.0 and libJerasure.so.2 (as a symlink to the specific
version), with no libJerasure.so symlink. On those, some tests would
previously erroneously skip with
Could not open Jerasure backend. Install Jerasure or fix
LD_LIBRARY_PATH. Passing.
Change-Id: I778543e7c4cfb37d1baf1ee9a1823a809a343343
It has always taken a size_t size argument, ever since it was
introduced in 05b1a3bde1 as crc32.
Closes-Bug: #2051613
Change-Id: Ic2aabb75c5e42ae4e6eeb3dd990a6093828f5615
Some files use tabs instead of spaces for indentation. Others even mix
tabs and spaces, which is quite confusing.
This updates all *.c and *.h files to use spaces consistently.
Note that indent width is still inconsistent (2 vs 4), which may be
fixed later.
Change-Id: I7c0b2629785bfbaf3d0a06d8d81aa29c00168083
... and vice-versa. We'll fix up frag header values for our output
parameter from liberasurecode_get_fragment_metadata but otherwise
avoid manipulating the in-memory fragment much.
Change-Id: Idd6833bdea60e27c9a0148ee28b4a2c1070be148
Each was only really used in one place, they had some strange return types,
and recent versions of clang on OS X would refuse to compile with
erasurecode_helpers.c:531:26: error: taking address of packed member 'metadata_chksum' of
class or structure 'fragment_header_s' may result in an unaligned pointer value
[-Werror,-Waddress-of-packed-member]
return (uint32_t *) &header->metadata_chksum;
^~~~~~~~~~~~~~~~~~~~~~~
We don't really *care* about the pointer; we just want the value!
Change-Id: I8a5e42312948a75f5dd8b23b6f5ccfa7bd22eb1d
Previously, we had our own CRC that was almost but not quite like
zlib's implementation. However,
* it hasn't been subjected to the same rigor with regard to error-detection
properties and
* it may not even get used, depending upon whether zlib happens to get
loaded before or after liberasurecode.
Now, we'll use zlib's CRC-32 when writing new frags, while still
tolerating frags that were created with the old implementation.
Change-Id: Ib5ea2a830c7c23d66bf2ca404a3eb84ad00c5bc5
Closes-Bug: 1666320
The well-known idiom to compute a required number of data blocks
of size B to contain data of length d is:
(d + (B-1))/B
The code we use, with ceill(), computes the same value, but does it in
an unorthodox way. This makes a reviewer doubt themselves and even run
tests to make sure we're really computing the obvious thing.
Apropos the reviewer confusion, the code in Phazr.IO looks weird.
It uses (word_size - hamming_distance) to compute the necessary
number of blocks... but then returns the amount of memory needed
to store blocks of a different size (word_size). We left all of it
alone and return exactly the same values that the old computation
returned.
All these computations were the only thing in the code that used
-lm, so drop that too.
Coincidentally, this patch fixes the crash of distro-built packages of
liberasurecode (see Red Hat bug #1454543), but that is a side effect.
Expect a proper patch soon.
Change-Id: Ib297f6df304abf5ca8c27d3392b1107a525e0be0
Currently, there are several implementations of erasure codes available
within OpenStack Swift. Most, if not all, are based on the Reed-Solomon
coding algorithm.
Phazr.IO's Erasure Coding technology uses a patented algorithm that is
significantly more efficient and improves the speed of encoding,
decoding, and reconstruction. In addition, the Phazr.IO erasure code
uses a non-systematic algorithm which provides data protection at rest
and in transit without the need for encryption.
Please contact support@phazr.io for more info on our technology.
Change-Id: I4e40d02a8951e38409ad3c604c5dd6f050fa7ea0
This adds support for an ISA-L Cauchy-based matrix. The only difference
from isa_l_rs_vand is the matrix used in the encode/decode calculation.
As a known issue, the isa_l_rs_vand backend has constraints on which
combinations of available fragments can be decoded/reconstructed.
(See the related change for details.)
To avoid that constraint while keeping backward compatibility, this
patch adds another ISA-L backend that uses a Cauchy matrix, under a
separate isa_l_rs_cauchy namespace.
As an implementation consideration, the code is almost the same except
for the matrix generation function, so this patch creates an
isa_l_common.c file to gather common functions like
init/encode/decode/reconstruct. The common init function then takes an
extra argument, "gen_matrix_func_name", as the entry point to load the
function via dlsym from the ISA-L .so file.
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Related-Change: Icee788a0931fe692fe0de31fabc4ba450e338a87
Change-Id: I6eb150d9d0c3febf233570fa7729f9f72df2e9be
Currently, we have liberasurecode version info in a header, and pyeclib
uses that info to detect the version. However, that is a bit painful
because it requires rebuilding pyeclib's C code to see the actual
installed version.
Adding liberasurecode_get_version enables callers to get the version
integer from the compiled shared library file (.so) and spares pyeclib
from the recompile step.
Change-Id: I8161ea7da3b069e83c93e11cb41ce12fa60c6f32
Like any other caller, liberasurecode_get_fragment_size should handle
the return value of liberasurecode_get_backend_instance_by_desc.
Otherwise, get_by_desc can return NULL, causing an invalid memory
access in liberasurecode_get_fragment_size.
Change-Id: I489f8b5d049610863b5e0b477b6ff70ead245b55
Users of liberasurecode <= 1.0.7 used alloc/free helpers
(which they shouldn't have). This change makes sure programs
built against those older revs still work with newer
liberasurecode.