Adobe Source Libraries 1.43
Typedefs

template <std::size_t Bits>
using fnv_traits = typename detail::fnv_base_traits<detail::rollup(Bits)>

template <std::size_t Bits>
using fnvtype = typename fnv_traits<Bits>::value_type

Functions

template <std::size_t Bits, typename Iterator, typename Predicate>
fnvtype<Bits> fnv1a(Iterator first, Predicate p)

template <std::size_t Bits, typename Iterator>
fnvtype<Bits> fnv1a(Iterator first, Iterator last)

template <std::size_t Bits, typename Container>
fnvtype<Bits> fnv1a(Container c)
Implementation of the FNV (Fowler-Noll-Vo) class of hash algorithms. From the homepage (http://www.isthe.com/chongo/tech/comp/fnv/):
The FNV speed allows one to quickly hash lots of data while maintaining a reasonable collision rate. The high dispersion of the FNV hashes makes them well suited for hashing nearly identical strings such as URLs, hostnames, filenames, text, IP addresses, etc.
There are two implementations of FNV: FNV-1 and FNV-1a. The latter has a minor change over the former which results in better avalanche characteristics, reducing collisions at no cost to performance. As such FNV-1a is implemented here; FNV-1 is not.
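The difference between the two variants is only the order of the two operations in the inner loop: FNV-1 multiplies by the FNV prime and then xors the byte in, while FNV-1a xors first and then multiplies. A minimal self-contained sketch of the 64-bit FNV-1a loop, using hypothetical names outside the adobe:: API:

```cpp
#include <cstdint>
#include <string_view>

// Minimal 64-bit FNV-1a sketch (hypothetical helper, not the adobe:: API).
// FNV-1a ordering: xor the byte in first, then multiply by the FNV prime.
std::uint64_t fnv1a_64(std::string_view data) {
    std::uint64_t hash = 14695981039346656037ULL; // 64-bit FNV offset basis
    for (unsigned char byte : data) {
        hash ^= byte;             // FNV-1a: xor before multiply
        hash *= 1099511628211ULL; // 64-bit FNV prime
    }
    return hash;
}
```

The empty string hashes to the offset basis itself, which matches the published FNV-1a test vectors.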
According to the author (on the same website above) these algorithms are in the public domain:
FNV hash algorithms and source code have been released into the public domain. The authors of the FNV algorithmm [sic] took deliberate steps to disclose the algorhtm [sic] in a public forum soon after it was invented. More than a year passed after this public disclosure and the authors deliberatly [sic] took no steps to patent the FNV algorithm. Therefore it is safe to say that the FNV authors have no patent claims on the FNV algorithm as published.
All versions of the algorithm are available (32, 64, 128, 256, 512, and 1024 bits); however, only the 32- and 64-bit versions have no external dependencies. Beyond 64 bits the implementations leverage the Boost multiprecision library. If you would prefer to remove that dependency (and implicitly limit yourself to the 32- and 64-bit algorithms), define the ADOBE_FNV_NO_BIGINTS macro.
You can also specify a result of any bit size between 1 and 1024 (or 1 to 64 with ADOBE_FNV_NO_BIGINTS defined), and the algorithm will produce a truncated result. When computing a hash of a bit size other than those explicitly defined, the recommendation from the above website is to use xor_fold:
((value >> (sizeof(T) * 8 - Bits)) ^ value) & ((1 << Bits) - 1)
However, xor_fold is not really necessary given the excellent distribution of this class of algorithms; a straightforward mask produces bits that are as well distributed as those from xor_fold. So we mask.
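For illustration, the two truncation strategies for a 64-bit hash can be sketched as follows (hypothetical helpers, not library functions; Bits is assumed to satisfy 0 < Bits < 64 so the shifts are well defined):

```cpp
#include <cstddef>
#include <cstdint>

// xor_fold, as recommended on the FNV homepage: fold the discarded high
// bits down into the kept low bits, then mask to Bits bits.
template <std::size_t Bits>
std::uint64_t xor_fold(std::uint64_t value) {
    return ((value >> (64 - Bits)) ^ value) & ((std::uint64_t(1) << Bits) - 1);
}

// The straightforward alternative this library uses: just keep the low
// Bits bits of the full-width hash.
template <std::size_t Bits>
std::uint64_t mask(std::uint64_t value) {
    return value & ((std::uint64_t(1) << Bits) - 1);
}
```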
using fnv_traits = typename detail::fnv_base_traits<detail::rollup(Bits)>
Traits class used by adobe::fnv1a.
using fnvtype = typename fnv_traits<Bits>::value_type
Integral type used to return the result of a call to adobe::fnv1a.
The size of the type depends on the version of FNV called.
fnvtype<Bits> adobe::fnv1a(Iterator first, Predicate p)
fnvtype<Bits> adobe::fnv1a(Iterator first, Iterator last)