/*
 * Copyright (c) Yann Collet, Facebook, Inc.
 * All rights reserved.
 *
 * This source code is licensed under both the BSD-style license (found in the
 * LICENSE file in the root directory of this source tree) and the GPLv2 (found
 * in the COPYING file in the root directory of this source tree).
 * You may select, at your option, one of the above-listed licenses.
 */

#ifndef DICTBUILDER_H_001
#define DICTBUILDER_H_001

#if defined (__cplusplus)
extern "C" {
#endif


/*======  Dependencies  ======*/
#include <stddef.h>  /* size_t */


/* =====   ZDICTLIB_API : control library symbols visibility   ===== */
#ifndef ZDICTLIB_VISIBILITY
#  if defined(__GNUC__) && (__GNUC__ >= 4)
#    define ZDICTLIB_VISIBILITY __attribute__ ((visibility ("default")))
#  else
#    define ZDICTLIB_VISIBILITY
#  endif
#endif
#if defined(ZSTD_DLL_EXPORT) && (ZSTD_DLL_EXPORT==1)
#  define ZDICTLIB_API __declspec(dllexport) ZDICTLIB_VISIBILITY
#elif defined(ZSTD_DLL_IMPORT) && (ZSTD_DLL_IMPORT==1)
#  define ZDICTLIB_API __declspec(dllimport) ZDICTLIB_VISIBILITY /* It isn't required but allows generating better code, saving a function pointer load from the IAT and an indirect jump. */
#else
#  define ZDICTLIB_API ZDICTLIB_VISIBILITY
#endif

/*******************************************************************************
 * Zstd dictionary builder
 *
 * FAQ
 * ===
 * Why should I use a dictionary?
 * ------------------------------
 *
 * Zstd can use dictionaries to improve the compression ratio of small data.
 * Traditionally, small files don't compress well because there is very little
 * repetition within a single sample, since it is small. But if you are compressing
 * many similar files, like a bunch of JSON records that share the same
 * structure, you can train a dictionary ahead of time on some samples of
 * these files. Then, zstd can use the dictionary to find repetitions that are
 * present across samples. This can vastly improve the compression ratio.
 *
 * When is a dictionary useful?
 * ----------------------------
 *
 * Dictionaries are useful when compressing many small files that are similar.
 * The larger a file is, the less benefit a dictionary will have. Generally,
 * we don't expect dictionary compression to be effective past 100KB. And the
 * smaller a file is, the more we would expect the dictionary to help.
 *
 * How do I use a dictionary?
 * --------------------------
 *
 * Simply pass the dictionary to the zstd compressor with
 * `ZSTD_CCtx_loadDictionary()`. The same dictionary must then be passed to
 * the decompressor, using `ZSTD_DCtx_loadDictionary()`. There are other,
 * more advanced functions that allow selecting some options; see zstd.h for
 * complete documentation.
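 *
 * For illustration, here is a minimal sketch of compressing with a dictionary
 * that was loaded or trained beforehand. Error handling is mostly omitted and
 * the function and buffer names are placeholders, not part of any API:
 *
 *     #include <zstd.h>    // core zstd API; link against libzstd
 *
 *     size_t compressWithDict(void* dst, size_t dstCapacity,
 *                             const void* src, size_t srcSize,
 *                             const void* dict, size_t dictSize)
 *     {
 *         ZSTD_CCtx* const cctx = ZSTD_createCCtx();
 *         ZSTD_CCtx_loadDictionary(cctx, dict, dictSize);
 *         size_t const cSize = ZSTD_compress2(cctx, dst, dstCapacity, src, srcSize);
 *         ZSTD_freeCCtx(cctx);
 *         return cSize;    // an error code if ZSTD_isError(cSize) is non-zero
 *     }
 *
 * Decompression mirrors this sketch with ZSTD_createDCtx(),
 * ZSTD_DCtx_loadDictionary() and ZSTD_decompressDCtx(), using the exact same
 * dictionary bytes.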
 *
 * What is a zstd dictionary?
 * --------------------------
 *
 * A zstd dictionary has two pieces: its header and its content. The header
 * contains a magic number, the dictionary ID, and entropy tables. These
 * entropy tables allow zstd to save on header costs in the compressed file,
 * which really matters for small data. The content is just bytes: repeated
 * content that is common across many samples.
 *
 * What is a raw content dictionary?
 * ---------------------------------
 *
 * A raw content dictionary is just bytes. It doesn't have a zstd dictionary
 * header, a dictionary ID, or entropy tables. Any buffer is a valid raw
 * content dictionary.
 *
 * How do I train a dictionary?
 * ----------------------------
 *
 * Gather samples from your use case. These samples should be similar to each
 * other. If you have several use cases, you could try to train one dictionary
 * per use case.
 *
 * Pass those samples to `ZDICT_trainFromBuffer()` and that will train your
 * dictionary. There are a few advanced versions of this function, but this
 * is a great starting point. If you want to further tune your dictionary
 * you could try `ZDICT_optimizeTrainFromBuffer_cover()`. If that is too slow
 * you can try `ZDICT_optimizeTrainFromBuffer_fastCover()`.
 *
 * If the dictionary training function fails, that is likely because you
 * either passed too few samples, or a dictionary would not be effective
 * for your data. Look at the messages that the dictionary trainer printed:
 * if they don't say "too few samples", then a dictionary would not be effective.
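 *
 * As a sketch of the basic call (buffer management, sample loading and error
 * handling are elided; `samplesBuffer`, `samplesSizes` and `nbSamples` are
 * assumed to be already prepared as described for `ZDICT_trainFromBuffer()`
 * below):
 *
 *     #include <stdlib.h>   // malloc
 *     #include <zdict.h>
 *
 *     size_t const dictCapacity = 100 * 1024;   // ~100 KB is a reasonable target
 *     void* const dictBuffer = malloc(dictCapacity);
 *     size_t const dictSize = ZDICT_trainFromBuffer(dictBuffer, dictCapacity,
 *                                                   samplesBuffer, samplesSizes,
 *                                                   nbSamples);
 *     if (ZDICT_isError(dictSize)) {
 *         // training failed: see ZDICT_getErrorName(dictSize), and fall back
 *         // to compressing without a dictionary
 *     }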
 *
 * How large should my dictionary be?
 * ----------------------------------
 *
 * A reasonable dictionary size, the `dictBufferCapacity`, is about 100KB.
 * The zstd CLI defaults to a 110KB dictionary. You likely don't need a
 * dictionary larger than that. But, most use cases can get away with a
 * smaller dictionary. The advanced dictionary builders can automatically
 * shrink the dictionary for you, and select the smallest size that
 * doesn't hurt compression ratio too much. See the `shrinkDict` parameter.
 * A smaller dictionary can save memory, and potentially speed up
 * compression.
 *
 * How many samples should I provide to the dictionary builder?
 * ------------------------------------------------------------
 *
 * We generally recommend passing ~100x the size of the dictionary
 * in samples. A few thousand should suffice. Having too few samples
 * can hurt the dictionary's effectiveness. Having more samples will
 * only improve the dictionary's effectiveness. But having too many
 * samples can slow down the dictionary builder.
 *
 * How do I determine if a dictionary will be effective?
 * -----------------------------------------------------
 *
 * Simply train a dictionary and try it out. You can use zstd's built-in
 * benchmarking tool to test the dictionary's effectiveness.
 *
 *   # Benchmark levels 1-3 without a dictionary
 *   zstd -b1e3 -r /path/to/my/files
 *   # Benchmark levels 1-3 with a dictionary
 *   zstd -b1e3 -r /path/to/my/files -D /path/to/my/dictionary
 *
 * When should I retrain a dictionary?
 * -----------------------------------
 *
 * You should retrain a dictionary when its effectiveness drops. Dictionary
 * effectiveness drops as the data you are compressing changes. Generally, we do
 * expect dictionaries to "decay" over time, as your data changes, but the rate
 * at which they decay depends on your use case. Internally, we regularly
 * retrain dictionaries, and if the new dictionary performs significantly
 * better than the old dictionary, we will ship the new dictionary.
 *
 * I have a raw content dictionary, how do I turn it into a zstd dictionary?
 * -------------------------------------------------------------------------
 *
 * If you have a raw content dictionary, e.g. by manually constructing it, or
 * by using a third-party dictionary builder, you can turn it into a zstd
 * dictionary by using `ZDICT_finalizeDictionary()`. You'll also have to
 * provide some samples of the data. It will add the zstd header, which
 * contains a dictionary ID and entropy tables, to the raw content. This
 * improves compression ratio, and allows zstd to write the dictionary ID
 * into the frame, if you so choose.
 *
 * Do I have to use zstd's dictionary builder?
 * -------------------------------------------
 *
 * No! You can construct dictionary content however you please; it is just
 * bytes. It will always be valid as a raw content dictionary. If you want
 * a zstd dictionary, which can improve compression ratio, use
 * `ZDICT_finalizeDictionary()`.
 *
 * What is the attack surface of a zstd dictionary?
 * ------------------------------------------------
 *
 * Zstd is heavily fuzz-tested, including loading fuzzed dictionaries, so
 * zstd should never crash or access out-of-bounds memory, no matter what
 * the dictionary is. However, if an attacker can control the dictionary
 * during decompression, they can cause zstd to generate arbitrary bytes,
 * just like if they controlled the compressed data.
 *
 ******************************************************************************/


/*! ZDICT_trainFromBuffer():
 *  Train a dictionary from an array of samples.
 *  Redirects to ZDICT_optimizeTrainFromBuffer_fastCover(), single-threaded, with d=8, steps=4,
 *  f=20, and accel=1.
 *  Samples must be stored concatenated in a single flat buffer `samplesBuffer`,
 *  supplied with an array of sizes `samplesSizes`, providing the size of each sample, in order.
 *  The resulting dictionary will be saved into `dictBuffer`.
 * @return: size of dictionary stored into `dictBuffer` (<= `dictBufferCapacity`)
 *          or an error code, which can be tested with ZDICT_isError().
 *  Note:  Dictionary training will fail if there are not enough samples to construct a
 *         dictionary, or if most of the samples are too small (< 8 bytes being the lower limit).
 *         If dictionary training fails, you should use zstd without a dictionary, as the dictionary
 *         would've been ineffective anyways. If you believe your samples would benefit from a dictionary,
 *         please open an issue with details, and we can look into it.
 *  Note:  ZDICT_trainFromBuffer()'s memory usage is about 6 MB.
 *  Tips:  In general, a reasonable dictionary has a size of ~ 100 KB.
 *         It's possible to select a smaller or larger size, just by specifying `dictBufferCapacity`.
 *         In general, it's recommended to provide a few thousand samples, though this can vary a lot.
 *         It's recommended that the total size of all samples be about 100x the target size of the dictionary.
 */
ZDICTLIB_API size_t ZDICT_trainFromBuffer(void* dictBuffer, size_t dictBufferCapacity,
                                          const void* samplesBuffer,
                                          const size_t* samplesSizes, unsigned nbSamples);

typedef struct {
    int      compressionLevel;   /*< optimize for a specific zstd compression level; 0 means default */
    unsigned notificationLevel;  /*< Write log to stderr; 0 = none (default); 1 = errors; 2 = progression; 3 = details; 4 = debug; */
    unsigned dictID;             /*< force dictID value; 0 means auto mode (32-bit random value)
                                  *   NOTE: The zstd format reserves some dictionary IDs for future use.
                                  *         You may use them in private settings, but be warned that they
                                  *         may be used by zstd in a public dictionary registry in the future.
                                  *         These dictionary IDs are:
                                  *           - low range  : <= 32767
                                  *           - high range : >= (2^31)
                                  */
} ZDICT_params_t;

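/* For illustration only (not part of the API): a zero-initialized
 * ZDICT_params_t selects defaults for every field; set only what you need.
 * The values below are examples, not recommendations.
 *
 *     ZDICT_params_t zParams;
 *     memset(&zParams, 0, sizeof(zParams));   // all defaults (needs <string.h>)
 *     zParams.compressionLevel  = 3;          // tune for the level used in production
 *     zParams.notificationLevel = 1;          // report errors on stderr
 *     // zParams.dictID stays 0 => a random dictionary ID is chosen
 */
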
/*! ZDICT_finalizeDictionary():
 * Given custom content as a basis for the dictionary, and a set of samples,
 * finalize the dictionary by adding headers and statistics according to the zstd
 * dictionary format.
 *
 * Samples must be stored concatenated in a flat buffer `samplesBuffer`,
 * supplied with an array of sizes `samplesSizes`, providing the size of each
 * sample in order. The samples are used to construct the statistics, so they
 * should be representative of what you will compress with this dictionary.
 *
 * The compression level can be set in `parameters`. You should pass the
 * compression level you expect to use in production. The statistics for each
 * compression level differ, so tuning the dictionary for the compression level
 * can help quite a bit.
 *
 * You can set an explicit dictionary ID in `parameters`, or allow us to pick
 * a random dictionary ID for you, but we can't guarantee no collisions.
 *
 * The dstDictBuffer and the dictContent may overlap, and the content will be
 * appended to the end of the header. If the header + the content doesn't fit in
 * maxDictSize, the beginning of the content is truncated to make room, since it
 * is presumed that the most profitable content is at the end of the dictionary,
 * since that is the cheapest to reference.
 *
 * `maxDictSize` must be >= max(dictContentSize, ZSTD_DICTSIZE_MIN).
 *
 * @return: size of dictionary stored into `dstDictBuffer` (<= `maxDictSize`),
 *          or an error code, which can be tested by ZDICT_isError().
 * Note: ZDICT_finalizeDictionary() will push notifications into stderr if
 *       instructed to, using notificationLevel>0.
 * NOTE: This function currently may fail in several edge cases, including:
 *         * Not enough samples
 *         * Samples are uncompressible
 *         * Samples are all exactly the same
 */
ZDICTLIB_API size_t ZDICT_finalizeDictionary(void* dstDictBuffer, size_t maxDictSize,
                                             const void* dictContent, size_t dictContentSize,
                                             const void* samplesBuffer, const size_t* samplesSizes, unsigned nbSamples,
                                             ZDICT_params_t parameters);

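/* For illustration only: a sketch of turning a raw content dictionary
 * (`rawDict`/`rawDictSize`) into a full zstd dictionary. The sample buffers are
 * assumed to be prepared as for ZDICT_trainFromBuffer(); the extra headroom
 * value is an arbitrary illustration, not a recommendation.
 *
 *     ZDICT_params_t params;
 *     memset(&params, 0, sizeof(params));            // defaults, random dictID
 *     params.compressionLevel = 3;                   // the level used in production
 *
 *     size_t const maxDictSize = rawDictSize + 4096; // room for header + entropy tables
 *     void* const dict = malloc(maxDictSize);
 *     size_t const dictSize = ZDICT_finalizeDictionary(dict, maxDictSize,
 *                                 rawDict, rawDictSize,
 *                                 samplesBuffer, samplesSizes, nbSamples,
 *                                 params);
 *     if (ZDICT_isError(dictSize)) {
 *         // see ZDICT_getErrorName(dictSize)
 *     }
 */
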
/*======  Helper functions  ======*/
ZDICTLIB_API unsigned ZDICT_getDictID(const void* dictBuffer, size_t dictSize);  /**< extracts dictID; @return zero if error (not a valid dictionary) */
ZDICTLIB_API size_t ZDICT_getDictHeaderSize(const void* dictBuffer, size_t dictSize);  /* returns dict header size; returns a ZSTD error code on failure */
ZDICTLIB_API unsigned ZDICT_isError(size_t errorCode);
ZDICTLIB_API const char* ZDICT_getErrorName(size_t errorCode);



#ifdef ZDICT_STATIC_LINKING_ONLY

/* ====================================================================================
 * The definitions in this section are considered experimental.
 * They should never be used with a dynamic library, as they may change in the future.
 * They are provided for advanced usages.
 * Use them only in association with static linking.
 * ==================================================================================== */

#define ZDICT_DICTSIZE_MIN    256
/* Deprecated: Remove in v1.6.0 */
#define ZDICT_CONTENTSIZE_MIN 128

/*! ZDICT_cover_params_t:
 *  k and d are the only required parameters.
 *  For others, value 0 means default.
 */
typedef struct {
    unsigned k;                  /* Segment size : constraint: 0 < k : Reasonable range [16, 2048+] */
    unsigned d;                  /* dmer size : constraint: 0 < d <= k : Reasonable range [6, 16] */
    unsigned steps;              /* Number of steps : Only used for optimization : 0 means default (40) : Higher means more parameters checked */
    unsigned nbThreads;          /* Number of threads : constraint: 0 < nbThreads : 1 means single-threaded : Only used for optimization : Ignored if ZSTD_MULTITHREAD is not defined */
    double splitPoint;           /* Percentage of samples used for training: Only used for optimization : the first nbSamples * splitPoint samples will be used for training, the last nbSamples * (1 - splitPoint) samples will be used for testing, 0 means default (1.0), 1.0 when all samples are used for both training and testing */
    unsigned shrinkDict;         /* Train dictionaries to shrink in size starting from the minimum size and select the smallest dictionary that is shrinkDictMaxRegression% worse than the largest dictionary. 0 means no shrinking and 1 means shrinking */
    unsigned shrinkDictMaxRegression; /* Sets shrinkDictMaxRegression so that a smaller dictionary can be at worst shrinkDictMaxRegression% worse than the max dict size dictionary. */
    ZDICT_params_t zParams;
} ZDICT_cover_params_t;

typedef struct {
    unsigned k;                  /* Segment size : constraint: 0 < k : Reasonable range [16, 2048+] */
    unsigned d;                  /* dmer size : constraint: 0 < d <= k : Reasonable range [6, 16] */
    unsigned f;                  /* log of size of frequency array : constraint: 0 < f <= 31 : 1 means default(20) */
    unsigned steps;              /* Number of steps : Only used for optimization : 0 means default (40) : Higher means more parameters checked */
    unsigned nbThreads;          /* Number of threads : constraint: 0 < nbThreads : 1 means single-threaded : Only used for optimization : Ignored if ZSTD_MULTITHREAD is not defined */
    double splitPoint;           /* Percentage of samples used for training: Only used for optimization : the first nbSamples * splitPoint samples will be used for training, the last nbSamples * (1 - splitPoint) samples will be used for testing, 0 means default (0.75), 1.0 when all samples are used for both training and testing */
    unsigned accel;              /* Acceleration level: constraint: 0 < accel <= 10, higher means faster and less accurate, 0 means default(1) */
    unsigned shrinkDict;         /* Train dictionaries to shrink in size starting from the minimum size and select the smallest dictionary that is shrinkDictMaxRegression% worse than the largest dictionary. 0 means no shrinking and 1 means shrinking */
    unsigned shrinkDictMaxRegression; /* Sets shrinkDictMaxRegression so that a smaller dictionary can be at worst shrinkDictMaxRegression% worse than the max dict size dictionary. */

    ZDICT_params_t zParams;
} ZDICT_fastCover_params_t;

/*! ZDICT_trainFromBuffer_cover():
 *  Train a dictionary from an array of samples using the COVER algorithm.
 *  Samples must be stored concatenated in a single flat buffer `samplesBuffer`,
 *  supplied with an array of sizes `samplesSizes`, providing the size of each sample, in order.
 *  The resulting dictionary will be saved into `dictBuffer`.
 * @return: size of dictionary stored into `dictBuffer` (<= `dictBufferCapacity`)
 *          or an error code, which can be tested with ZDICT_isError().
 *          See ZDICT_trainFromBuffer() for details on failure modes.
 *  Note: ZDICT_trainFromBuffer_cover() requires about 9 bytes of memory for each input byte.
 *  Tips: In general, a reasonable dictionary has a size of ~ 100 KB.
 *        It's possible to select a smaller or larger size, just by specifying `dictBufferCapacity`.
 *        In general, it's recommended to provide a few thousand samples, though this can vary a lot.
 *        It's recommended that the total size of all samples be about 100x the target size of the dictionary.
 */
ZDICTLIB_API size_t ZDICT_trainFromBuffer_cover(
          void *dictBuffer, size_t dictBufferCapacity,
          const void *samplesBuffer, const size_t *samplesSizes, unsigned nbSamples,
          ZDICT_cover_params_t parameters);

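/* For illustration only: training with explicit COVER parameters. This requires
 * ZDICT_STATIC_LINKING_ONLY to be defined before including this header; the
 * parameter values and buffer names below are illustrative, not recommendations.
 *
 *     ZDICT_cover_params_t cparams;
 *     memset(&cparams, 0, sizeof(cparams));   // unset fields keep their defaults
 *     cparams.k = 1024;                       // segment size (required)
 *     cparams.d = 8;                          // dmer size (required)
 *     cparams.zParams.compressionLevel = 3;
 *     size_t const dictSize = ZDICT_trainFromBuffer_cover(dictBuffer, dictCapacity,
 *                                 samplesBuffer, samplesSizes, nbSamples,
 *                                 cparams);
 */
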
/*! ZDICT_optimizeTrainFromBuffer_cover():
 * The same requirements as above hold for all the parameters except `parameters`.
 * This function tries many parameter combinations and picks the best parameters.
 * `*parameters` is filled with the best parameters found,
 * and the dictionary constructed with those parameters is stored in `dictBuffer`.
 *
 * All of the parameters d, k, steps are optional.
 * If d is non-zero then we don't check multiple values of d, otherwise we check d = {6, 8}.
 * If steps is zero, it defaults to its default value.
 * If k is non-zero then we don't check multiple values of k, otherwise we check `steps` values of k in [50, 2000].
 *
 * @return: size of dictionary stored into `dictBuffer` (<= `dictBufferCapacity`)
 *          or an error code, which can be tested with ZDICT_isError().
 *          On success `*parameters` contains the parameters selected.
 *          See ZDICT_trainFromBuffer() for details on failure modes.
 * Note: ZDICT_optimizeTrainFromBuffer_cover() requires about 8 bytes of memory for each input byte,
 *       plus an additional 5 bytes of memory per input byte for each thread.
 */
ZDICTLIB_API size_t ZDICT_optimizeTrainFromBuffer_cover(
          void* dictBuffer, size_t dictBufferCapacity,
          const void* samplesBuffer, const size_t* samplesSizes, unsigned nbSamples,
          ZDICT_cover_params_t* parameters);

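/* For illustration only: letting the optimizer search k and d. Leaving k and d
 * at 0 makes the function try multiple values, and the selected parameters are
 * written back into `cparams` on success. Values and names are illustrative.
 *
 *     ZDICT_cover_params_t cparams;
 *     memset(&cparams, 0, sizeof(cparams));
 *     cparams.steps = 40;         // 0 would also mean the default of 40
 *     cparams.nbThreads = 4;      // ignored unless built with ZSTD_MULTITHREAD
 *     size_t const dictSize = ZDICT_optimizeTrainFromBuffer_cover(dictBuffer, dictCapacity,
 *                                 samplesBuffer, samplesSizes, nbSamples, &cparams);
 *     if (!ZDICT_isError(dictSize)) {
 *         // cparams.k and cparams.d now hold the selected parameters
 *     }
 */
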
/*! ZDICT_trainFromBuffer_fastCover():
 *  Train a dictionary from an array of samples using a modified version of the COVER algorithm.
 *  Samples must be stored concatenated in a single flat buffer `samplesBuffer`,
 *  supplied with an array of sizes `samplesSizes`, providing the size of each sample, in order.
 *  d and k are required.
 *  All other parameters are optional and will use default values if not provided.
 *  The resulting dictionary will be saved into `dictBuffer`.
 * @return: size of dictionary stored into `dictBuffer` (<= `dictBufferCapacity`)
 *          or an error code, which can be tested with ZDICT_isError().
 *          See ZDICT_trainFromBuffer() for details on failure modes.
 *  Note: ZDICT_trainFromBuffer_fastCover() requires 6 * 2^f bytes of memory.
 *  Tips: In general, a reasonable dictionary has a size of ~ 100 KB.
 *        It's possible to select a smaller or larger size, just by specifying `dictBufferCapacity`.
 *        In general, it's recommended to provide a few thousand samples, though this can vary a lot.
 *        It's recommended that the total size of all samples be about 100x the target size of the dictionary.
 */
ZDICTLIB_API size_t ZDICT_trainFromBuffer_fastCover(void *dictBuffer,
                    size_t dictBufferCapacity, const void *samplesBuffer,
                    const size_t *samplesSizes, unsigned nbSamples,
                    ZDICT_fastCover_params_t parameters);

/*! ZDICT_optimizeTrainFromBuffer_fastCover():
 * The same requirements as above hold for all the parameters except `parameters`.
 * This function tries many parameter combinations (specifically, k and d combinations)
 * and picks the best parameters. `*parameters` is filled with the best parameters found,
 * and the dictionary constructed with those parameters is stored in `dictBuffer`.
 * All of the parameters d, k, steps, f, and accel are optional.
 * If d is non-zero then we don't check multiple values of d, otherwise we check d = {6, 8}.
 * If steps is zero, it defaults to its default value.
 * If k is non-zero then we don't check multiple values of k, otherwise we check `steps` values of k in [50, 2000].
 * If f is zero, the default value of 20 is used.
 * If accel is zero, the default value of 1 is used.
 *
 * @return: size of dictionary stored into `dictBuffer` (<= `dictBufferCapacity`)
 *          or an error code, which can be tested with ZDICT_isError().
 *          On success `*parameters` contains the parameters selected.
 *          See ZDICT_trainFromBuffer() for details on failure modes.
 * Note: ZDICT_optimizeTrainFromBuffer_fastCover() requires about 6 * 2^f bytes of memory for each thread.
 */
ZDICTLIB_API size_t ZDICT_optimizeTrainFromBuffer_fastCover(void* dictBuffer,
                    size_t dictBufferCapacity, const void* samplesBuffer,
                    const size_t* samplesSizes, unsigned nbSamples,
                    ZDICT_fastCover_params_t* parameters);

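/* For illustration only: the fastCover optimizer is the usual starting point
 * when the full COVER search is too slow, and is what ZDICT_trainFromBuffer()
 * uses internally. Values and buffer names below are illustrative.
 *
 *     ZDICT_fastCover_params_t fparams;
 *     memset(&fparams, 0, sizeof(fparams));
 *     fparams.f = 20;        // frequency array uses about 6 * 2^f bytes per thread
 *     fparams.accel = 1;     // slowest but most accurate acceleration level
 *     size_t const dictSize = ZDICT_optimizeTrainFromBuffer_fastCover(dictBuffer, dictCapacity,
 *                                 samplesBuffer, samplesSizes, nbSamples, &fparams);
 *     // on success, fparams.k and fparams.d hold the selected parameters
 */
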
typedef struct {
    unsigned selectivityLevel;   /* 0 means default; larger => select more => larger dictionary */
    ZDICT_params_t zParams;
} ZDICT_legacy_params_t;

/*! ZDICT_trainFromBuffer_legacy():
 *  Train a dictionary from an array of samples.
 *  Samples must be stored concatenated in a single flat buffer `samplesBuffer`,
 *  supplied with an array of sizes `samplesSizes`, providing the size of each sample, in order.
 *  The resulting dictionary will be saved into `dictBuffer`.
 * `parameters` is optional and can be provided with values set to 0 to mean "default".
 * @return: size of dictionary stored into `dictBuffer` (<= `dictBufferCapacity`)
 *          or an error code, which can be tested with ZDICT_isError().
 *          See ZDICT_trainFromBuffer() for details on failure modes.
 *  Tips: In general, a reasonable dictionary has a size of ~ 100 KB.
 *        It's possible to select a smaller or larger size, just by specifying `dictBufferCapacity`.
 *        In general, it's recommended to provide a few thousand samples, though this can vary a lot.
 *        It's recommended that the total size of all samples be about 100x the target size of the dictionary.
 *  Note: ZDICT_trainFromBuffer_legacy() will send notifications into stderr if instructed to, using notificationLevel>0.
 */
ZDICTLIB_API size_t ZDICT_trainFromBuffer_legacy(
    void* dictBuffer, size_t dictBufferCapacity,
    const void* samplesBuffer, const size_t* samplesSizes, unsigned nbSamples,
    ZDICT_legacy_params_t parameters);


/* Deprecation warnings */
/* It is generally possible to disable deprecation warnings from the compiler,
   for example with -Wno-deprecated-declarations for gcc
   or _CRT_SECURE_NO_WARNINGS in Visual.
   Otherwise, it's also possible to manually define ZDICT_DISABLE_DEPRECATE_WARNINGS. */
#ifdef ZDICT_DISABLE_DEPRECATE_WARNINGS
#  define ZDICT_DEPRECATED(message) ZDICTLIB_API  /* disable deprecation warnings */
#else
#  define ZDICT_GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__)
#  if defined (__cplusplus) && (__cplusplus >= 201402) /* C++14 or greater */
#    define ZDICT_DEPRECATED(message) [[deprecated(message)]] ZDICTLIB_API
#  elif defined(__clang__) || (ZDICT_GCC_VERSION >= 405)
#    define ZDICT_DEPRECATED(message) ZDICTLIB_API __attribute__((deprecated(message)))
#  elif (ZDICT_GCC_VERSION >= 301)
#    define ZDICT_DEPRECATED(message) ZDICTLIB_API __attribute__((deprecated))
#  elif defined(_MSC_VER)
#    define ZDICT_DEPRECATED(message) ZDICTLIB_API __declspec(deprecated(message))
#  else
#    pragma message("WARNING: You need to implement ZDICT_DEPRECATED for this compiler")
#    define ZDICT_DEPRECATED(message) ZDICTLIB_API
#  endif
#endif /* ZDICT_DISABLE_DEPRECATE_WARNINGS */

ZDICT_DEPRECATED("use ZDICT_finalizeDictionary() instead")
size_t ZDICT_addEntropyTablesFromBuffer(void* dictBuffer, size_t dictContentSize, size_t dictBufferCapacity,
                                        const void* samplesBuffer, const size_t* samplesSizes, unsigned nbSamples);


#endif   /* ZDICT_STATIC_LINKING_ONLY */

#if defined (__cplusplus)
}
#endif

#endif   /* DICTBUILDER_H_001 */