Path: blob/master/dep/ffmpeg/include/libavformat/avformat.h
/*
 * copyright (c) 2001 Fabrice Bellard
 *
 * This file is part of FFmpeg.
 *
 * FFmpeg is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * FFmpeg is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with FFmpeg; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 */

#ifndef AVFORMAT_AVFORMAT_H
#define AVFORMAT_AVFORMAT_H

/**
 * @file
 * @ingroup libavf
 * Main libavformat public API header
 */

/**
 * @defgroup libavf libavformat
 * I/O and Muxing/Demuxing Library
 *
 * Libavformat (lavf) is a library for dealing with various media container
 * formats. Its main two purposes are demuxing - i.e. splitting a media file
 * into component streams, and the reverse process of muxing - writing supplied
 * data in a specified container format. It also has an @ref lavf_io
 * "I/O module" which supports a number of protocols for accessing the data (e.g.
 * file, tcp, http and others).
 * Unless you are absolutely sure you won't use libavformat's network
 * capabilities, you should also call avformat_network_init().
 *
 * A supported input format is described by an AVInputFormat struct, conversely
 * an output format is described by AVOutputFormat. You can iterate over all
 * input/output formats using the av_demuxer_iterate / av_muxer_iterate() functions.
 * The protocols layer is not part of the public API, so you can only get the names
 * of supported protocols with the avio_enum_protocols() function.
 *
 * Main lavf structure used for both muxing and demuxing is AVFormatContext,
 * which exports all information about the file being read or written. As with
 * most Libavformat structures, its size is not part of public ABI, so it cannot be
 * allocated on stack or directly with av_malloc(). To create an
 * AVFormatContext, use avformat_alloc_context() (some functions, like
 * avformat_open_input() might do that for you).
 *
 * Most importantly an AVFormatContext contains:
 * @li the @ref AVFormatContext.iformat "input" or @ref AVFormatContext.oformat
 * "output" format. It is either autodetected or set by user for input;
 * always set by user for output.
 * @li an @ref AVFormatContext.streams "array" of AVStreams, which describe all
 * elementary streams stored in the file. AVStreams are typically referred to
 * using their index in this array.
 * @li an @ref AVFormatContext.pb "I/O context". It is either opened by lavf or
 * set by user for input, always set by user for output (unless you are dealing
 * with an AVFMT_NOFILE format).
 *
 * @section lavf_options Passing options to (de)muxers
 * It is possible to configure lavf muxers and demuxers using the @ref avoptions
 * mechanism. Generic (format-independent) libavformat options are provided by
 * AVFormatContext, they can be examined from a user program by calling
 * av_opt_next() / av_opt_find() on an allocated AVFormatContext (or its AVClass
 * from avformat_get_class()). Private (format-specific) options are provided by
 * AVFormatContext.priv_data if and only if AVInputFormat.priv_class /
 * AVOutputFormat.priv_class of the corresponding format struct is non-NULL.
 * Further options may be provided by the @ref AVFormatContext.pb "I/O context",
 * if its AVClass is non-NULL, and the protocols layer. See the discussion on
 * nesting in @ref avoptions documentation to learn how to access those.
 *
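 * As a minimal illustrative sketch (error handling omitted), a generic option
 * such as "fflags" can either be set directly on an allocated context through
 * the AVOptions API, or passed to avformat_open_input() in an AVDictionary:
 * @code
 * AVFormatContext *ctx = avformat_alloc_context();
 * // set a generic, AVFormatContext-level option directly
 * av_opt_set(ctx, "fflags", "nobuffer", 0);
 *
 * // or pass the same option as a dictionary entry when opening
 * AVDictionary *opts = NULL;
 * av_dict_set(&opts, "fflags", "nobuffer", 0);
 * if (avformat_open_input(&ctx, "file:in.mp3", NULL, &opts) < 0)
 *     abort();
 * av_dict_free(&opts);
 * @endcode
 *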
 * @section urls
 * URL strings in libavformat are made of a scheme/protocol, a ':', and a
 * scheme specific string. URLs without a scheme and ':' used for local files
 * are supported but deprecated. "file:" should be used for local files.
 *
 * It is important that the scheme string is not taken from untrusted
 * sources without checks.
 *
 * Note that some schemes/protocols are quite powerful, allowing access to
 * both local and remote files, parts of them, concatenations of them, local
 * audio and video devices and so on.
 *
 * @{
 *
 * @defgroup lavf_decoding Demuxing
 * @{
 * Demuxers read a media file and split it into chunks of data (@em packets). A
 * @ref AVPacket "packet" contains one or more encoded frames which belong to a
 * single elementary stream. In the lavf API this process is represented by the
 * avformat_open_input() function for opening a file, av_read_frame() for
 * reading a single packet and finally avformat_close_input(), which does the
 * cleanup.
 *
 * @section lavf_decoding_open Opening a media file
 * The minimum information required to open a file is its URL, which
 * is passed to avformat_open_input(), as in the following code:
 * @code
 * const char *url = "file:in.mp3";
 * AVFormatContext *s = NULL;
 * int ret = avformat_open_input(&s, url, NULL, NULL);
 * if (ret < 0)
 *     abort();
 * @endcode
 * The above code attempts to allocate an AVFormatContext, open the
 * specified file (autodetecting the format) and read the header, exporting the
 * information stored there into s. Some formats do not have a header or do not
 * store enough information there, so it is recommended that you call the
 * avformat_find_stream_info() function which tries to read and decode a few
 * frames to find missing information.
 *
 * In some cases you might want to preallocate an AVFormatContext yourself with
 * avformat_alloc_context() and do some tweaking on it before passing it to
 * avformat_open_input(). One such case is when you want to use custom functions
 * for reading input data instead of lavf internal I/O layer.
 * To do that, create your own AVIOContext with avio_alloc_context(), passing
 * your reading callbacks to it. Then set the @em pb field of your
 * AVFormatContext to the newly created AVIOContext.
 *
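 * As a rough sketch of that setup (read_callback and opaque are
 * application-supplied placeholders, and error handling is omitted):
 * @code
 * unsigned char *avio_buf = av_malloc(4096);
 * AVIOContext *avio_ctx = avio_alloc_context(avio_buf, 4096, 0, opaque,
 *                                            read_callback, NULL, NULL);
 * AVFormatContext *s = avformat_alloc_context();
 * s->pb = avio_ctx;
 * if (avformat_open_input(&s, NULL, NULL, NULL) < 0)
 *     abort();
 * @endcode
 *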
 * Since the format of the opened file is in general not known until after
 * avformat_open_input() has returned, it is not possible to set demuxer private
 * options on a preallocated context. Instead, the options should be passed to
 * avformat_open_input() wrapped in an AVDictionary:
 * @code
 * AVDictionary *options = NULL;
 * av_dict_set(&options, "video_size", "640x480", 0);
 * av_dict_set(&options, "pixel_format", "rgb24", 0);
 *
 * if (avformat_open_input(&s, url, NULL, &options) < 0)
 *     abort();
 * av_dict_free(&options);
 * @endcode
 * This code passes the private options 'video_size' and 'pixel_format' to the
 * demuxer. They would be necessary for e.g. the rawvideo demuxer, since it
 * cannot know how to interpret raw video data otherwise. If the format turns
 * out to be something different than raw video, those options will not be
 * recognized by the demuxer and therefore will not be applied. Such unrecognized
 * options are then returned in the options dictionary (recognized options are
 * consumed). The calling program can handle such unrecognized options as it
 * wishes, e.g.
 * @code
 * AVDictionaryEntry *e;
 * if (e = av_dict_get(options, "", NULL, AV_DICT_IGNORE_SUFFIX)) {
 *     fprintf(stderr, "Option %s not recognized by the demuxer.\n", e->key);
 *     abort();
 * }
 * @endcode
 *
 * After you have finished reading the file, you must close it with
 * avformat_close_input(). It will free everything associated with the file.
 *
 * @section lavf_decoding_read Reading from an opened file
 * Reading data from an opened AVFormatContext is done by repeatedly calling
 * av_read_frame() on it. Each call, if successful, will return an AVPacket
 * containing encoded data for one AVStream, identified by
 * AVPacket.stream_index. This packet may be passed straight into the libavcodec
 * decoding functions avcodec_send_packet() or avcodec_decode_subtitle2() if the
 * caller wishes to decode the data.
 *
 * AVPacket.pts, AVPacket.dts and AVPacket.duration timing information will be
 * set if known. They may also be unset (i.e. AV_NOPTS_VALUE for
 * pts/dts, 0 for duration) if the stream does not provide them. The timing
 * information will be in AVStream.time_base units, i.e. it has to be
 * multiplied by the timebase to convert it to seconds.
 *
 * A packet returned by av_read_frame() is always reference-counted,
 * i.e. AVPacket.buf is set and the user may keep it indefinitely.
 * The packet must be freed with av_packet_unref() when it is no
 * longer needed.
 *
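 * As a simplified sketch of such a read loop (decode() stands in for whatever
 * the application does with each packet):
 * @code
 * AVPacket *pkt = av_packet_alloc();
 * if (!pkt)
 *     abort();
 * while (av_read_frame(s, pkt) >= 0) {
 *     // pkt->stream_index identifies which AVStream the packet belongs to
 *     decode(s->streams[pkt->stream_index], pkt);
 *     av_packet_unref(pkt);
 * }
 * av_packet_free(&pkt);
 * avformat_close_input(&s);
 * @endcode
 *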
 * @section lavf_decoding_seek Seeking
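 * Seeking within an opened file is done with av_seek_frame() or, when more
 * control over the accepted timestamp range is needed, avformat_seek_file().
 * A rough sketch (assuming stream_index refers to a valid stream and ignoring
 * error handling) of seeking to the 60 second mark could look like:
 * @code
 * int64_t ts = av_rescale_q(60 * AV_TIME_BASE, AV_TIME_BASE_Q,
 *                           s->streams[stream_index]->time_base);
 * if (av_seek_frame(s, stream_index, ts, AVSEEK_FLAG_BACKWARD) < 0)
 *     fprintf(stderr, "Seeking failed.\n");
 * @endcode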
 * @}
 *
 * @defgroup lavf_encoding Muxing
 * @{
 * Muxers take encoded data in the form of @ref AVPacket "AVPackets" and write
 * it into files or other output bytestreams in the specified container format.
 *
 * The main API functions for muxing are avformat_write_header() for writing the
 * file header, av_write_frame() / av_interleaved_write_frame() for writing the
 * packets and av_write_trailer() for finalizing the file.
 *
 * At the beginning of the muxing process, the caller must first call
 * avformat_alloc_context() to create a muxing context. The caller then sets up
 * the muxer by filling the various fields in this context:
 *
 * - The @ref AVFormatContext.oformat "oformat" field must be set to select the
 *   muxer that will be used.
 * - Unless the format is of the AVFMT_NOFILE type, the @ref AVFormatContext.pb
 *   "pb" field must be set to an opened IO context, either returned from
 *   avio_open2() or a custom one.
 * - Unless the format is of the AVFMT_NOSTREAMS type, at least one stream must
 *   be created with the avformat_new_stream() function. The caller should fill
 *   the @ref AVStream.codecpar "stream codec parameters" information, such as the
 *   codec @ref AVCodecParameters.codec_type "type", @ref AVCodecParameters.codec_id
 *   "id" and other parameters (e.g. width / height, the pixel or sample format,
 *   etc.) as known. The @ref AVStream.time_base "stream timebase" should
 *   be set to the timebase that the caller desires to use for this stream (note
 *   that the timebase actually used by the muxer can be different, as will be
 *   described later).
 * - It is advised to manually initialize only the relevant fields in
 *   AVCodecParameters, rather than using @ref avcodec_parameters_copy() during
 *   remuxing: there is no guarantee that the codec context values remain valid
 *   for both input and output format contexts.
 * - The caller may fill in additional information, such as @ref
 *   AVFormatContext.metadata "global" or @ref AVStream.metadata "per-stream"
 *   metadata, @ref AVFormatContext.chapters "chapters", @ref
 *   AVFormatContext.programs "programs", etc. as described in the
 *   AVFormatContext documentation. Whether such information will actually be
 *   stored in the output depends on what the container format and the muxer
 *   support.
 *
 * When the muxing context is fully set up, the caller must call
 * avformat_write_header() to initialize the muxer internals and write the file
 * header. Whether anything actually is written to the IO context at this step
 * depends on the muxer, but this function must always be called. Any muxer
 * private options must be passed in the options parameter to this function.
 *
 * The data is then sent to the muxer by repeatedly calling av_write_frame() or
 * av_interleaved_write_frame() (consult those functions' documentation for
 * discussion on the difference between them; only one of them may be used with
 * a single muxing context, they should not be mixed). Do note that the timing
 * information on the packets sent to the muxer must be in the corresponding
 * AVStream's timebase. That timebase is set by the muxer (in the
 * avformat_write_header() step) and may be different from the timebase
 * requested by the caller.
 *
 * Once all the data has been written, the caller must call av_write_trailer()
 * to flush any buffered packets and finalize the output file, then close the IO
 * context (if any) and finally free the muxing context with
 * avformat_free_context().
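 *
 * A heavily simplified sketch of that sequence (a single stream, with
 * get_next_packet() and src_time_base standing in for the application's
 * packet source, and all error handling omitted):
 * @code
 * AVFormatContext *oc = NULL;
 * avformat_alloc_output_context2(&oc, NULL, NULL, "out.mkv");
 * AVStream *st = avformat_new_stream(oc, NULL);
 * // ... fill st->codecpar and st->time_base here ...
 * avio_open2(&oc->pb, "out.mkv", AVIO_FLAG_WRITE, NULL, NULL);
 * avformat_write_header(oc, NULL);
 *
 * AVPacket *pkt;
 * while ((pkt = get_next_packet())) {
 *     // rescale timestamps into the timebase chosen by the muxer
 *     av_packet_rescale_ts(pkt, src_time_base, st->time_base);
 *     pkt->stream_index = st->index;
 *     av_interleaved_write_frame(oc, pkt);
 *     av_packet_free(&pkt);
 * }
 *
 * av_write_trailer(oc);
 * avio_closep(&oc->pb);
 * avformat_free_context(oc);
 * @endcode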
 * @}
 *
 * @defgroup lavf_io I/O Read/Write
 * @{
 * @section lavf_io_dirlist Directory listing
 * The directory listing API makes it possible to list files on remote servers.
 *
 * Some possible use cases:
 * - an "open file" dialog to choose files from a remote location,
 * - a recursive media finder providing a player with an ability to play all
 *   files from a given directory.
 *
 * @subsection lavf_io_dirlist_open Opening a directory
 * At first, a directory needs to be opened by calling avio_open_dir()
 * supplied with a URL and, optionally, ::AVDictionary containing
 * protocol-specific parameters. The function returns zero or a positive
 * integer and allocates AVIODirContext on success.
 *
 * @code
 * AVIODirContext *ctx = NULL;
 * if (avio_open_dir(&ctx, "smb://example.com/some_dir", NULL) < 0) {
 *     fprintf(stderr, "Cannot open directory.\n");
 *     abort();
 * }
 * @endcode
 *
 * This code tries to open a sample directory using the smb protocol without
 * any additional parameters.
 *
 * @subsection lavf_io_dirlist_read Reading entries
 * Each directory's entry (i.e. file, another directory, anything else
 * within ::AVIODirEntryType) is represented by AVIODirEntry.
 * Reading consecutive entries from an opened AVIODirContext is done by
 * repeatedly calling avio_read_dir() on it. Each call returns zero or a
 * positive integer if successful. Reading can be stopped right after the
 * NULL entry has been read -- it means there are no entries left to be
 * read. The following code reads all entries from a directory associated
 * with ctx and prints their names to standard output.
 * @code
 * AVIODirEntry *entry = NULL;
 * for (;;) {
 *     if (avio_read_dir(ctx, &entry) < 0) {
 *         fprintf(stderr, "Cannot list directory.\n");
 *         abort();
 *     }
 *     if (!entry)
 *         break;
 *     printf("%s\n", entry->name);
 *     avio_free_directory_entry(&entry);
 * }
 * @endcode
 * @}
 *
 * @defgroup lavf_codec Demuxers
 * @{
 * @defgroup lavf_codec_native Native Demuxers
 * @{
 * @}
 * @defgroup lavf_codec_wrappers External library wrappers
 * @{
 * @}
 * @}
 * @defgroup lavf_protos I/O Protocols
 * @{
 * @}
 * @defgroup lavf_internal Internal
 * @{
 * @}
 * @}
 */

#include <stdio.h>  /* FILE */

#include "libavcodec/codec_par.h"
#include "libavcodec/defs.h"
#include "libavcodec/packet.h"

#include "libavutil/dict.h"
#include "libavutil/log.h"

#include "avio.h"
#include "libavformat/version_major.h"
#ifndef HAVE_AV_CONFIG_H
/* When included as part of the ffmpeg build, only include the major version
 * to avoid unnecessary rebuilds. When included externally, keep including
 * the full version information. */
#include "libavformat/version.h"

#include "libavutil/frame.h"
#include "libavcodec/codec.h"
#endif

struct AVFormatContext;
struct AVFrame;

/**
 * @defgroup metadata_api Public Metadata API
 * @{
 * @ingroup libavf
 * The metadata API allows libavformat to export metadata tags to a client
 * application when demuxing. Conversely it allows a client application to
 * set metadata when muxing.
 *
 * Metadata is exported or set as pairs of key/value strings in the 'metadata'
 * fields of the AVFormatContext, AVStream, AVChapter and AVProgram structs
 * using the @ref lavu_dict "AVDictionary" API. Like all strings in FFmpeg,
 * metadata is assumed to be UTF-8 encoded Unicode. Note that metadata
 * exported by demuxers isn't checked to be valid UTF-8 in most cases.
 *
 * Important concepts to keep in mind:
 * - Keys are unique; there can never be 2 tags with the same key. This is
 *   also meant semantically, i.e., a demuxer should not knowingly produce
 *   several keys that are literally different but semantically identical.
 *   E.g., key=Author5, key=Author6. In this example, all authors must be
 *   placed in the same tag.
 * - Metadata is flat, not hierarchical; there are no subtags. If you
 *   want to store, e.g., the email address of the child of producer Alice
 *   and actor Bob, that could have key=alice_and_bobs_childs_email_address.
 * - Several modifiers can be applied to the tag name. This is done by
 *   appending a dash character ('-') and the modifier name in the order
 *   they appear in the list below -- e.g. foo-eng-sort, not foo-sort-eng.
 *   - language -- a tag whose value is localized for a particular language
 *     is appended with the ISO 639-2/B 3-letter language code.
 *     For example: Author-ger=Michael, Author-eng=Mike
 *     The original/default language is in the unqualified "Author" tag.
 *     A demuxer should set a default if it sets any translated tag.
 *   - sorting -- a modified version of a tag that should be used for
 *     sorting will have '-sort' appended. E.g. artist="The Beatles",
 *     artist-sort="Beatles, The".
 * - Some protocols and demuxers support metadata updates. After a successful
 *   call to av_read_frame(), AVFormatContext.event_flags or AVStream.event_flags
 *   will be updated to indicate if metadata changed. In order to detect metadata
 *   changes on a stream, you need to loop through all streams in the AVFormatContext
 *   and check their individual event_flags.
 *
 * - Demuxers attempt to export metadata in a generic format, however tags
 *   with no generic equivalents are left as they are stored in the container.
 *   A list of generic tag names follows:
 *
 @verbatim
 album        -- name of the set this work belongs to
 album_artist -- main creator of the set/album, if different from artist.
                 e.g. "Various Artists" for compilation albums.
 artist       -- main creator of the work
 comment      -- any additional description of the file.
 composer     -- who composed the work, if different from artist.
 copyright    -- name of copyright holder.
 creation_time-- date when the file was created, preferably in ISO 8601.
 date         -- date when the work was created, preferably in ISO 8601.
 disc         -- number of a subset, e.g. disc in a multi-disc collection.
 encoder      -- name/settings of the software/hardware that produced the file.
 encoded_by   -- person/group who created the file.
 filename     -- original name of the file.
 genre        -- <self-evident>.
 language     -- main language in which the work is performed, preferably
                 in ISO 639-2 format. Multiple languages can be specified by
                 separating them with commas.
 performer    -- artist who performed the work, if different from artist.
                 E.g for "Also sprach Zarathustra", artist would be "Richard
                 Strauss" and performer "London Philharmonic Orchestra".
 publisher    -- name of the label/publisher.
 service_name     -- name of the service in broadcasting (channel name).
 service_provider -- name of the service provider in broadcasting.
 title        -- name of the work.
 track        -- number of this work in the set, can be in form current/total.
 variant_bitrate -- the total bitrate of the bitrate variant that the current stream is part of
 @endverbatim
 *
 * Look in the examples section for an application example of how to use the Metadata API.
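 *
 * As a short sketch, all global metadata of an opened AVFormatContext s can be
 * dumped by iterating its dictionary (the same pattern applies to
 * AVStream.metadata, AVChapter.metadata and AVProgram.metadata):
 * @code
 * const AVDictionaryEntry *tag = NULL;
 * while ((tag = av_dict_get(s->metadata, "", tag, AV_DICT_IGNORE_SUFFIX)))
 *     printf("%s=%s\n", tag->key, tag->value);
 * @endcode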
 *
 * @}
 */

/* packet functions */


/**
 * Allocate and read the payload of a packet and initialize its
 * fields with default values.
 *
 * @param s    associated IO context
 * @param pkt  packet
 * @param size desired payload size
 * @return >0 (read size) if OK, AVERROR_xxx otherwise
 */
int av_get_packet(AVIOContext *s, AVPacket *pkt, int size);


/**
 * Read data and append it to the current content of the AVPacket.
 * If pkt->size is 0 this is identical to av_get_packet.
 * Note that this uses av_grow_packet and thus involves a realloc
 * which is inefficient. Thus this function should only be used
 * when there is no reasonable way to know (an upper bound of)
 * the final size.
 *
 * @param s    associated IO context
 * @param pkt  packet
 * @param size amount of data to read
 * @return >0 (read size) if OK, AVERROR_xxx otherwise, previous data
 *         will not be lost even if an error occurs.
 */
int av_append_packet(AVIOContext *s, AVPacket *pkt, int size);

/*************************************************/
/* input/output formats */

struct AVCodecTag;

/**
 * This structure contains the data a format has to probe a file.
 */
typedef struct AVProbeData {
    const char *filename;
    unsigned char *buf;    /**< Buffer must have AVPROBE_PADDING_SIZE of extra allocated bytes filled with zero. */
    int buf_size;          /**< Size of buf except extra allocated bytes */
    const char *mime_type; /**< mime_type, when known. */
} AVProbeData;

#define AVPROBE_SCORE_RETRY        (AVPROBE_SCORE_MAX/4)
#define AVPROBE_SCORE_STREAM_RETRY (AVPROBE_SCORE_MAX/4-1)

#define AVPROBE_SCORE_EXTENSION  50 ///< score for file extension
#define AVPROBE_SCORE_MIME       75 ///< score for file mime type
#define AVPROBE_SCORE_MAX       100 ///< maximum score

#define AVPROBE_PADDING_SIZE 32 ///< extra allocated bytes at the end of the probe buffer

/// Demuxer will use avio_open, no opened file should be provided by the caller.
#define AVFMT_NOFILE        0x0001
#define AVFMT_NEEDNUMBER    0x0002 /**< Needs '%d' in filename. */
/**
 * The muxer/demuxer is experimental and should be used with caution.
 *
 * - demuxers: will not be selected automatically by probing, must be specified
 *   explicitly.
 */
#define AVFMT_EXPERIMENTAL  0x0004
#define AVFMT_SHOW_IDS      0x0008 /**< Show format stream IDs numbers. */
#define AVFMT_GLOBALHEADER  0x0040 /**< Format wants global header. */
#define AVFMT_NOTIMESTAMPS  0x0080 /**< Format does not need / have any timestamps. */
#define AVFMT_GENERIC_INDEX 0x0100 /**< Use generic index building code. */
#define AVFMT_TS_DISCONT    0x0200 /**< Format allows timestamp discontinuities. Note, muxers always require valid (monotone) timestamps */
#define AVFMT_VARIABLE_FPS  0x0400 /**< Format allows variable fps. */
#define AVFMT_NODIMENSIONS  0x0800 /**< Format does not need width/height */
#define AVFMT_NOSTREAMS     0x1000 /**< Format does not require any streams */
#define AVFMT_NOBINSEARCH   0x2000 /**< Format does not allow falling back on binary search via read_timestamp */
#define AVFMT_NOGENSEARCH   0x4000 /**< Format does not allow falling back on generic search */
#define AVFMT_NO_BYTE_SEEK  0x8000 /**< Format does not allow seeking by bytes */
#if FF_API_ALLOW_FLUSH
#define AVFMT_ALLOW_FLUSH  0x10000 /**< @deprecated: Just send a NULL packet if you want to flush a muxer. */
#endif
#define AVFMT_TS_NONSTRICT 0x20000 /**< Format does not require strictly
                                        increasing timestamps, but they must
                                        still be monotonic */
#define AVFMT_TS_NEGATIVE  0x40000 /**< Format allows muxing negative
                                        timestamps. If not set the timestamp
                                        will be shifted in av_write_frame and
                                        av_interleaved_write_frame so they
                                        start from 0.
                                        The user or muxer can override this through
                                        AVFormatContext.avoid_negative_ts
                                        */

#define AVFMT_SEEK_TO_PTS   0x4000000 /**< Seeking is based on PTS */

/**
 * @addtogroup lavf_encoding
 * @{
 */
typedef struct AVOutputFormat {
    const char *name;
    /**
     * Descriptive name for the format, meant to be more human-readable
     * than name. You should use the NULL_IF_CONFIG_SMALL() macro
     * to define it.
     */
    const char *long_name;
    const char *mime_type;
    const char *extensions; /**< comma-separated filename extensions */
    /* output support */
    enum AVCodecID audio_codec;    /**< default audio codec */
    enum AVCodecID video_codec;    /**< default video codec */
    enum AVCodecID subtitle_codec; /**< default subtitle codec */
    /**
     * can use flags: AVFMT_NOFILE, AVFMT_NEEDNUMBER,
     * AVFMT_GLOBALHEADER, AVFMT_NOTIMESTAMPS, AVFMT_VARIABLE_FPS,
     * AVFMT_NODIMENSIONS, AVFMT_NOSTREAMS,
     * AVFMT_TS_NONSTRICT, AVFMT_TS_NEGATIVE
     */
    int flags;

    /**
     * List of supported codec_id-codec_tag pairs, ordered by "better
     * choice first". The arrays are all terminated by AV_CODEC_ID_NONE.
     */
    const struct AVCodecTag * const *codec_tag;


    const AVClass *priv_class; ///< AVClass for the private context
} AVOutputFormat;
/**
 * @}
 */

/**
 * @addtogroup lavf_decoding
 * @{
 */
typedef struct AVInputFormat {
    /**
     * A comma separated list of short names for the format. New names
     * may be appended with a minor bump.
     */
    const char *name;

    /**
     * Descriptive name for the format, meant to be more human-readable
     * than name. You should use the NULL_IF_CONFIG_SMALL() macro
     * to define it.
     */
    const char *long_name;

    /**
     * Can use flags: AVFMT_NOFILE, AVFMT_NEEDNUMBER, AVFMT_SHOW_IDS,
     * AVFMT_NOTIMESTAMPS, AVFMT_GENERIC_INDEX, AVFMT_TS_DISCONT, AVFMT_NOBINSEARCH,
     * AVFMT_NOGENSEARCH, AVFMT_NO_BYTE_SEEK, AVFMT_SEEK_TO_PTS.
     */
    int flags;

    /**
     * If extensions are defined, then no probe is done. You should
     * usually not use extension format guessing because it is not
     * reliable enough
     */
    const char *extensions;

    const struct AVCodecTag * const *codec_tag;

    const AVClass *priv_class; ///< AVClass for the private context

    /**
     * Comma-separated list of mime types.
     * It is used to check for matching mime types while probing.
     * @see av_probe_input_format2
     */
    const char *mime_type;
} AVInputFormat;
/**
 * @}
 */

enum AVStreamParseType {
    AVSTREAM_PARSE_NONE,
    AVSTREAM_PARSE_FULL,       /**< full parsing and repack */
    AVSTREAM_PARSE_HEADERS,    /**< Only parse headers, do not repack. */
    AVSTREAM_PARSE_TIMESTAMPS, /**< full parsing and interpolation of timestamps for frames not starting on a packet boundary */
    AVSTREAM_PARSE_FULL_ONCE,  /**< full parsing and repack of the first frame only, only implemented for H.264 currently */
    AVSTREAM_PARSE_FULL_RAW,   /**< full parsing and repack with timestamp and position generation by parser for raw
                                    this assumes that each packet in the file contains no demuxer level headers and
                                    just codec level data, otherwise position generation would fail */
};

typedef struct AVIndexEntry {
    int64_t pos;
    int64_t timestamp;   /**<
                          * Timestamp in AVStream.time_base units, preferably the time from which on correctly decoded frames are available
                          * when seeking to this entry. That means preferable PTS on keyframe based formats.
                          * But demuxers can choose to store a different timestamp, if it is more convenient for the implementation or nothing better
                          * is known
                          */
#define AVINDEX_KEYFRAME 0x0001
#define AVINDEX_DISCARD_FRAME  0x0002    /**
                                          * Flag is used to indicate which frame should be discarded after decoding.
                                          */
    int flags:2;
    int size:30; //Yeah, trying to keep the size of this small to reduce memory requirements (it is 24 vs. 32 bytes due to possible 8-byte alignment).
    int min_distance;    /**< Minimum distance between this and the previous keyframe, used to avoid unneeded searching. */
} AVIndexEntry;

/**
 * The stream should be chosen by default among other streams of the same type,
 * unless the user has explicitly specified otherwise.
 */
#define AV_DISPOSITION_DEFAULT              (1 << 0)
/**
 * The stream is not in original language.
 *
 * @note AV_DISPOSITION_ORIGINAL is the inverse of this disposition. At most
 * one of them should be set in properly tagged streams.
 * @note This disposition may apply to any stream type, not just audio.
 */
#define AV_DISPOSITION_DUB                  (1 << 1)
/**
 * The stream is in original language.
 *
 * @see the notes for AV_DISPOSITION_DUB
 */
#define AV_DISPOSITION_ORIGINAL             (1 << 2)
/**
 * The stream is a commentary track.
 */
#define AV_DISPOSITION_COMMENT              (1 << 3)
/**
 * The stream contains song lyrics.
 */
#define AV_DISPOSITION_LYRICS               (1 << 4)
/**
 * The stream contains karaoke audio.
 */
#define AV_DISPOSITION_KARAOKE              (1 << 5)

/**
 * Track should be used during playback by default.
 * Useful for a subtitle track that should be displayed
 * even when the user did not explicitly ask for subtitles.
 */
#define AV_DISPOSITION_FORCED               (1 << 6)
/**
 * The stream is intended for hearing impaired audiences.
 */
#define AV_DISPOSITION_HEARING_IMPAIRED     (1 << 7)
/**
 * The stream is intended for visually impaired audiences.
 */
#define AV_DISPOSITION_VISUAL_IMPAIRED      (1 << 8)
/**
 * The audio stream contains music and sound effects without voice.
 */
#define AV_DISPOSITION_CLEAN_EFFECTS        (1 << 9)
/**
 * The stream is stored in the file as an attached picture/"cover art" (e.g.
 * APIC frame in ID3v2). The first (usually only) packet associated with it
 * will be returned among the first few packets read from the file unless
 * seeking takes place. It can also be accessed at any time in
 * AVStream.attached_pic.
 */
#define AV_DISPOSITION_ATTACHED_PIC         (1 << 10)
/**
 * The stream is sparse, and contains thumbnail images, often corresponding
 * to chapter markers. Only ever used with AV_DISPOSITION_ATTACHED_PIC.
 */
#define AV_DISPOSITION_TIMED_THUMBNAILS     (1 << 11)

/**
 * The stream is intended to be mixed with a spatial audio track. For example,
 * it could be used for narration or stereo music, and may remain unchanged by
 * listener head rotation.
 */
#define AV_DISPOSITION_NON_DIEGETIC         (1 << 12)

/**
 * The subtitle stream contains captions, providing a transcription and possibly
 * a translation of audio. Typically intended for hearing-impaired audiences.
 */
#define AV_DISPOSITION_CAPTIONS             (1 << 16)
/**
 * The subtitle stream contains a textual description of the video content.
 * Typically intended for visually-impaired audiences or for the cases where the
 * video cannot be seen.
 */
#define AV_DISPOSITION_DESCRIPTIONS         (1 << 17)
/**
 * The subtitle stream contains time-aligned metadata that is not intended to be
 * directly presented to the user.
 */
#define AV_DISPOSITION_METADATA             (1 << 18)
/**
 * The stream is intended to be mixed with another stream before presentation.
 * Used for example to signal the stream contains an image part of a HEIF grid,
 * or for mix_type=0 in mpegts.
 */
#define AV_DISPOSITION_DEPENDENT            (1 << 19)
/**
 * The video stream contains still images.
 */
#define AV_DISPOSITION_STILL_IMAGE          (1 << 20)
/**
 * The video stream contains multiple layers, e.g. stereoscopic views (cf. H.264
 * Annex G/H, or HEVC Annex F).
 */
#define AV_DISPOSITION_MULTILAYER           (1 << 21)

/**
 * @return The AV_DISPOSITION_* flag corresponding to disp or a negative error
 *         code if disp does not correspond to a known stream disposition.
 */
int av_disposition_from_string(const char *disp);

/**
 * @param disposition a combination of AV_DISPOSITION_* values
 * @return The string description corresponding to the lowest set bit in
 *         disposition. NULL when the lowest set bit does not correspond
 *         to a known disposition or when disposition is 0.
 */
const char *av_disposition_to_string(int disposition);

/**
 * Options for behavior on timestamp wrap detection.
 */
#define AV_PTS_WRAP_IGNORE      0   ///< ignore the wrap
#define AV_PTS_WRAP_ADD_OFFSET  1   ///< add the format specific offset on wrap detection
#define AV_PTS_WRAP_SUB_OFFSET  -1  ///< subtract the format specific offset on wrap detection

/**
 * Stream structure.
 * New fields can be added to the end with minor version bumps.
 * Removal, reordering and changes to existing fields require a major
 * version bump.
 * sizeof(AVStream) must not be used outside libav*.
 */
typedef struct AVStream {
    /**
     * A class for @ref avoptions. Set on stream creation.
     */
    const AVClass *av_class;

    int index;    /**< stream index in AVFormatContext */
    /**
     * Format-specific stream ID.
     * decoding: set by libavformat
     * encoding: set by the user, replaced by libavformat if left unset
     */
    int id;

    /**
     * Codec parameters associated with this stream. Allocated and freed by
     * libavformat in avformat_new_stream() and avformat_free_context()
     * respectively.
     *
     * - demuxing: filled by libavformat on stream creation or in
     *             avformat_find_stream_info()
     * - muxing: filled by the caller before avformat_write_header()
     */
    AVCodecParameters *codecpar;

    void *priv_data;

    /**
     * This is the fundamental unit of time (in seconds) in terms
     * of which frame timestamps are represented.
     *
     * decoding: set by libavformat
     * encoding: May be set by the caller before avformat_write_header() to
     *           provide a hint to the muxer about the desired timebase. In
     *           avformat_write_header(), the muxer will overwrite this field
     *           with the timebase that will actually be used for the timestamps
     *           written into the file (which may or may not be related to the
     *           user-provided one, depending on the format).
     */
    AVRational time_base;

    /**
     * Decoding: pts of the first frame of the stream in presentation order, in stream time base.
     * Only set this if you are absolutely 100% sure that the value you set
     * it to really is the pts of the first frame.
     * This may be undefined (AV_NOPTS_VALUE).
     * @note The ASF header does NOT contain a correct start_time; the ASF
     * demuxer must NOT set this.
     */
    int64_t start_time;

    /**
     * Decoding: duration of the stream, in stream time base.
     * If a source file does not specify a duration, but does specify
     * a bitrate, this value will be estimated from bitrate and file size.
     *
     * Encoding: May be set by the caller before avformat_write_header() to
     * provide a hint to the muxer about the estimated duration.
     */
    int64_t duration;

    int64_t nb_frames;                 ///< number of frames in this stream if known or 0

    /**
     * Stream disposition - a combination of AV_DISPOSITION_* flags.
     * - demuxing: set by libavformat when creating the stream or in
     *             avformat_find_stream_info().
     * - muxing: may be set by the caller before avformat_write_header().
     */
    int disposition;

    enum AVDiscard discard; ///< Selects which packets can be discarded at will and do not need to be demuxed.

    /**
     * sample aspect ratio (0 if unknown)
     * - encoding: Set by user.
     * - decoding: Set by libavformat.
     */
    AVRational sample_aspect_ratio;

    AVDictionary *metadata;

    /**
     * Average framerate
     *
     * - demuxing: May be set by libavformat when creating the stream or in
     *             avformat_find_stream_info().
     * - muxing: May be set by the caller before avformat_write_header().
     */
    AVRational avg_frame_rate;

    /**
     * For streams with AV_DISPOSITION_ATTACHED_PIC disposition, this packet
     * will contain the attached picture.
     *
     * decoding: set by libavformat, must not be modified by the caller.
     * encoding: unused
     */
    AVPacket attached_pic;

#if FF_API_AVSTREAM_SIDE_DATA
    /**
     * An array of side data that applies to the whole stream (i.e. the
     * container does not allow it to change between packets).
     *
     * There may be no overlap between the side data in this array and side data
     * in the packets. I.e. a given side data is either exported by the muxer
     * (demuxing) / set by the caller (muxing) in this array, then it never
     * appears in the packets, or the side data is exported / sent through
     * the packets (always in the first packet where the value becomes known or
     * changes), then it does not appear in this array.
     *
     * - demuxing: Set by libavformat when the stream is created.
     * - muxing: May be set by the caller before avformat_write_header().
     *
     * Freed by libavformat in avformat_free_context().
     *
     * @deprecated use AVStream's @ref AVCodecParameters.coded_side_data
     *             "codecpar side data".
     */
    attribute_deprecated
    AVPacketSideData *side_data;
    /**
     * The number of elements in the AVStream.side_data array.
     *
     * @deprecated use AVStream's @ref AVCodecParameters.nb_coded_side_data
     *             "codecpar side data".
     */
    attribute_deprecated
    int nb_side_data;
#endif

    /**
     * Flags indicating events happening on the stream, a combination of
     * AVSTREAM_EVENT_FLAG_*.
     *
     * - demuxing: may be set by the demuxer in avformat_open_input(),
     *   avformat_find_stream_info() and av_read_frame(). Flags must be cleared
     *   by the user once the event has been handled.
     * - muxing: may be set by the user after avformat_write_header(), to
     *   indicate a user-triggered event. The muxer will clear the flags for
     *   events it has handled in av_[interleaved]_write_frame().
     */
    int event_flags;
/**
 * - demuxing: the demuxer read new metadata from the file and updated
 *   AVStream.metadata accordingly
 * - muxing: the user updated AVStream.metadata and wishes the muxer to write
 *   it into the file
 */
#define AVSTREAM_EVENT_FLAG_METADATA_UPDATED 0x0001
/**
 * - demuxing: new packets for this stream were read from the file. This
 *   event is informational only and does not guarantee that new packets
 *   for this stream will necessarily be returned from av_read_frame().
 */
#define AVSTREAM_EVENT_FLAG_NEW_PACKETS (1 << 1)

    /**
     * Real base framerate of the stream.
     * This is the lowest framerate with which all timestamps can be
     * represented accurately (it is the least common multiple of all
     * framerates in the stream). Note, this value is just a guess!
     * For example, if the time base is 1/90000 and all frames have either
     * approximately 3600 or 1800 timer ticks, then r_frame_rate will be 50/1.
     */
    AVRational r_frame_rate;

    /**
     * Number of bits in timestamps. Used for wrapping control.
     *
     * - demuxing: set by libavformat
     * - muxing: set by libavformat
     *
     */
    int pts_wrap_bits;
} AVStream;

/**
 * AVStreamGroupTileGrid holds information on how to combine several
 * independent images on a single canvas for presentation.
 *
 * The output should be a @ref AVStreamGroupTileGrid.background "background"
 * colored @ref AVStreamGroupTileGrid.coded_width "coded_width" x
 * @ref AVStreamGroupTileGrid.coded_height "coded_height" canvas where a
 * @ref AVStreamGroupTileGrid.nb_tiles "nb_tiles" amount of tiles are placed in
 * the order they appear in the @ref AVStreamGroupTileGrid.offsets "offsets"
 * array, at the exact offset described for them. In particular, if two or more
 * tiles overlap, the image with higher index in the
 * @ref AVStreamGroupTileGrid.offsets "offsets" array takes priority.
 * Note that a single image may be used multiple times, i.e. multiple entries
 * in @ref AVStreamGroupTileGrid.offsets "offsets" may have the same value of
 * idx.
 *
 * The following is an example of a simple grid with 3 rows and 4 columns:
 *
 * +---+---+---+---+
 * | 0 | 1 | 2 | 3 |
 * +---+---+---+---+
 * | 4 | 5 | 6 | 7 |
 * +---+---+---+---+
 * | 8 | 9 |10 |11 |
 * +---+---+---+---+
 *
 * Assuming all tiles have a dimension of 512x512, the
 * @ref AVStreamGroupTileGrid.offsets "offset" of the topleft pixel of
 * the first @ref AVStreamGroup.streams "stream" in the group is "0,0", the
 * @ref AVStreamGroupTileGrid.offsets "offset" of the topleft pixel of
 * the second @ref AVStreamGroup.streams "stream" in the group is "512,0", the
 * @ref AVStreamGroupTileGrid.offsets "offset" of the topleft pixel of
 * the fifth @ref AVStreamGroup.streams "stream" in the group is "0,512", the
 * @ref AVStreamGroupTileGrid.offsets "offset" of the topleft pixel of
 * the sixth @ref AVStreamGroup.streams "stream" in the group is "512,512",
 * etc.
 *
 * The following is an example of a canvas with overlapping tiles:
 *
 * +-----------+
 * |   %%%%%   |
 * |***%%3%%@@@|
 * |**0%%%%%2@@|
 * |***##1@@@@@|
 * |   #####   |
 * +-----------+
 *
 * Assuming a canvas with size 1024x1024 and all tiles with a dimension of
 * 512x512, a possible @ref AVStreamGroupTileGrid.offsets "offset" for the
 * topleft pixel of the first @ref AVStreamGroup.streams "stream" in the group
 * would be 0x256, the @ref AVStreamGroupTileGrid.offsets "offset" for the
 * topleft pixel of the second @ref AVStreamGroup.streams "stream" in the group
 * would be 256x512, the @ref AVStreamGroupTileGrid.offsets "offset" for the
 * topleft pixel of the third @ref AVStreamGroup.streams "stream" in the group
 * would be 512x256, and the @ref AVStreamGroupTileGrid.offsets "offset" for
 * the topleft pixel of the fourth @ref AVStreamGroup.streams "stream" in the
 * group would be 256x0.
 *
 * sizeof(AVStreamGroupTileGrid) is not a part of the ABI and may only be
 * allocated by avformat_stream_group_create().
 */
typedef struct AVStreamGroupTileGrid {
    const AVClass *av_class;

    /**
     * Amount of tiles in the grid.
     *
     * Must be > 0.
     */
    unsigned int nb_tiles;

    /**
     * Width of the canvas.
     *
     * Must be > 0.
     */
    int coded_width;
    /**
     * Height of the canvas.
     *
     * Must be > 0.
     */
    int coded_height;

    /**
     * An @ref nb_tiles sized array of offsets in pixels from the topleft edge
     * of the canvas, indicating where each stream should be placed.
     * It must be allocated with the av_malloc() family of functions.
     *
     * - demuxing: set by libavformat, must not be modified by the caller.
     * - muxing: set by the caller before avformat_write_header().
     *
     * Freed by libavformat in avformat_free_context().
     */
    struct {
        /**
         * Index of the stream in the group this tile references.
         *
         * Must be < @ref AVStreamGroup.nb_streams "nb_streams".
         */
        unsigned int idx;
        /**
         * Offset in pixels from the left edge of the canvas where the tile
         * should be placed.
         */
        int horizontal;
        /**
         * Offset in pixels from the top edge of the canvas where the tile
         * should be placed.
         */
        int vertical;
    } *offsets;

    /**
     * The pixel value per channel in RGBA format used if no pixel of any tile
     * is located at a particular pixel location.
     *
     * @see av_image_fill_color().
     * @see av_parse_color().
     */
    uint8_t background[4];

    /**
     * Offset in pixels from the left edge of the canvas where the actual image
     * meant for presentation starts.
     *
     * This field must be >= 0 and < @ref coded_width.
     */
    int horizontal_offset;
    /**
     * Offset in pixels from the top edge of the canvas where the actual image
     * meant for presentation starts.
     *
     * This field must be >= 0 and < @ref coded_height.
     */
    int vertical_offset;

    /**
     * Width of the final image for presentation.
     *
     * Must be > 0 and <= (@ref coded_width - @ref horizontal_offset).
     * When it's not equal to (@ref coded_width - @ref horizontal_offset), the
     * result of (@ref coded_width - width - @ref horizontal_offset) is the
     * amount of pixels to be cropped from the right edge of the
     * final image before presentation.
     */
    int width;
    /**
     * Height of the final image for presentation.
     *
     * Must be > 0 and <= (@ref coded_height - @ref vertical_offset).
     * When it's not equal to (@ref coded_height - @ref vertical_offset), the
     * result of (@ref coded_height - height - @ref vertical_offset) is the
     * amount of pixels to be cropped from the bottom edge of the
     * final image before presentation.
     */
    int height;
} AVStreamGroupTileGrid;

/**
 * AVStreamGroupLCEVC is meant to define the relation between video streams
 * and a data stream containing LCEVC enhancement layer NALUs.
 *
 * No more than one stream of @ref AVCodecParameters.codec_type "codec_type"
 * AVMEDIA_TYPE_DATA shall be present, and it must be of
 * @ref AVCodecParameters.codec_id "codec_id" AV_CODEC_ID_LCEVC.
 */
typedef struct AVStreamGroupLCEVC {
    const AVClass *av_class;

    /**
     * Index of the LCEVC data stream in AVStreamGroup.
     */
    unsigned int lcevc_index;
    /**
     * Width of the final stream for presentation.
     */
    int width;
    /**
     * Height of the final image for presentation.
     */
    int height;
} AVStreamGroupLCEVC;

enum AVStreamGroupParamsType {
    AV_STREAM_GROUP_PARAMS_NONE,
    AV_STREAM_GROUP_PARAMS_IAMF_AUDIO_ELEMENT,
    AV_STREAM_GROUP_PARAMS_IAMF_MIX_PRESENTATION,
    AV_STREAM_GROUP_PARAMS_TILE_GRID,
    AV_STREAM_GROUP_PARAMS_LCEVC,
};

struct AVIAMFAudioElement;
struct AVIAMFMixPresentation;

typedef struct AVStreamGroup {
    /**
     * A class for @ref avoptions. Set by avformat_stream_group_create().
     */
    const AVClass *av_class;

    void *priv_data;

    /**
     * Group index in AVFormatContext.
     */
    unsigned int index;

    /**
     * Group type-specific group ID.
     *
     * decoding: set by libavformat
     * encoding: may be set by the user
     */
    int64_t id;

    /**
     * Group type
     *
     * decoding: set by libavformat on group creation
     * encoding: set by avformat_stream_group_create()
     */
    enum AVStreamGroupParamsType type;

    /**
     * Group type-specific parameters
     */
    union {
        struct AVIAMFAudioElement *iamf_audio_element;
        struct AVIAMFMixPresentation *iamf_mix_presentation;
        struct AVStreamGroupTileGrid *tile_grid;
        struct AVStreamGroupLCEVC *lcevc;
    } params;

    /**
     * Metadata that applies to the whole group.
     *
     * - demuxing: set by libavformat on group creation
     * - muxing: may be set by the caller before avformat_write_header()
     *
     * Freed by libavformat in avformat_free_context().
     */
    AVDictionary *metadata;

    /**
     * Number of elements in AVStreamGroup.streams.
     *
     * Set by avformat_stream_group_add_stream(), must not be modified by any other code.
     */
    unsigned int nb_streams;

    /**
     * A list of streams in the group. New entries are created with
     * avformat_stream_group_add_stream().
     *
     * - demuxing: entries are created by libavformat on group creation.
     *             If AVFMTCTX_NOHEADER is set in ctx_flags, then new entries may also
     *             appear in av_read_frame().
     * - muxing: entries are created by the user before avformat_write_header().
     *
     * Freed by libavformat in avformat_free_context().
     */
    AVStream **streams;

    /**
     * Stream group disposition - a combination of AV_DISPOSITION_* flags.
     * This field currently applies to all defined AVStreamGroupParamsType.
     *
     * - demuxing: set by libavformat when creating the group or in
     *             avformat_find_stream_info().
     * - muxing: may be set by the caller before avformat_write_header().
     */
    int disposition;
} AVStreamGroup;

struct AVCodecParserContext *av_stream_get_parser(const AVStream *s);

#define AV_PROGRAM_RUNNING 1

/**
 * New fields can be added to the end with minor version bumps.
 * Removal, reordering and changes to existing fields require a major
 * version bump.
 * sizeof(AVProgram) must not be used outside libav*.
 */
typedef struct AVProgram {
    int            id;
    int            flags;
    enum AVDiscard discard;        ///< selects which program to discard and which to feed to the caller
    unsigned int   *stream_index;
    unsigned int   nb_stream_indexes;
    AVDictionary *metadata;

    int program_num;
    int pmt_pid;
    int pcr_pid;
    int pmt_version;

    /*****************************************************************
     * All fields below this line are not part of the public API. They
     * may not be used outside of libavformat and can be changed and
     * removed at will.
     * New public fields should be added right above.
     *****************************************************************
     */
    int64_t start_time;
    int64_t end_time;

    int64_t pts_wrap_reference;    ///< reference dts for wrap detection
    int pts_wrap_behavior;         ///< behavior on wrap detection
} AVProgram;

#define AVFMTCTX_NOHEADER      0x0001 /**< signal that no header is present
                                           (streams are added dynamically) */
#define AVFMTCTX_UNSEEKABLE    0x0002 /**< signal that the stream is definitely
                                           not seekable, and attempts to call the
                                           seek function will fail. For some
                                           network protocols (e.g. HLS), this can
                                           change dynamically at runtime. */

typedef struct AVChapter {
    int64_t id;             ///< unique ID to identify the chapter
    AVRational time_base;   ///< time base in which the start/end timestamps are specified
    int64_t start, end;     ///< chapter start/end time in time_base units
    AVDictionary *metadata;
} AVChapter;


/**
 * Callback used by devices to communicate with the application.
 */
typedef int (*av_format_control_message)(struct AVFormatContext *s, int type,
                                         void *data, size_t data_size);

typedef int (*AVOpenCallback)(struct AVFormatContext *s, AVIOContext **pb, const char *url, int flags,
                              const AVIOInterruptCB *int_cb, AVDictionary **options);

/**
 * The duration of a video can be estimated in various ways, and this enum can be used
 * to know how the duration was estimated.
 */
enum AVDurationEstimationMethod {
    AVFMT_DURATION_FROM_PTS,    ///< Duration accurately estimated from PTSes
    AVFMT_DURATION_FROM_STREAM, ///< Duration estimated from a stream with a known duration
    AVFMT_DURATION_FROM_BITRATE ///< Duration estimated from bitrate (less accurate)
};

/**
 * Format I/O context.
 * New fields can be added to the end with minor version bumps.
 * Removal, reordering and changes to existing fields require a major
 * version bump.
 * sizeof(AVFormatContext) must not be used outside libav*, use
 * avformat_alloc_context() to create an AVFormatContext.
 *
 * Fields can be accessed through AVOptions (av_opt*),
 * the name string used matches the associated command line parameter name and
 * can be found in libavformat/options_table.h.
 * The AVOption/command line parameter names differ in some cases from the C
 * structure field names for historic reasons or brevity.
 */
typedef struct AVFormatContext {
    /**
     * A class for logging and @ref avoptions. Set by avformat_alloc_context().
     * Exports (de)muxer private options if they exist.
     */
    const AVClass *av_class;

    /**
     * The input container format.
     *
     * Demuxing only, set by avformat_open_input().
     */
    const struct AVInputFormat *iformat;

    /**
     * The output container format.
     *
     * Muxing only, must be set by the caller before avformat_write_header().
     */
    const struct AVOutputFormat *oformat;

    /**
     * Format private data. This is an AVOptions-enabled struct
     * if and only if iformat/oformat.priv_class is not NULL.
     *
     * - muxing: set by avformat_write_header()
     * - demuxing: set by avformat_open_input()
     */
    void *priv_data;

    /**
     * I/O context.
     *
     * - demuxing: either set by the user before avformat_open_input() (then
     *             the user must close it manually) or set by avformat_open_input().
     * - muxing: set by the user before avformat_write_header(). The caller must
     *           take care of closing / freeing the IO context.
     *
     * Do NOT set this field if AVFMT_NOFILE flag is set in
     * iformat/oformat.flags. In such a case, the (de)muxer will handle
     * I/O in some other way and this field will be NULL.
     */
    AVIOContext *pb;

    /* stream info */
    /**
     * Flags signalling stream properties. A combination of AVFMTCTX_*.
     * Set by libavformat.
     */
    int ctx_flags;

    /**
     * Number of elements in AVFormatContext.streams.
     *
     * Set by avformat_new_stream(), must not be modified by any other code.
     */
    unsigned int nb_streams;
    /**
     * A list of all streams in the file. New streams are created with
     * avformat_new_stream().
     *
     * - demuxing: streams are created by libavformat in avformat_open_input().
     *             If AVFMTCTX_NOHEADER is set in ctx_flags, then new streams may also
     *             appear in av_read_frame().
     * - muxing: streams are created by the user before avformat_write_header().
     *
     * Freed by libavformat in avformat_free_context().
     */
    AVStream **streams;

    /**
     * Number of elements in AVFormatContext.stream_groups.
     *
     * Set by avformat_stream_group_create(), must not be modified by any other code.
     */
    unsigned int nb_stream_groups;
    /**
     * A list of all stream groups in the file. New groups are created with
     * avformat_stream_group_create(), and filled with avformat_stream_group_add_stream().
     *
     * - demuxing: groups may be created by libavformat in avformat_open_input().
     *             If AVFMTCTX_NOHEADER is set in ctx_flags, then new groups may also
     *             appear in av_read_frame().
     * - muxing: groups may be created by the user before avformat_write_header().
     *
     * Freed by libavformat in avformat_free_context().
     */
    AVStreamGroup **stream_groups;

    /**
     * Number of chapters in AVChapter array.
     * When muxing, chapters are normally written in the file header,
     * so nb_chapters should normally be initialized before write_header
     * is called. Some muxers (e.g. mov and mkv) can also write chapters
     * in the trailer. To write chapters in the trailer, nb_chapters
     * must be zero when write_header is called and non-zero when
     * write_trailer is called.
     * - muxing: set by user
     * - demuxing: set by libavformat
     */
    unsigned int nb_chapters;
    AVChapter **chapters;

    /**
     * input or output URL. Unlike the old filename field, this field has no
     * length restriction.
     *
     * - demuxing: set by avformat_open_input(), initialized to an empty
     *             string if url parameter was NULL in avformat_open_input().
     * - muxing: may be set by the caller before calling avformat_write_header()
     *           (or avformat_init_output() if that is called first) to a string
     *           which is freeable by av_free(). Set to an empty string if it
     *           was NULL in avformat_init_output().
     *
     * Freed by libavformat in avformat_free_context().
     */
    char *url;

    /**
     * Position of the first frame of the component, in
     * AV_TIME_BASE fractional seconds. NEVER set this value directly:
     * It is deduced from the AVStream values.
     *
     * Demuxing only, set by libavformat.
     */
    int64_t start_time;

    /**
     * Duration of the stream, in AV_TIME_BASE fractional
     * seconds. Only set this value if you know none of the individual stream
     * durations and also do not set any of them. This is deduced from the
     * AVStream values if not set.
     *
     * Demuxing only, set by libavformat.
     */
    int64_t duration;

    /**
     * Total stream bitrate in bit/s, 0 if not
     * available. Never set it directly if the file_size and the
     * duration are known as FFmpeg can compute it automatically.
     */
    int64_t bit_rate;

    unsigned int packet_size;
    int max_delay;

    /**
     * Flags modifying the (de)muxer behaviour. A combination of AVFMT_FLAG_*.
     * Set by the user before avformat_open_input() / avformat_write_header().
     */
    int flags;
#define AVFMT_FLAG_GENPTS       0x0001 ///< Generate missing pts even if it requires parsing future frames.
#define AVFMT_FLAG_IGNIDX       0x0002 ///< Ignore index.
#define AVFMT_FLAG_NONBLOCK     0x0004 ///< Do not block when reading packets from input.
#define AVFMT_FLAG_IGNDTS       0x0008 ///< Ignore DTS on frames that contain both DTS & PTS
#define AVFMT_FLAG_NOFILLIN     0x0010 ///< Do not infer any values from other values, just return what is stored in the container
#define AVFMT_FLAG_NOPARSE      0x0020 ///< Do not use AVParsers, you also must set AVFMT_FLAG_NOFILLIN as the fillin code works on frames and no parsing -> no frames. Also seeking to frames can not work if parsing to find frame boundaries has been disabled
#define AVFMT_FLAG_NOBUFFER     0x0040 ///< Do not buffer frames when possible
#define AVFMT_FLAG_CUSTOM_IO    0x0080 ///< The caller has supplied a custom AVIOContext, don't avio_close() it.
#define AVFMT_FLAG_DISCARD_CORRUPT  0x0100 ///< Discard frames marked corrupted
#define AVFMT_FLAG_FLUSH_PACKETS    0x0200 ///< Flush the AVIOContext every packet.
/**
 * When muxing, try to avoid writing any random/volatile data to the output.
 * This includes any random IDs, real-time timestamps/dates, muxer version, etc.
 *
 * This flag is mainly intended for testing.
 */
#define AVFMT_FLAG_BITEXACT         0x0400
#define AVFMT_FLAG_SORT_DTS    0x10000 ///< try to interleave outputted packets by dts (using this flag can slow demuxing down)
#define AVFMT_FLAG_FAST_SEEK   0x80000 ///< Enable fast, but inaccurate seeks for some formats
#if FF_API_LAVF_SHORTEST
#define AVFMT_FLAG_SHORTEST   0x100000 ///< Stop muxing when the shortest stream stops.
#endif
#define AVFMT_FLAG_AUTO_BSF   0x200000 ///< Add bitstream filters as requested by the muxer

    /**
     * Maximum number of bytes read from input in order to determine stream
     * properties. Used when reading the global header and in
     * avformat_find_stream_info().
     *
     * Demuxing only, set by the caller before avformat_open_input().
     *
     * @note this is \e not used for determining the \ref AVInputFormat
     *       "input format"
     * @see format_probesize
     */
    int64_t probesize;

    /**
     * Maximum duration (in AV_TIME_BASE units) of the data read
     * from input in avformat_find_stream_info().
     * Demuxing only, set by the caller before avformat_find_stream_info().
     * Can be set to 0 to let avformat choose using a heuristic.
     */
    int64_t max_analyze_duration;

    const uint8_t *key;
    int keylen;

    unsigned int nb_programs;
    AVProgram **programs;

    /**
     * Forced video codec_id.
     * Demuxing: Set by user.
     */
    enum AVCodecID video_codec_id;

    /**
     * Forced audio codec_id.
     * Demuxing: Set by user.
     */
    enum AVCodecID audio_codec_id;

    /**
     * Forced subtitle codec_id.
     * Demuxing: Set by user.
     */
    enum AVCodecID subtitle_codec_id;

    /**
     * Forced Data codec_id.
     * Demuxing: Set by user.
     */
    enum AVCodecID data_codec_id;

    /**
     * Metadata that applies to the whole file.
     *
     * - demuxing: set by libavformat in avformat_open_input()
     * - muxing: may be set by the caller before avformat_write_header()
     *
     * Freed by libavformat in avformat_free_context().
     */
    AVDictionary *metadata;

    /**
     * Start time of the stream in real world time, in microseconds
     * since the Unix epoch (00:00 1st January 1970). That is, pts=0 in the
     * stream was captured at this real world time.
     * - muxing: Set by the caller before avformat_write_header(). If set to
     *           either 0 or AV_NOPTS_VALUE, then the current wall-time will
     *           be used.
     * - demuxing: Set by libavformat. AV_NOPTS_VALUE if unknown. Note that
     *             the value may become known after some number of frames
     *             have been received.
     */
    int64_t start_time_realtime;

    /**
     * The number of frames used for determining the framerate in
     * avformat_find_stream_info().
     * Demuxing only, set by the caller before avformat_find_stream_info().
     */
    int fps_probe_size;

    /**
     * Error recognition; higher values will detect more errors but may
     * misdetect some more or less valid parts as errors.
     * Demuxing only, set by the caller before avformat_open_input().
     */
    int error_recognition;

    /**
     * Custom interrupt callbacks for the I/O layer.
     *
     * demuxing: set by the user before avformat_open_input().
     * muxing: set by the user before avformat_write_header()
     * (mainly useful for AVFMT_NOFILE formats). The callback
     * should also be passed to avio_open2() if it's used to
     * open the file.
     */
    AVIOInterruptCB interrupt_callback;

    /**
     * Flags to enable debugging.
     */
    int debug;
#define FF_FDEBUG_TS        0x0001

    /**
     * The maximum number of streams.
     * - encoding: unused
     * - decoding: set by user
     */
    int max_streams;

    /**
     * Maximum amount of memory in bytes to use for the index of each stream.
     * If the index exceeds this size, entries will be discarded as
     * needed to maintain a smaller size. This can lead to slower or less
This can lead to slower or less1580* accurate seeking (depends on demuxer).1581* Demuxers for which a full in-memory index is mandatory will ignore1582* this.1583* - muxing: unused1584* - demuxing: set by user1585*/1586unsigned int max_index_size;15871588/**1589* Maximum amount of memory in bytes to use for buffering frames1590* obtained from realtime capture devices.1591*/1592unsigned int max_picture_buffer;15931594/**1595* Maximum buffering duration for interleaving.1596*1597* To ensure all the streams are interleaved correctly,1598* av_interleaved_write_frame() will wait until it has at least one packet1599* for each stream before actually writing any packets to the output file.1600* When some streams are "sparse" (i.e. there are large gaps between1601* successive packets), this can result in excessive buffering.1602*1603* This field specifies the maximum difference between the timestamps of the1604* first and the last packet in the muxing queue, above which libavformat1605* will output a packet regardless of whether it has queued a packet for all1606* the streams.1607*1608* Muxing only, set by the caller before avformat_write_header().1609*/1610int64_t max_interleave_delta;16111612/**1613* Maximum number of packets to read while waiting for the first timestamp.1614* Decoding only.1615*/1616int max_ts_probe;16171618/**1619* Max chunk time in microseconds.1620* Note, not all formats support this and unpredictable things may happen if it is used when not supported.1621* - encoding: Set by user1622* - decoding: unused1623*/1624int max_chunk_duration;16251626/**1627* Max chunk size in bytes1628* Note, not all formats support this and unpredictable things may happen if it is used when not supported.1629* - encoding: Set by user1630* - decoding: unused1631*/1632int max_chunk_size;16331634/**1635* Maximum number of packets that can be probed1636* - encoding: unused1637* - decoding: set by user1638*/1639int max_probe_packets;16401641/**1642* Allow non-standard and experimental extension1643* @see AVCodecContext.strict_std_compliance1644*/1645int strict_std_compliance;16461647/**1648* Flags indicating events happening on the file, a combination of1649* AVFMT_EVENT_FLAG_*.1650*1651* - demuxing: may be set by the demuxer in avformat_open_input(),1652* avformat_find_stream_info() and av_read_frame(). Flags must be cleared1653* by the user once the event has been handled.1654* - muxing: may be set by the user after avformat_write_header() to1655* indicate a user-triggered event. 
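 *
 * For example, a muxing application that changes the metadata mid-stream
 * might signal this as follows (a minimal sketch, assuming s is the muxing
 * AVFormatContext):
 * @code
 * av_dict_set(&s->metadata, "title", "Updated title", 0);
 * s->event_flags |= AVFMT_EVENT_FLAG_METADATA_UPDATED;
 * @endcode
 *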
The muxer will clear the flags for events it has
     *   handled in av_[interleaved]_write_frame().
     */
    int event_flags;
    /**
     * - demuxing: the demuxer read new metadata from the file and updated
     *   AVFormatContext.metadata accordingly
     * - muxing: the user updated AVFormatContext.metadata and wishes the muxer to
     *   write it into the file
     */
#define AVFMT_EVENT_FLAG_METADATA_UPDATED 0x0001

    /**
     * Avoid negative timestamps during muxing.
     * Any value of the AVFMT_AVOID_NEG_TS_* constants.
     * Note, this works better when using av_interleaved_write_frame().
     * - muxing: Set by user
     * - demuxing: unused
     */
    int avoid_negative_ts;
#define AVFMT_AVOID_NEG_TS_AUTO             -1 ///< Enabled when required by target format
#define AVFMT_AVOID_NEG_TS_DISABLED          0 ///< Do not shift timestamps even when they are negative.
#define AVFMT_AVOID_NEG_TS_MAKE_NON_NEGATIVE 1 ///< Shift timestamps so they are non negative
#define AVFMT_AVOID_NEG_TS_MAKE_ZERO         2 ///< Shift timestamps so that they start at 0

    /**
     * Audio preload in microseconds.
     * Note, not all formats support this and unpredictable things may happen
     * if it is used when not supported.
     * - encoding: Set by user
     * - decoding: unused
     */
    int audio_preload;

    /**
     * Forces the use of wallclock timestamps as pts/dts of packets.
     * This has undefined results in the presence of B frames.
     * - encoding: unused
     * - decoding: Set by user
     */
    int use_wallclock_as_timestamps;

    /**
     * Skip duration calculation in estimate_timings_from_pts.
     * - encoding: unused
     * - decoding: set by user
     *
     * @see duration_probesize
     */
    int skip_estimate_duration_from_pts;

    /**
     * avio flags, used to force AVIO_FLAG_DIRECT.
     * - encoding: unused
     * - decoding: Set by user
     */
    int avio_flags;

    /**
     * The duration field can be estimated through various ways, and this field can be used
     * to know how the duration was estimated.
     * - encoding: unused
     * - decoding: Read by user
     */
    enum AVDurationEstimationMethod duration_estimation_method;

    /**
     * Skip initial bytes when opening stream.
     * - encoding: unused
     * - decoding: Set by user
     */
    int64_t skip_initial_bytes;

    /**
     * Correct single timestamp overflows.
     * - encoding: unused
     * - decoding: Set by user
     */
    unsigned int correct_ts_overflow;

    /**
     * Force seeking to any (also non-key) frames.
     * - encoding: unused
     * - decoding: Set by user
     */
    int seek2any;

    /**
     * Flush the I/O context after each packet.
     * - encoding: Set by user
     * - decoding: unused
     */
    int flush_packets;

    /**
     * Format probing score.
     * The maximal score is AVPROBE_SCORE_MAX; it is set when the demuxer probes
     * the format.
     * - encoding: unused
     * - decoding: set by avformat, read by user
     */
    int probe_score;

    /**
     * Maximum number of bytes read from input in order to identify the
     * \ref AVInputFormat "input format". 
Only used when the format is not set1761* explicitly by the caller.1762*1763* Demuxing only, set by the caller before avformat_open_input().1764*1765* @see probesize1766*/1767int format_probesize;17681769/**1770* ',' separated list of allowed decoders.1771* If NULL then all are allowed1772* - encoding: unused1773* - decoding: set by user1774*/1775char *codec_whitelist;17761777/**1778* ',' separated list of allowed demuxers.1779* If NULL then all are allowed1780* - encoding: unused1781* - decoding: set by user1782*/1783char *format_whitelist;17841785/**1786* ',' separated list of allowed protocols.1787* - encoding: unused1788* - decoding: set by user1789*/1790char *protocol_whitelist;17911792/**1793* ',' separated list of disallowed protocols.1794* - encoding: unused1795* - decoding: set by user1796*/1797char *protocol_blacklist;17981799/**1800* IO repositioned flag.1801* This is set by avformat when the underlaying IO context read pointer1802* is repositioned, for example when doing byte based seeking.1803* Demuxers can use the flag to detect such changes.1804*/1805int io_repositioned;18061807/**1808* Forced video codec.1809* This allows forcing a specific decoder, even when there are multiple with1810* the same codec_id.1811* Demuxing: Set by user1812*/1813const struct AVCodec *video_codec;18141815/**1816* Forced audio codec.1817* This allows forcing a specific decoder, even when there are multiple with1818* the same codec_id.1819* Demuxing: Set by user1820*/1821const struct AVCodec *audio_codec;18221823/**1824* Forced subtitle codec.1825* This allows forcing a specific decoder, even when there are multiple with1826* the same codec_id.1827* Demuxing: Set by user1828*/1829const struct AVCodec *subtitle_codec;18301831/**1832* Forced data codec.1833* This allows forcing a specific decoder, even when there are multiple with1834* the same codec_id.1835* Demuxing: Set by user1836*/1837const struct AVCodec *data_codec;18381839/**1840* Number of bytes to be written as padding in a metadata header.1841* Demuxing: Unused.1842* Muxing: Set by user.1843*/1844int metadata_header_padding;18451846/**1847* User data.1848* This is a place for some private data of the user.1849*/1850void *opaque;18511852/**1853* Callback used by devices to communicate with application.1854*/1855av_format_control_message control_message_cb;18561857/**1858* Output timestamp offset, in microseconds.1859* Muxing: set by user1860*/1861int64_t output_ts_offset;18621863/**1864* dump format separator.1865* can be ", " or "\n " or anything else1866* - muxing: Set by user.1867* - demuxing: Set by user.1868*/1869uint8_t *dump_separator;18701871/**1872* A callback for opening new IO streams.1873*1874* Whenever a muxer or a demuxer needs to open an IO stream (typically from1875* avformat_open_input() for demuxers, but for certain formats can happen at1876* other times as well), it will call this callback to obtain an IO context.1877*1878* @param s the format context1879* @param pb on success, the newly opened IO context should be returned here1880* @param url the url to open1881* @param flags a combination of AVIO_FLAG_*1882* @param options a dictionary of additional options, with the same1883* semantics as in avio_open2()1884* @return 0 on success, a negative AVERROR code on failure1885*1886* @note Certain muxers and demuxers do nesting, i.e. they open one or more1887* additional internal format contexts. 
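 *
 * A minimal callback that simply forwards to the generic I/O layer, reusing
 * the context's interrupt callback, might look like this (an illustrative
 * sketch, not the default implementation):
 * @code
 * static int my_io_open(struct AVFormatContext *s, AVIOContext **pb,
 *                       const char *url, int flags, AVDictionary **options)
 * {
 *     // The user's private pointer is available as s->opaque here.
 *     return avio_open2(pb, url, flags, &s->interrupt_callback, options);
 * }
 * @endcode
 *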
Thus the AVFormatContext pointer1888* passed to this callback may be different from the one facing the caller.1889* It will, however, have the same 'opaque' field.1890*/1891int (*io_open)(struct AVFormatContext *s, AVIOContext **pb, const char *url,1892int flags, AVDictionary **options);18931894/**1895* A callback for closing the streams opened with AVFormatContext.io_open().1896*1897* Using this is preferred over io_close, because this can return an error.1898* Therefore this callback is used instead of io_close by the generic1899* libavformat code if io_close is NULL or the default.1900*1901* @param s the format context1902* @param pb IO context to be closed and freed1903* @return 0 on success, a negative AVERROR code on failure1904*/1905int (*io_close2)(struct AVFormatContext *s, AVIOContext *pb);19061907/**1908* Maximum number of bytes read from input in order to determine stream durations1909* when using estimate_timings_from_pts in avformat_find_stream_info().1910* Demuxing only, set by the caller before avformat_find_stream_info().1911* Can be set to 0 to let avformat choose using a heuristic.1912*1913* @see skip_estimate_duration_from_pts1914*/1915int64_t duration_probesize;1916} AVFormatContext;19171918/**1919* This function will cause global side data to be injected in the next packet1920* of each stream as well as after any subsequent seek.1921*1922* @note global side data is always available in every AVStream's1923* @ref AVCodecParameters.coded_side_data "codecpar side data" array, and1924* in a @ref AVCodecContext.coded_side_data "decoder's side data" array if1925* initialized with said stream's codecpar.1926* @see av_packet_side_data_get()1927*/1928void av_format_inject_global_side_data(AVFormatContext *s);19291930#if FF_API_GET_DUR_ESTIMATE_METHOD1931/**1932* Returns the method used to set ctx->duration.1933*1934* @return AVFMT_DURATION_FROM_PTS, AVFMT_DURATION_FROM_STREAM, or AVFMT_DURATION_FROM_BITRATE.1935* @deprecated duration_estimation_method is public and can be read directly.1936*/1937attribute_deprecated1938enum AVDurationEstimationMethod av_fmt_ctx_get_duration_estimation_method(const AVFormatContext* ctx);1939#endif19401941/**1942* @defgroup lavf_core Core functions1943* @ingroup libavf1944*1945* Functions for querying libavformat capabilities, allocating core structures,1946* etc.1947* @{1948*/19491950/**1951* Return the LIBAVFORMAT_VERSION_INT constant.1952*/1953unsigned avformat_version(void);19541955/**1956* Return the libavformat build-time configuration.1957*/1958const char *avformat_configuration(void);19591960/**1961* Return the libavformat license.1962*/1963const char *avformat_license(void);19641965/**1966* Do global initialization of network libraries. This is optional,1967* and not recommended anymore.1968*1969* This functions only exists to work around thread-safety issues1970* with older GnuTLS or OpenSSL libraries. If libavformat is linked1971* to newer versions of those libraries, or if you do not use them,1972* calling this function is unnecessary. Otherwise, you need to call1973* this function before any other threads using them are started.1974*1975* This function will be deprecated once support for older GnuTLS and1976* OpenSSL libraries is removed, and this function has no purpose1977* anymore.1978*/1979int avformat_network_init(void);19801981/**1982* Undo the initialization done by avformat_network_init. 
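 *
 * The usual pattern is to pair the two calls around all network use
 * (a sketch):
 * @code
 * avformat_network_init();
 * // ... open and use network streams, e.g. avformat_open_input() on an
 * // "http:" or "rtsp:" URL ...
 * avformat_network_deinit();
 * @endcode
 *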
Call it only1983* once for each time you called avformat_network_init.1984*/1985int avformat_network_deinit(void);19861987/**1988* Iterate over all registered muxers.1989*1990* @param opaque a pointer where libavformat will store the iteration state. Must1991* point to NULL to start the iteration.1992*1993* @return the next registered muxer or NULL when the iteration is1994* finished1995*/1996const AVOutputFormat *av_muxer_iterate(void **opaque);19971998/**1999* Iterate over all registered demuxers.2000*2001* @param opaque a pointer where libavformat will store the iteration state.2002* Must point to NULL to start the iteration.2003*2004* @return the next registered demuxer or NULL when the iteration is2005* finished2006*/2007const AVInputFormat *av_demuxer_iterate(void **opaque);20082009/**2010* Allocate an AVFormatContext.2011* avformat_free_context() can be used to free the context and everything2012* allocated by the framework within it.2013*/2014AVFormatContext *avformat_alloc_context(void);20152016/**2017* Free an AVFormatContext and all its streams.2018* @param s context to free2019*/2020void avformat_free_context(AVFormatContext *s);20212022/**2023* Get the AVClass for AVFormatContext. It can be used in combination with2024* AV_OPT_SEARCH_FAKE_OBJ for examining options.2025*2026* @see av_opt_find().2027*/2028const AVClass *avformat_get_class(void);20292030/**2031* Get the AVClass for AVStream. It can be used in combination with2032* AV_OPT_SEARCH_FAKE_OBJ for examining options.2033*2034* @see av_opt_find().2035*/2036const AVClass *av_stream_get_class(void);20372038/**2039* Get the AVClass for AVStreamGroup. It can be used in combination with2040* AV_OPT_SEARCH_FAKE_OBJ for examining options.2041*2042* @see av_opt_find().2043*/2044const AVClass *av_stream_group_get_class(void);20452046/**2047* @return a string identifying the stream group type, or NULL if unknown2048*/2049const char *avformat_stream_group_name(enum AVStreamGroupParamsType type);20502051/**2052* Add a new empty stream group to a media file.2053*2054* When demuxing, it may be called by the demuxer in read_header(). If the2055* flag AVFMTCTX_NOHEADER is set in s.ctx_flags, then it may also2056* be called in read_packet().2057*2058* When muxing, may be called by the user before avformat_write_header().2059*2060* User is required to call avformat_free_context() to clean up the allocation2061* by avformat_stream_group_create().2062*2063* New streams can be added to the group with avformat_stream_group_add_stream().2064*2065* @param s media file handle2066*2067* @return newly created group or NULL on error.2068* @see avformat_new_stream, avformat_stream_group_add_stream.2069*/2070AVStreamGroup *avformat_stream_group_create(AVFormatContext *s,2071enum AVStreamGroupParamsType type,2072AVDictionary **options);20732074/**2075* Add a new stream to a media file.2076*2077* When demuxing, it is called by the demuxer in read_header(). 
If the2078* flag AVFMTCTX_NOHEADER is set in s.ctx_flags, then it may also2079* be called in read_packet().2080*2081* When muxing, should be called by the user before avformat_write_header().2082*2083* User is required to call avformat_free_context() to clean up the allocation2084* by avformat_new_stream().2085*2086* @param s media file handle2087* @param c unused, does nothing2088*2089* @return newly created stream or NULL on error.2090*/2091AVStream *avformat_new_stream(AVFormatContext *s, const struct AVCodec *c);20922093/**2094* Add an already allocated stream to a stream group.2095*2096* When demuxing, it may be called by the demuxer in read_header(). If the2097* flag AVFMTCTX_NOHEADER is set in s.ctx_flags, then it may also2098* be called in read_packet().2099*2100* When muxing, may be called by the user before avformat_write_header() after2101* having allocated a new group with avformat_stream_group_create() and stream with2102* avformat_new_stream().2103*2104* User is required to call avformat_free_context() to clean up the allocation2105* by avformat_stream_group_add_stream().2106*2107* @param stg stream group belonging to a media file.2108* @param st stream in the media file to add to the group.2109*2110* @retval 0 success2111* @retval AVERROR(EEXIST) the stream was already in the group2112* @retval "another negative error code" legitimate errors2113*2114* @see avformat_new_stream, avformat_stream_group_create.2115*/2116int avformat_stream_group_add_stream(AVStreamGroup *stg, AVStream *st);21172118#if FF_API_AVSTREAM_SIDE_DATA2119/**2120* Wrap an existing array as stream side data.2121*2122* @param st stream2123* @param type side information type2124* @param data the side data array. It must be allocated with the av_malloc()2125* family of functions. The ownership of the data is transferred to2126* st.2127* @param size side information size2128*2129* @return zero on success, a negative AVERROR code on failure. 
On failure,2130* the stream is unchanged and the data remains owned by the caller.2131* @deprecated use av_packet_side_data_add() with the stream's2132* @ref AVCodecParameters.coded_side_data "codecpar side data"2133*/2134attribute_deprecated2135int av_stream_add_side_data(AVStream *st, enum AVPacketSideDataType type,2136uint8_t *data, size_t size);21372138/**2139* Allocate new information from stream.2140*2141* @param stream stream2142* @param type desired side information type2143* @param size side information size2144*2145* @return pointer to fresh allocated data or NULL otherwise2146* @deprecated use av_packet_side_data_new() with the stream's2147* @ref AVCodecParameters.coded_side_data "codecpar side data"2148*/2149attribute_deprecated2150uint8_t *av_stream_new_side_data(AVStream *stream,2151enum AVPacketSideDataType type, size_t size);2152/**2153* Get side information from stream.2154*2155* @param stream stream2156* @param type desired side information type2157* @param size If supplied, *size will be set to the size of the side data2158* or to zero if the desired side data is not present.2159*2160* @return pointer to data if present or NULL otherwise2161* @deprecated use av_packet_side_data_get() with the stream's2162* @ref AVCodecParameters.coded_side_data "codecpar side data"2163*/2164attribute_deprecated2165uint8_t *av_stream_get_side_data(const AVStream *stream,2166enum AVPacketSideDataType type, size_t *size);2167#endif21682169AVProgram *av_new_program(AVFormatContext *s, int id);21702171/**2172* @}2173*/217421752176/**2177* Allocate an AVFormatContext for an output format.2178* avformat_free_context() can be used to free the context and2179* everything allocated by the framework within it.2180*2181* @param ctx pointee is set to the created format context,2182* or to NULL in case of failure2183* @param oformat format to use for allocating the context, if NULL2184* format_name and filename are used instead2185* @param format_name the name of output format to use for allocating the2186* context, if NULL filename is used instead2187* @param filename the name of the filename to use for allocating the2188* context, may be NULL2189*2190* @return >= 0 in case of success, a negative AVERROR code in case of2191* failure2192*/2193int avformat_alloc_output_context2(AVFormatContext **ctx, const AVOutputFormat *oformat,2194const char *format_name, const char *filename);21952196/**2197* @addtogroup lavf_decoding2198* @{2199*/22002201/**2202* Find AVInputFormat based on the short name of the input format.2203*/2204const AVInputFormat *av_find_input_format(const char *short_name);22052206/**2207* Guess the file format.2208*2209* @param pd data to be probed2210* @param is_opened Whether the file is already opened; determines whether2211* demuxers with or without AVFMT_NOFILE are probed.2212*/2213const AVInputFormat *av_probe_input_format(const AVProbeData *pd, int is_opened);22142215/**2216* Guess the file format.2217*2218* @param pd data to be probed2219* @param is_opened Whether the file is already opened; determines whether2220* demuxers with or without AVFMT_NOFILE are probed.2221* @param score_max A probe score larger that this is required to accept a2222* detection, the variable is set to the actual detection2223* score afterwards.2224* If the score is <= AVPROBE_SCORE_MAX / 4 it is recommended2225* to retry with a larger probe buffer.2226*/2227const AVInputFormat *av_probe_input_format2(const AVProbeData *pd,2228int is_opened, int *score_max);22292230/**2231* Guess the file 
format.
 *
 * @param is_opened Whether the file is already opened; determines whether
 *                  demuxers with or without AVFMT_NOFILE are probed.
 * @param score_ret The score of the best detection.
 */
const AVInputFormat *av_probe_input_format3(const AVProbeData *pd,
                                            int is_opened, int *score_ret);

/**
 * Probe a bytestream to determine the input format. Each time a probe returns
 * with a score that is too low, the probe buffer size is increased and another
 * attempt is made. When the maximum probe size is reached, the input format
 * with the highest score is returned.
 *
 * @param pb the bytestream to probe
 * @param fmt the input format is put here
 * @param url the url of the stream
 * @param logctx the log context
 * @param offset the offset within the bytestream to probe from
 * @param max_probe_size the maximum probe buffer size (zero for default)
 *
 * @return the score in case of success (the maximal score is
 *         AVPROBE_SCORE_MAX), a negative AVERROR code otherwise
 */
int av_probe_input_buffer2(AVIOContext *pb, const AVInputFormat **fmt,
                           const char *url, void *logctx,
                           unsigned int offset, unsigned int max_probe_size);

/**
 * Like av_probe_input_buffer2() but returns 0 on success.
 */
int av_probe_input_buffer(AVIOContext *pb, const AVInputFormat **fmt,
                          const char *url, void *logctx,
                          unsigned int offset, unsigned int max_probe_size);

/**
 * Open an input stream and read the header. The codecs are not opened.
 * The stream must be closed with avformat_close_input().
 *
 * @param ps      Pointer to user-supplied AVFormatContext (allocated by
 *                avformat_alloc_context). May be a pointer to NULL, in
 *                which case an AVFormatContext is allocated by this
 *                function and written into ps.
 *                Note that a user-supplied AVFormatContext will be freed
 *                on failure.
 * @param url     URL of the stream to open.
 * @param fmt     If non-NULL, this parameter forces a specific input format.
 *                Otherwise the format is autodetected.
 * @param options A dictionary filled with AVFormatContext and demuxer-private
 *                options.
 *                On return this parameter will be destroyed and replaced with
 *                a dict containing options that were not found. May be NULL.
 *
 * @return 0 on success, a negative AVERROR on failure.
 *
 * @note If you want to use custom IO, preallocate the format context and set its pb field.
 */
int avformat_open_input(AVFormatContext **ps, const char *url,
                        const AVInputFormat *fmt, AVDictionary **options);

/**
 * Read packets of a media file to get stream information. This
 * is useful for file formats with no headers such as MPEG. 
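 *
 * A typical call right after a successful avformat_open_input() might look
 * like this (a minimal sketch; per-stream option dictionaries are omitted):
 * @code
 * if (avformat_find_stream_info(ic, NULL) < 0) {
 *     avformat_close_input(&ic);
 *     abort();
 * }
 * @endcode
 *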
This2296* function also computes the real framerate in case of MPEG-2 repeat2297* frame mode.2298* The logical file position is not changed by this function;2299* examined packets may be buffered for later processing.2300*2301* @param ic media file handle2302* @param options If non-NULL, an ic.nb_streams long array of pointers to2303* dictionaries, where i-th member contains options for2304* codec corresponding to i-th stream.2305* On return each dictionary will be filled with options that were not found.2306* @return >=0 if OK, AVERROR_xxx on error2307*2308* @note this function isn't guaranteed to open all the codecs, so2309* options being non-empty at return is a perfectly normal behavior.2310*2311* @todo Let the user decide somehow what information is needed so that2312* we do not waste time getting stuff the user does not need.2313*/2314int avformat_find_stream_info(AVFormatContext *ic, AVDictionary **options);23152316/**2317* Find the programs which belong to a given stream.2318*2319* @param ic media file handle2320* @param last the last found program, the search will start after this2321* program, or from the beginning if it is NULL2322* @param s stream index2323*2324* @return the next program which belongs to s, NULL if no program is found or2325* the last program is not among the programs of ic.2326*/2327AVProgram *av_find_program_from_stream(AVFormatContext *ic, AVProgram *last, int s);23282329void av_program_add_stream_index(AVFormatContext *ac, int progid, unsigned int idx);23302331/**2332* Find the "best" stream in the file.2333* The best stream is determined according to various heuristics as the most2334* likely to be what the user expects.2335* If the decoder parameter is non-NULL, av_find_best_stream will find the2336* default decoder for the stream's codec; streams for which no decoder can2337* be found are ignored.2338*2339* @param ic media file handle2340* @param type stream type: video, audio, subtitles, etc.2341* @param wanted_stream_nb user-requested stream number,2342* or -1 for automatic selection2343* @param related_stream try to find a stream related (eg. in the same2344* program) to this one, or -1 if none2345* @param decoder_ret if non-NULL, returns the decoder for the2346* selected stream2347* @param flags flags; none are currently defined2348*2349* @return the non-negative stream number in case of success,2350* AVERROR_STREAM_NOT_FOUND if no stream with the requested type2351* could be found,2352* AVERROR_DECODER_NOT_FOUND if streams were found but no decoder2353*2354* @note If av_find_best_stream returns successfully and decoder_ret is not2355* NULL, then *decoder_ret is guaranteed to be set to a valid AVCodec.2356*/2357int av_find_best_stream(AVFormatContext *ic,2358enum AVMediaType type,2359int wanted_stream_nb,2360int related_stream,2361const struct AVCodec **decoder_ret,2362int flags);23632364/**2365* Return the next frame of a stream.2366* This function returns what is stored in the file, and does not validate2367* that what is there are valid frames for the decoder. It will split what is2368* stored in the file into frames and return one for each call. It will not2369* omit invalid data between valid frames so as to give the decoder the maximum2370* information possible for decoding.2371*2372* On success, the returned packet is reference-counted (pkt->buf is set) and2373* valid indefinitely. The packet must be freed with av_packet_unref() when2374* it is no longer needed. 
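 *
 * A typical demuxing loop therefore looks roughly like this (an illustrative
 * sketch; decoding and error handling are omitted):
 * @code
 * AVPacket *pkt = av_packet_alloc();
 * if (!pkt)
 *     abort();
 * while (av_read_frame(s, pkt) >= 0) {
 *     // pkt->stream_index identifies the stream in s->streams
 *     av_packet_unref(pkt);
 * }
 * av_packet_free(&pkt);
 * @endcode
 *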
For video, the packet contains exactly one frame.2375* For audio, it contains an integer number of frames if each frame has2376* a known fixed size (e.g. PCM or ADPCM data). If the audio frames have2377* a variable size (e.g. MPEG audio), then it contains one frame.2378*2379* pkt->pts, pkt->dts and pkt->duration are always set to correct2380* values in AVStream.time_base units (and guessed if the format cannot2381* provide them). pkt->pts can be AV_NOPTS_VALUE if the video format2382* has B-frames, so it is better to rely on pkt->dts if you do not2383* decompress the payload.2384*2385* @return 0 if OK, < 0 on error or end of file. On error, pkt will be blank2386* (as if it came from av_packet_alloc()).2387*2388* @note pkt will be initialized, so it may be uninitialized, but it must not2389* contain data that needs to be freed.2390*/2391int av_read_frame(AVFormatContext *s, AVPacket *pkt);23922393/**2394* Seek to the keyframe at timestamp.2395* 'timestamp' in 'stream_index'.2396*2397* @param s media file handle2398* @param stream_index If stream_index is (-1), a default stream is selected,2399* and timestamp is automatically converted from2400* AV_TIME_BASE units to the stream specific time_base.2401* @param timestamp Timestamp in AVStream.time_base units or, if no stream2402* is specified, in AV_TIME_BASE units.2403* @param flags flags which select direction and seeking mode2404*2405* @return >= 0 on success2406*/2407int av_seek_frame(AVFormatContext *s, int stream_index, int64_t timestamp,2408int flags);24092410/**2411* Seek to timestamp ts.2412* Seeking will be done so that the point from which all active streams2413* can be presented successfully will be closest to ts and within min/max_ts.2414* Active streams are all streams that have AVStream.discard < AVDISCARD_ALL.2415*2416* If flags contain AVSEEK_FLAG_BYTE, then all timestamps are in bytes and2417* are the file position (this may not be supported by all demuxers).2418* If flags contain AVSEEK_FLAG_FRAME, then all timestamps are in frames2419* in the stream with stream_index (this may not be supported by all demuxers).2420* Otherwise all timestamps are in units of the stream selected by stream_index2421* or if stream_index is -1, in AV_TIME_BASE units.2422* If flags contain AVSEEK_FLAG_ANY, then non-keyframes are treated as2423* keyframes (this may not be supported by all demuxers).2424* If flags contain AVSEEK_FLAG_BACKWARD, it is ignored.2425*2426* @param s media file handle2427* @param stream_index index of the stream which is used as time base reference2428* @param min_ts smallest acceptable timestamp2429* @param ts target timestamp2430* @param max_ts largest acceptable timestamp2431* @param flags flags2432* @return >=0 on success, error code otherwise2433*2434* @note This is part of the new seek API which is still under construction.2435*/2436int avformat_seek_file(AVFormatContext *s, int stream_index, int64_t min_ts, int64_t ts, int64_t max_ts, int flags);24372438/**2439* Discard all internally buffered data. This can be useful when dealing with2440* discontinuities in the byte stream. Generally works only with formats that2441* can resync. This includes headerless formats like MPEG-TS/TS but should also2442* work with NUT, Ogg and in a limited way AVI for example.2443*2444* The set of streams, the detected duration, stream parameters and codecs do2445* not change when calling this function. If you want a complete reset, it's2446* better to open a new AVFormatContext.2447*2448* This does not flush the AVIOContext (s->pb). 
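 *
 * A caller that wants both layers reset might therefore do (a sketch):
 * @code
 * avio_flush(s->pb);
 * avformat_flush(s);
 * @endcode
 *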
If necessary, call2449* avio_flush(s->pb) before calling this function.2450*2451* @param s media file handle2452* @return >=0 on success, error code otherwise2453*/2454int avformat_flush(AVFormatContext *s);24552456/**2457* Start playing a network-based stream (e.g. RTSP stream) at the2458* current position.2459*/2460int av_read_play(AVFormatContext *s);24612462/**2463* Pause a network-based stream (e.g. RTSP stream).2464*2465* Use av_read_play() to resume it.2466*/2467int av_read_pause(AVFormatContext *s);24682469/**2470* Close an opened input AVFormatContext. Free it and all its contents2471* and set *s to NULL.2472*/2473void avformat_close_input(AVFormatContext **s);2474/**2475* @}2476*/24772478#define AVSEEK_FLAG_BACKWARD 1 ///< seek backward2479#define AVSEEK_FLAG_BYTE 2 ///< seeking based on position in bytes2480#define AVSEEK_FLAG_ANY 4 ///< seek to any frame, even non-keyframes2481#define AVSEEK_FLAG_FRAME 8 ///< seeking based on frame number24822483/**2484* @addtogroup lavf_encoding2485* @{2486*/24872488#define AVSTREAM_INIT_IN_WRITE_HEADER 0 ///< stream parameters initialized in avformat_write_header2489#define AVSTREAM_INIT_IN_INIT_OUTPUT 1 ///< stream parameters initialized in avformat_init_output24902491/**2492* Allocate the stream private data and write the stream header to2493* an output media file.2494*2495* @param s Media file handle, must be allocated with2496* avformat_alloc_context().2497* Its \ref AVFormatContext.oformat "oformat" field must be set2498* to the desired output format;2499* Its \ref AVFormatContext.pb "pb" field must be set to an2500* already opened ::AVIOContext.2501* @param options An ::AVDictionary filled with AVFormatContext and2502* muxer-private options.2503* On return this parameter will be destroyed and replaced with2504* a dict containing options that were not found. May be NULL.2505*2506* @retval AVSTREAM_INIT_IN_WRITE_HEADER On success, if the codec had not already been2507* fully initialized in avformat_init_output().2508* @retval AVSTREAM_INIT_IN_INIT_OUTPUT On success, if the codec had already been fully2509* initialized in avformat_init_output().2510* @retval AVERROR A negative AVERROR on failure.2511*2512* @see av_opt_find, av_dict_set, avio_open, av_oformat_next, avformat_init_output.2513*/2514av_warn_unused_result2515int avformat_write_header(AVFormatContext *s, AVDictionary **options);25162517/**2518* Allocate the stream private data and initialize the codec, but do not write the header.2519* May optionally be used before avformat_write_header() to initialize stream parameters2520* before actually writing the header.2521* If using this function, do not pass the same options to avformat_write_header().2522*2523* @param s Media file handle, must be allocated with2524* avformat_alloc_context().2525* Its \ref AVFormatContext.oformat "oformat" field must be set2526* to the desired output format;2527* Its \ref AVFormatContext.pb "pb" field must be set to an2528* already opened ::AVIOContext.2529* @param options An ::AVDictionary filled with AVFormatContext and2530* muxer-private options.2531* On return this parameter will be destroyed and replaced with2532* a dict containing options that were not found. 
May be NULL.2533*2534* @retval AVSTREAM_INIT_IN_WRITE_HEADER On success, if the codec requires2535* avformat_write_header to fully initialize.2536* @retval AVSTREAM_INIT_IN_INIT_OUTPUT On success, if the codec has been fully2537* initialized.2538* @retval AVERROR Anegative AVERROR on failure.2539*2540* @see av_opt_find, av_dict_set, avio_open, av_oformat_next, avformat_write_header.2541*/2542av_warn_unused_result2543int avformat_init_output(AVFormatContext *s, AVDictionary **options);25442545/**2546* Write a packet to an output media file.2547*2548* This function passes the packet directly to the muxer, without any buffering2549* or reordering. The caller is responsible for correctly interleaving the2550* packets if the format requires it. Callers that want libavformat to handle2551* the interleaving should call av_interleaved_write_frame() instead of this2552* function.2553*2554* @param s media file handle2555* @param pkt The packet containing the data to be written. Note that unlike2556* av_interleaved_write_frame(), this function does not take2557* ownership of the packet passed to it (though some muxers may make2558* an internal reference to the input packet).2559* <br>2560* This parameter can be NULL (at any time, not just at the end), in2561* order to immediately flush data buffered within the muxer, for2562* muxers that buffer up data internally before writing it to the2563* output.2564* <br>2565* Packet's @ref AVPacket.stream_index "stream_index" field must be2566* set to the index of the corresponding stream in @ref2567* AVFormatContext.streams "s->streams".2568* <br>2569* The timestamps (@ref AVPacket.pts "pts", @ref AVPacket.dts "dts")2570* must be set to correct values in the stream's timebase (unless the2571* output format is flagged with the AVFMT_NOTIMESTAMPS flag, then2572* they can be set to AV_NOPTS_VALUE).2573* The dts for subsequent packets passed to this function must be strictly2574* increasing when compared in their respective timebases (unless the2575* output format is flagged with the AVFMT_TS_NONSTRICT, then they2576* merely have to be nondecreasing). @ref AVPacket.duration2577* "duration") should also be set if known.2578* @return < 0 on error, = 0 if OK, 1 if flushed and there is no more data to flush2579*2580* @see av_interleaved_write_frame()2581*/2582int av_write_frame(AVFormatContext *s, AVPacket *pkt);25832584/**2585* Write a packet to an output media file ensuring correct interleaving.2586*2587* This function will buffer the packets internally as needed to make sure the2588* packets in the output file are properly interleaved, usually ordered by2589* increasing dts. Callers doing their own interleaving should call2590* av_write_frame() instead of this function.2591*2592* Using this function instead of av_write_frame() can give muxers advance2593* knowledge of future packets, improving e.g. the behaviour of the mp42594* muxer for VFR content in fragmenting mode.2595*2596* @param s media file handle2597* @param pkt The packet containing the data to be written.2598* <br>2599* If the packet is reference-counted, this function will take2600* ownership of this reference and unreference it later when it sees2601* fit. 
If the packet is not reference-counted, libavformat will2602* make a copy.2603* The returned packet will be blank (as if returned from2604* av_packet_alloc()), even on error.2605* <br>2606* This parameter can be NULL (at any time, not just at the end), to2607* flush the interleaving queues.2608* <br>2609* Packet's @ref AVPacket.stream_index "stream_index" field must be2610* set to the index of the corresponding stream in @ref2611* AVFormatContext.streams "s->streams".2612* <br>2613* The timestamps (@ref AVPacket.pts "pts", @ref AVPacket.dts "dts")2614* must be set to correct values in the stream's timebase (unless the2615* output format is flagged with the AVFMT_NOTIMESTAMPS flag, then2616* they can be set to AV_NOPTS_VALUE).2617* The dts for subsequent packets in one stream must be strictly2618* increasing (unless the output format is flagged with the2619* AVFMT_TS_NONSTRICT, then they merely have to be nondecreasing).2620* @ref AVPacket.duration "duration" should also be set if known.2621*2622* @return 0 on success, a negative AVERROR on error.2623*2624* @see av_write_frame(), AVFormatContext.max_interleave_delta2625*/2626int av_interleaved_write_frame(AVFormatContext *s, AVPacket *pkt);26272628/**2629* Write an uncoded frame to an output media file.2630*2631* The frame must be correctly interleaved according to the container2632* specification; if not, av_interleaved_write_uncoded_frame() must be used.2633*2634* See av_interleaved_write_uncoded_frame() for details.2635*/2636int av_write_uncoded_frame(AVFormatContext *s, int stream_index,2637struct AVFrame *frame);26382639/**2640* Write an uncoded frame to an output media file.2641*2642* If the muxer supports it, this function makes it possible to write an AVFrame2643* structure directly, without encoding it into a packet.2644* It is mostly useful for devices and similar special muxers that use raw2645* video or PCM data and will not serialize it into a byte stream.2646*2647* To test whether it is possible to use it with a given muxer and stream,2648* use av_write_uncoded_frame_query().2649*2650* The caller gives up ownership of the frame and must not access it2651* afterwards.2652*2653* @return >=0 for success, a negative code on error2654*/2655int av_interleaved_write_uncoded_frame(AVFormatContext *s, int stream_index,2656struct AVFrame *frame);26572658/**2659* Test whether a muxer supports uncoded frame.2660*2661* @return >=0 if an uncoded frame can be written to that muxer and stream,2662* <0 if not2663*/2664int av_write_uncoded_frame_query(AVFormatContext *s, int stream_index);26652666/**2667* Write the stream trailer to an output media file and free the2668* file private data.2669*2670* May only be called after a successful call to avformat_write_header.2671*2672* @param s media file handle2673* @return 0 if OK, AVERROR_xxx on error2674*/2675int av_write_trailer(AVFormatContext *s);26762677/**2678* Return the output format in the list of registered output formats2679* which best matches the provided parameters, or return NULL if2680* there is no match.2681*2682* @param short_name if non-NULL checks if short_name matches with the2683* names of the registered formats2684* @param filename if non-NULL checks if filename terminates with the2685* extensions of the registered formats2686* @param mime_type if non-NULL checks if mime_type matches with the2687* MIME type of the registered formats2688*/2689const AVOutputFormat *av_guess_format(const char *short_name,2690const char *filename,2691const char *mime_type);26922693/**2694* Guess 
the codec ID based upon muxer and filename.
 */
enum AVCodecID av_guess_codec(const AVOutputFormat *fmt, const char *short_name,
                              const char *filename, const char *mime_type,
                              enum AVMediaType type);

/**
 * Get timing information for the data currently output.
 * The exact meaning of "currently output" depends on the format.
 * It is mostly relevant for devices that have an internal buffer and/or
 * work in real time.
 * @param s          media file handle
 * @param stream     stream in the media file
 * @param[out] dts   DTS of the last packet output for the stream, in stream
 *                   time_base units
 * @param[out] wall  absolute time when that packet was output,
 *                   in microseconds
 * @retval 0               Success
 * @retval AVERROR(ENOSYS) The format does not support it
 *
 * @note Some formats or devices may not allow measuring dts and wall
 *       atomically.
 */
int av_get_output_timestamp(struct AVFormatContext *s, int stream,
                            int64_t *dts, int64_t *wall);


/**
 * @}
 */


/**
 * @defgroup lavf_misc Utility functions
 * @ingroup libavf
 * @{
 *
 * Miscellaneous utility functions related to both muxing and demuxing
 * (or neither).
 */

/**
 * Send a nice hexadecimal dump of a buffer to the specified file stream.
 *
 * @param f The file stream pointer where the dump should be sent to.
 * @param buf buffer
 * @param size buffer size
 *
 * @see av_hex_dump_log, av_pkt_dump2, av_pkt_dump_log2
 */
void av_hex_dump(FILE *f, const uint8_t *buf, int size);

/**
 * Send a nice hexadecimal dump of a buffer to the log.
 *
 * @param avcl A pointer to an arbitrary struct of which the first field is a
 *             pointer to an AVClass struct.
 * @param level The importance level of the message, lower values signifying
 *              higher importance.
 * @param buf buffer
 * @param size buffer size
 *
 * @see av_hex_dump, av_pkt_dump2, av_pkt_dump_log2
 */
void av_hex_dump_log(void *avcl, int level, const uint8_t *buf, int size);

/**
 * Send a nice dump of a packet to the specified file stream.
 *
 * @param f The file stream pointer where the dump should be sent to.
 * @param pkt packet to dump
 * @param dump_payload True if the payload must be displayed, too.
 * @param st AVStream that the packet belongs to
 */
void av_pkt_dump2(FILE *f, const AVPacket *pkt, int dump_payload, const AVStream *st);


/**
 * Send a nice dump of a packet to the log.
 *
 * @param avcl A pointer to an arbitrary struct of which the first field is a
 *             pointer to an AVClass struct.
 * @param level The importance level of the message, lower values signifying
 *              higher importance.
 * @param pkt packet to dump
 * @param dump_payload True if the payload must be displayed, too.
 * @param st AVStream that the packet belongs to
 */
void av_pkt_dump_log2(void *avcl, int level, const AVPacket *pkt, int dump_payload,
                      const AVStream *st);

/**
 * Get the AVCodecID for the given codec tag.
 * If no codec id is found returns AV_CODEC_ID_NONE.
 *
 * @param tags list of supported codec_id-codec_tag pairs, as stored
 *             in AVInputFormat.codec_tag and AVOutputFormat.codec_tag
 * @param tag codec tag to match to a codec ID
 */
enum AVCodecID av_codec_get_id(const struct AVCodecTag * const *tags, unsigned int tag);

/**
 * Get the codec tag for the given codec id.
 * If no codec tag is found returns 0.
 *
 * @param tags list of supported 
codec_id-codec_tag pairs, as stored2800* in AVInputFormat.codec_tag and AVOutputFormat.codec_tag2801* @param id codec ID to match to a codec tag2802*/2803unsigned int av_codec_get_tag(const struct AVCodecTag * const *tags, enum AVCodecID id);28042805/**2806* Get the codec tag for the given codec id.2807*2808* @param tags list of supported codec_id - codec_tag pairs, as stored2809* in AVInputFormat.codec_tag and AVOutputFormat.codec_tag2810* @param id codec id that should be searched for in the list2811* @param tag A pointer to the found tag2812* @return 0 if id was not found in tags, > 0 if it was found2813*/2814int av_codec_get_tag2(const struct AVCodecTag * const *tags, enum AVCodecID id,2815unsigned int *tag);28162817int av_find_default_stream_index(AVFormatContext *s);28182819/**2820* Get the index for a specific timestamp.2821*2822* @param st stream that the timestamp belongs to2823* @param timestamp timestamp to retrieve the index for2824* @param flags if AVSEEK_FLAG_BACKWARD then the returned index will correspond2825* to the timestamp which is <= the requested one, if backward2826* is 0, then it will be >=2827* if AVSEEK_FLAG_ANY seek to any frame, only keyframes otherwise2828* @return < 0 if no such timestamp could be found2829*/2830int av_index_search_timestamp(AVStream *st, int64_t timestamp, int flags);28312832/**2833* Get the index entry count for the given AVStream.2834*2835* @param st stream2836* @return the number of index entries in the stream2837*/2838int avformat_index_get_entries_count(const AVStream *st);28392840/**2841* Get the AVIndexEntry corresponding to the given index.2842*2843* @param st Stream containing the requested AVIndexEntry.2844* @param idx The desired index.2845* @return A pointer to the requested AVIndexEntry if it exists, NULL otherwise.2846*2847* @note The pointer returned by this function is only guaranteed to be valid2848* until any function that takes the stream or the parent AVFormatContext2849* as input argument is called.2850*/2851const AVIndexEntry *avformat_index_get_entry(AVStream *st, int idx);28522853/**2854* Get the AVIndexEntry corresponding to the given timestamp.2855*2856* @param st Stream containing the requested AVIndexEntry.2857* @param wanted_timestamp Timestamp to retrieve the index entry for.2858* @param flags If AVSEEK_FLAG_BACKWARD then the returned entry will correspond2859* to the timestamp which is <= the requested one, if backward2860* is 0, then it will be >=2861* if AVSEEK_FLAG_ANY seek to any frame, only keyframes otherwise.2862* @return A pointer to the requested AVIndexEntry if it exists, NULL otherwise.2863*2864* @note The pointer returned by this function is only guaranteed to be valid2865* until any function that takes the stream or the parent AVFormatContext2866* as input argument is called.2867*/2868const AVIndexEntry *avformat_index_get_entry_from_timestamp(AVStream *st,2869int64_t wanted_timestamp,2870int flags);2871/**2872* Add an index entry into a sorted list. Update the entry if the list2873* already contains it.2874*2875* @param timestamp timestamp in the time base of the given stream2876*/2877int av_add_index_entry(AVStream *st, int64_t pos, int64_t timestamp,2878int size, int distance, int flags);287928802881/**2882* Split a URL string into components.2883*2884* The pointers to buffers for storing individual components may be null,2885* in order to ignore that component. Buffers for components not found are2886* set to empty strings. 
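 *
 * For example (buffer sizes are illustrative):
 * @code
 * char proto[16], auth[128], host[256], path[1024];
 * int port;
 * av_url_split(proto, sizeof(proto), auth, sizeof(auth),
 *              host, sizeof(host), &port, path, sizeof(path),
 *              "rtsp://user:pass@example.com:8554/stream");
 * // proto = "rtsp", auth = "user:pass", host = "example.com",
 * // port = 8554, path = "/stream"
 * @endcode
 *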
If the port is not found, it is set to a negative2887* value.2888*2889* @param proto the buffer for the protocol2890* @param proto_size the size of the proto buffer2891* @param authorization the buffer for the authorization2892* @param authorization_size the size of the authorization buffer2893* @param hostname the buffer for the host name2894* @param hostname_size the size of the hostname buffer2895* @param port_ptr a pointer to store the port number in2896* @param path the buffer for the path2897* @param path_size the size of the path buffer2898* @param url the URL to split2899*/2900void av_url_split(char *proto, int proto_size,2901char *authorization, int authorization_size,2902char *hostname, int hostname_size,2903int *port_ptr,2904char *path, int path_size,2905const char *url);290629072908/**2909* Print detailed information about the input or output format, such as2910* duration, bitrate, streams, container, programs, metadata, side data,2911* codec and time base.2912*2913* @param ic the context to analyze2914* @param index index of the stream to dump information about2915* @param url the URL to print, such as source or destination file2916* @param is_output Select whether the specified context is an input(0) or output(1)2917*/2918void av_dump_format(AVFormatContext *ic,2919int index,2920const char *url,2921int is_output);292229232924#define AV_FRAME_FILENAME_FLAGS_MULTIPLE 1 ///< Allow multiple %d29252926/**2927* Return in 'buf' the path with '%d' replaced by a number.2928*2929* Also handles the '%0nd' format where 'n' is the total number2930* of digits and '%%'.2931*2932* @param buf destination buffer2933* @param buf_size destination buffer size2934* @param path numbered sequence string2935* @param number frame number2936* @param flags AV_FRAME_FILENAME_FLAGS_*2937* @return 0 if OK, -1 on format error2938*/2939int av_get_frame_filename2(char *buf, int buf_size,2940const char *path, int number, int flags);29412942int av_get_frame_filename(char *buf, int buf_size,2943const char *path, int number);29442945/**2946* Check whether filename actually is a numbered sequence generator.2947*2948* @param filename possible numbered sequence string2949* @return 1 if a valid numbered sequence string, 0 otherwise2950*/2951int av_filename_number_test(const char *filename);29522953/**2954* Generate an SDP for an RTP session.2955*2956* Note, this overwrites the id values of AVStreams in the muxer contexts2957* for getting unique dynamic payload types.2958*2959* @param ac array of AVFormatContexts describing the RTP streams. If the2960* array is composed by only one context, such context can contain2961* multiple AVStreams (one AVStream per RTP stream). 
Otherwise,2962* all the contexts in the array (an AVCodecContext per RTP stream)2963* must contain only one AVStream.2964* @param n_files number of AVCodecContexts contained in ac2965* @param buf buffer where the SDP will be stored (must be allocated by2966* the caller)2967* @param size the size of the buffer2968* @return 0 if OK, AVERROR_xxx on error2969*/2970int av_sdp_create(AVFormatContext *ac[], int n_files, char *buf, int size);29712972/**2973* Return a positive value if the given filename has one of the given2974* extensions, 0 otherwise.2975*2976* @param filename file name to check against the given extensions2977* @param extensions a comma-separated list of filename extensions2978*/2979int av_match_ext(const char *filename, const char *extensions);29802981/**2982* Test if the given container can store a codec.2983*2984* @param ofmt container to check for compatibility2985* @param codec_id codec to potentially store in container2986* @param std_compliance standards compliance level, one of FF_COMPLIANCE_*2987*2988* @return 1 if codec with ID codec_id can be stored in ofmt, 0 if it cannot.2989* A negative number if this information is not available.2990*/2991int avformat_query_codec(const AVOutputFormat *ofmt, enum AVCodecID codec_id,2992int std_compliance);29932994/**2995* @defgroup riff_fourcc RIFF FourCCs2996* @{2997* Get the tables mapping RIFF FourCCs to libavcodec AVCodecIDs. The tables are2998* meant to be passed to av_codec_get_id()/av_codec_get_tag() as in the2999* following code:3000* @code3001* uint32_t tag = MKTAG('H', '2', '6', '4');3002* const struct AVCodecTag *table[] = { avformat_get_riff_video_tags(), 0 };3003* enum AVCodecID id = av_codec_get_id(table, tag);3004* @endcode3005*/3006/**3007* @return the table mapping RIFF FourCCs for video to libavcodec AVCodecID.3008*/3009const struct AVCodecTag *avformat_get_riff_video_tags(void);3010/**3011* @return the table mapping RIFF FourCCs for audio to AVCodecID.3012*/3013const struct AVCodecTag *avformat_get_riff_audio_tags(void);3014/**3015* @return the table mapping MOV FourCCs for video to libavcodec AVCodecID.3016*/3017const struct AVCodecTag *avformat_get_mov_video_tags(void);3018/**3019* @return the table mapping MOV FourCCs for audio to AVCodecID.3020*/3021const struct AVCodecTag *avformat_get_mov_audio_tags(void);30223023/**3024* @}3025*/30263027/**3028* Guess the sample aspect ratio of a frame, based on both the stream and the3029* frame aspect ratio.3030*3031* Since the frame aspect ratio is set by the codec but the stream aspect ratio3032* is set by the demuxer, these two may not be equal. This function tries to3033* return the value that you should use if you would like to display the frame.3034*3035* Basic logic is to use the stream aspect ratio if it is set to something sane3036* otherwise use the frame aspect ratio. 
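 *
 * A player might use it to derive the width at which to display a decoded
 * frame (an illustrative sketch, assuming fmt_ctx, st and frame are the
 * format context, the stream and the decoded frame):
 * @code
 * AVRational sar = av_guess_sample_aspect_ratio(fmt_ctx, st, frame);
 * int64_t display_width = frame->width;
 * if (sar.num > 0 && sar.den > 0)
 *     display_width = av_rescale(frame->width, sar.num, sar.den);
 * @endcode
 *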
This way a container setting, which is3037* usually easy to modify can override the coded value in the frames.3038*3039* @param format the format context which the stream is part of3040* @param stream the stream which the frame is part of3041* @param frame the frame with the aspect ratio to be determined3042* @return the guessed (valid) sample_aspect_ratio, 0/1 if no idea3043*/3044AVRational av_guess_sample_aspect_ratio(AVFormatContext *format, AVStream *stream,3045struct AVFrame *frame);30463047/**3048* Guess the frame rate, based on both the container and codec information.3049*3050* @param ctx the format context which the stream is part of3051* @param stream the stream which the frame is part of3052* @param frame the frame for which the frame rate should be determined, may be NULL3053* @return the guessed (valid) frame rate, 0/1 if no idea3054*/3055AVRational av_guess_frame_rate(AVFormatContext *ctx, AVStream *stream,3056struct AVFrame *frame);30573058/**3059* Check if the stream st contained in s is matched by the stream specifier3060* spec.3061*3062* See the "stream specifiers" chapter in the documentation for the syntax3063* of spec.3064*3065* @return >0 if st is matched by spec;3066* 0 if st is not matched by spec;3067* AVERROR code if spec is invalid3068*3069* @note A stream specifier can match several streams in the format.3070*/3071int avformat_match_stream_specifier(AVFormatContext *s, AVStream *st,3072const char *spec);30733074int avformat_queue_attached_pictures(AVFormatContext *s);30753076#if FF_API_INTERNAL_TIMING3077enum AVTimebaseSource {3078AVFMT_TBCF_AUTO = -1,3079AVFMT_TBCF_DECODER,3080AVFMT_TBCF_DEMUXER,3081#if FF_API_R_FRAME_RATE3082AVFMT_TBCF_R_FRAMERATE,3083#endif3084};30853086/**3087* @deprecated do not call this function3088*/3089attribute_deprecated3090int avformat_transfer_internal_stream_timing_info(const AVOutputFormat *ofmt,3091AVStream *ost, const AVStream *ist,3092enum AVTimebaseSource copy_tb);30933094/**3095* @deprecated do not call this function3096*/3097attribute_deprecated3098AVRational av_stream_get_codec_timebase(const AVStream *st);3099#endif310031013102/**3103* @}3104*/31053106#endif /* AVFORMAT_AVFORMAT_H */310731083109