Why doesn't FFmpeg support feature [xyz]?
Because no one has taken on that task yet. FFmpeg development is driven by the tasks that are important to the individual developers. If there is a feature that is important to you, the best way to get it implemented is to undertake the task yourself or to sponsor a developer.
FFmpeg does not support codec XXX. Can you include a Windows DLL loader to support it?
No. Windows DLLs are not portable, and they are bloated and often slow. Moreover, FFmpeg strives to support all codecs natively. A DLL loader is not conducive to that goal.
Even if ffmpeg can read the container format, it may not support all its codecs. Please consult the supported codec list in the ffmpeg documentation.
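If you are unsure whether a particular decoder or demuxer is present in your build, you can ask the ffmpeg binary itself. A quick check, assuming ffmpeg is on your PATH:

# list all decoders and demuxers compiled into this build
ffmpeg -decoders
ffmpeg -demuxers
# show detailed help for one decoder, e.g. h264
ffmpeg -h decoder=h264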
Which codecs are supported by Windows?
Windows does not support standard formats like MPEG very well, unless you install some additional codecs.
The following list of video codecs should work on most Windows systems:
- msmpeg4v2: .avi/.asf
- msmpeg4: .asf only
- wmv1: .asf only
- wmv2: .asf only
- mpeg4: only if you have some MPEG-4 codec like ffdshow or Xvid installed
- mpeg1video: .mpg only
Note, ASF files often have .wmv or .wma extensions in Windows. It should also be mentioned that Microsoft claims a patent on the ASF format, and may sue or threaten users who create ASF files with non-Microsoft software. It is strongly advised to avoid ASF where possible.
The following list of audio codecs should work on most Windows systems:
- pcm_s16le (uncompressed PCM): always
- libmp3lame (MP3): only if some MP3 codec like LAME is installed
error: can't find a register in class 'GENERAL_REGS' while reloading 'asm'
This is a bug in gcc. Do not report it to us. Instead, please report it to the gcc developers. Note that we will not add workarounds for gcc bugs.
Also note that (some of) the gcc developers believe this is not a bug or not a bug they should fix: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11203. Then again, some of them do not know the difference between an undecidable problem and an NP-hard problem...
I have installed this library with my distro's package manager. Why does configure not see it?
Distributions usually split libraries into several packages. The main package contains the files necessary to run programs that use the library. The development package contains the files necessary to build programs that use the library. Sometimes, documentation and/or data are in a separate package too.
To build FFmpeg, you need to install the development package. It is usually called libfoo-dev or libfoo-devel. You can remove it after the build is finished, but be sure to keep the main package.
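For example, on a Debian or Ubuntu system the two packages for libvorbis might look like this (the exact package names vary between distributions and are shown here only as an illustration):

# runtime package, needed to run programs linked against libvorbis
sudo apt-get install libvorbis0a
# development package, needed to build FFmpeg with libvorbis support
sudo apt-get install libvorbis-dev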
How do I make pkg-config find my libraries?
Somewhere along with your libraries, there is a .pc file (or several) in a pkgconfig directory. You need to set environment variables to point pkg-config to these files.
If you need to add directories to pkg-config's search list (typical use case: library installed separately), add them to $PKG_CONFIG_PATH:
export PKG_CONFIG_PATH=/opt/x264/lib/pkgconfig:/opt/opus/lib/pkgconfig
If you need to replace pkg-config's search list (typical use case: cross-compiling), set it in $PKG_CONFIG_LIBDIR:
export PKG_CONFIG_LIBDIR=/home/me/cross/usr/lib/pkgconfig:/home/me/cross/usr/local/lib/pkgconfig
If you need to know the library's internal dependencies (typical use case: static linking), add the --static option to pkg-config:
./configure --pkg-config-flags=--static
How do I use pkg-config when cross-compiling?
The best way is to install pkg-config in your cross-compilation environment. It will automatically use the cross-compilation libraries.
You can also use pkg-config from the host environment by explicitly specifying --pkg-config=pkg-config to configure. In that case, you must point pkg-config to the correct directories using the PKG_CONFIG_LIBDIR environment variable, as explained in the previous entry.
As an intermediate solution, you can place in your cross-compilation environment a script that calls the host pkg-config with PKG_CONFIG_LIBDIR set. That script can look like this:
#!/bin/sh
PKG_CONFIG_LIBDIR=/path/to/cross/lib/pkgconfig
export PKG_CONFIG_LIBDIR
exec /usr/bin/pkg-config "$@"
ffmpeg does not work; what is wrong?
Try a make distclean in the ffmpeg source directory before the build. If this does not help, see the bug reporting instructions (http://ffmpeg.org/bugreports.html).
How do I encode single pictures into movies?
First, rename your pictures to follow a numerical sequence. For example, img1.jpg, img2.jpg, img3.jpg, ... Then you may run:
ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg
Notice that ‘%d’ is replaced by the image number.
img%03d.jpg means the sequence img001.jpg, img002.jpg, etc.
Use the -start_number option to declare a starting number for the sequence. This is useful if your sequence does not start with img001.jpg but is still in a numerical order. The following example will start with img100.jpg:
ffmpeg -f image2 -start_number 100 -i img%d.jpg /tmp/a.mpg
If you have a large number of pictures to rename, you can use the following command to ease the burden. The command, using Bourne shell syntax, symbolically links all files in the current directory that match *jpg to the /tmp directory in the sequence of img001.jpg, img002.jpg and so on.
x=1; for i in *jpg; do counter=$(printf %03d $x); ln -s "$i" /tmp/img"$counter".jpg; x=$(($x+1)); done
If you want to sequence them by oldest modified first, substitute $(ls -r -t *jpg) in place of *jpg.
Then run:
ffmpeg -f image2 -i /tmp/img%03d.jpg /tmp/a.mpg
The same logic is used for any image format that ffmpeg reads.
You can also use cat to pipe images to ffmpeg:
cat *.jpg | ffmpeg -f image2pipe -c:v mjpeg -i - output.mpg
How do I encode a movie to single pictures?
Use:
ffmpeg -i movie.mpg movie%d.jpg
The movie.mpg used as input will be converted to movie1.jpg, movie2.jpg, etc...
Instead of relying on file format self-recognition, you may also select the image codec explicitly (for example -c:v ppm, -c:v png, or -c:v mjpeg) to force the encoding.
Applying that to the previous example:
ffmpeg -i movie.mpg -f image2 -c:v mjpeg menu%d.jpg
Beware that there is no "jpeg" codec. Use "mjpeg" instead.
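If you do not want every single frame, you can insert the fps filter to pick frames at a fixed rate. A sketch that extracts one image per second (the rate and file names are arbitrary):

ffmpeg -i movie.mpg -vf fps=1 -f image2 thumb%03d.jpg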
Why do I see a slight quality degradation with multithreaded MPEG* encoding?
For multithreaded MPEG* encoding, the encoded slices must be independent, otherwise thread n would practically have to wait for n-1 to finish, so it is expected that there is a small reduction of quality. This is not a bug.
How can I read from the standard input or write to the standard output?
Use - as the file name.
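For example, to read a movie from standard input and write the result to standard output, force the output format with -f, since ffmpeg cannot guess it without a file extension. A sketch using MPEG-TS because it can be written to a non-seekable pipe (codecs chosen only for illustration):

cat input.avi | ffmpeg -i - -c:v mpeg2video -c:a mp2 -f mpegts - > output.ts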
-f jpeg doesn't work.
Try '-f image2 test%d.jpg'.
Why can I not change the frame rate?
Some codecs, like MPEG-1/2, only allow a small number of fixed frame rates. Choose a different codec with the -c:v command line option.
How do I encode Xvid or DivX video with ffmpeg?
Both Xvid and DivX (version 4+) are implementations of the ISO MPEG-4 standard (note that there are many other coding formats that use this same standard). Thus, use '-c:v mpeg4' to encode in these formats. The default fourcc stored in an MPEG-4-coded file will be 'FMP4'. If you want a different fourcc, use the '-vtag' option. E.g., '-vtag xvid' will force the fourcc 'xvid' to be stored as the video fourcc rather than the default.
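Putting that together, a minimal command line could look like this (the bitrate is an arbitrary illustration):

# encode ISO MPEG-4 video and store the 'xvid' fourcc
ffmpeg -i input.avi -c:v mpeg4 -vtag xvid -b:v 1000k output.avi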
Which are good parameters for encoding high quality MPEG-4?
'-mbd rd -flags +mv4+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 1/2', things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd'.
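The '-pass 1/2' part means running the encoder twice. Wrapped into complete commands, a two-pass run could look roughly like the sketch below (the input/output names and the discarded first-pass output are only illustrative):

# first pass: gather statistics, discard the encoded output
ffmpeg -y -i input.avi -c:v mpeg4 -mbd rd -flags +mv4+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 1 -an -f null /dev/null
# second pass: encode the real file using the statistics
ffmpeg -i input.avi -c:v mpeg4 -mbd rd -flags +mv4+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 2 output.avi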
Which are good parameters for encoding high quality MPEG-1/MPEG-2?
'-mbd rd -trellis 2 -cmp 2 -subcmp 2 -g 100 -pass 1/2', but beware that '-g 100' might cause problems with some decoders. Things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd'.
Interlaced video looks very bad when encoded with ffmpeg, what is wrong?
You should use '-flags +ilme+ildct' and maybe '-flags +alt' for interlaced material, and try '-top 0/1' if the result looks really messed up.
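For example, when keeping interlaced material interlaced while encoding to MPEG-2, the flags could be combined like this (a sketch; the bitrate and field order are placeholders you will need to adjust):

ffmpeg -i interlaced_input.mpg -c:v mpeg2video -flags +ilme+ildct -top 1 -b:v 5000k output.mpg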
If you have built FFmpeg with ./configure --enable-avisynth (only possible on MinGW/Cygwin platforms), then you may use any file that DirectShow can read as input.
Just create an "input.avs" text file with this single line ...
DirectShowSource("C:\path to your file\yourfile.asf")
... and then feed that text file to ffmpeg:
ffmpeg -i input.avs
For ANY other help on AviSynth, please visit the AviSynth homepage.
To "join" video files is quite ambiguous. The following list explains the different kinds of "joining" and points out how those are addressed in FFmpeg. To join video files may mean:
amerge
filter.
pan
filter to mix
the channels at will.
overlay
video filter.
How can I concatenate video files?
There are several solutions, depending on the exact circumstances.
FFmpeg has a concat filter designed specifically for that, with examples in the documentation. This operation is recommended if you need to re-encode.
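For instance, to concatenate two files that each have one video and one audio stream while re-encoding, the filter can be used roughly like this (the stream layout and the output codec defaults are assumptions about the inputs):

ffmpeg -i input1.avi -i input2.avi \
  -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" output.avi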
FFmpeg has a concat demuxer which you can use when you want to avoid a re-encode and your format doesn't support file level concatenation.
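A typical use of the demuxer is to list the inputs in a small text file and stream-copy them. A sketch (the list file name is arbitrary; -safe 0 is only needed for absolute or otherwise "unsafe" paths):

# mylist.txt contains, one entry per line:
#   file 'part1.mp4'
#   file 'part2.mp4'
ffmpeg -f concat -safe 0 -i mylist.txt -c copy output.mp4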
FFmpeg has a concat protocol designed specifically for that, with examples in the documentation.
A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow one to concatenate video by merely concatenating the files containing them.
Hence you may concatenate your multimedia files by first transcoding them to these privileged formats, then using the humble cat command (or the equally humble copy under Windows), and finally transcoding back to your format of choice.
ffmpeg -i input1.avi -qscale:v 1 intermediate1.mpg
ffmpeg -i input2.avi -qscale:v 1 intermediate2.mpg
cat intermediate1.mpg intermediate2.mpg > intermediate_all.mpg
ffmpeg -i intermediate_all.mpg -qscale:v 2 output.avi
Additionally, you can use the concat protocol instead of cat or copy, which will avoid the creation of a potentially huge intermediate file.
ffmpeg -i input1.avi -qscale:v 1 intermediate1.mpg
ffmpeg -i input2.avi -qscale:v 1 intermediate2.mpg
ffmpeg -i concat:"intermediate1.mpg|intermediate2.mpg" -c copy intermediate_all.mpg
ffmpeg -i intermediate_all.mpg -qscale:v 2 output.avi
Note that you may need to escape the character "|" which is special for many shells.
Another option is to use named pipes, should your platform support them:
mkfifo intermediate1.mpg
mkfifo intermediate2.mpg
ffmpeg -i input1.avi -qscale:v 1 -y intermediate1.mpg < /dev/null &
ffmpeg -i input2.avi -qscale:v 1 -y intermediate2.mpg < /dev/null &
cat intermediate1.mpg intermediate2.mpg |\
ffmpeg -f mpeg -i - -c:v mpeg4 -acodec libmp3lame output.avi
Similarly, the yuv4mpegpipe format, and the raw video, raw audio codecs also allow concatenation, and the transcoding step is almost lossless. When using multiple yuv4mpegpipe(s), the first line needs to be discarded from all but the first stream. This can be accomplished by piping through tail as seen below. Note that when piping through tail you must use command grouping, { ;}, to background properly.
For example, let’s say we want to concatenate two FLV files into an output.flv file:
mkfifo temp1.a
mkfifo temp1.v
mkfifo temp2.a
mkfifo temp2.v
mkfifo all.a
mkfifo all.v
ffmpeg -i input1.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp1.a < /dev/null &
ffmpeg -i input2.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp2.a < /dev/null &
ffmpeg -i input1.flv -an -f yuv4mpegpipe - > temp1.v < /dev/null &
{ ffmpeg -i input2.flv -an -f yuv4mpegpipe - < /dev/null | tail -n +2 > temp2.v ; } &
cat temp1.a temp2.a > all.a &
cat temp1.v temp2.v > all.v &
ffmpeg -f u16le -acodec pcm_s16le -ac 2 -ar 44100 -i all.a \
       -f yuv4mpegpipe -i all.v \
       -y output.flv
rm temp[12].[av] all.[av]
Using -f lavfi, audio becomes mono for no apparent reason.
Use -dumpgraph - to find out exactly where the channel layout is lost. Most likely, it is through an auto-inserted aresample. Try to understand why the converting filter was needed at that place. Just before the output is a likely place, as -f lavfi currently only supports packed S16.
Then insert the correct aformat explicitly in the filtergraph, specifying the exact format:
aformat=sample_fmts=s16:channel_layouts=stereo
Why does FFmpeg not see the subtitles in my VOB file?
VOB and a few other formats do not have a global header that describes everything present in the file. Instead, applications are supposed to scan the file to see what it contains. Since VOB files are frequently large, only the beginning is scanned. If the subtitles happen only later in the file, they will not be initially detected.
Some applications, including the ffmpeg command-line tool, can only work with streams that were detected during the initial scan; streams that are detected later are ignored.
The size of the initial scan is controlled by two options: probesize (default ~5 MB) and analyzeduration (default 5,000,000 µs = 5 s). For the subtitle stream to be detected, both values must be large enough.
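Both are input options, so they must be placed before the -i they apply to. For example, to scan roughly the first 100 MB or 200 seconds of a VOB so that late-starting subtitle streams are found (the values are arbitrary, just large enough for most discs):

ffmpeg -probesize 100M -analyzeduration 200M -i input.vob -map 0 -c copy output.mkv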
Why was the ffmpeg -sameq option removed? What to use instead?
The -sameq option meant "same quantizer", and made sense only in a very limited set of cases. Unfortunately, a lot of people mistook it for "same quality" and used it in places where it did not make sense: it had roughly the expected visible effect, but achieved it in a very inefficient way.
Each encoder has its own set of options to set the quality-vs-size balance; use the options for the encoder you are using to set the quality level to a point acceptable for your tastes. The most common options to do that are -qscale and -qmax, but you should peruse the documentation of the encoder you chose.
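For example, with the native MPEG-4 encoder you could replace -sameq with an explicit quality setting like this (the value 3 is only an illustration; lower values mean better quality and larger files):

ffmpeg -i input.avi -c:v mpeg4 -qscale:v 3 -c:a copy output.avi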
I have a stretched video, why does scaling not fix it?
A lot of video codecs and formats can store the aspect ratio of the video: this is the ratio between the width and the height of either the full image (DAR, display aspect ratio) or individual pixels (SAR, sample aspect ratio). For example, EGA screens at resolution 640×350 had 4:3 DAR and 35:48 SAR.
Most still image processing works with square pixels, i.e. 1:1 SAR, but a lot of video standards, especially from the analog-to-digital transition era, use non-square pixels.
Most processing filters in FFmpeg handle the aspect ratio to avoid stretching the image: cropping adjusts the DAR to keep the SAR constant, scaling adjusts the SAR to keep the DAR constant.
If you want to stretch, or "unstretch", the image, you need to override the information with the setdar or setsar filters.
Do not forget to examine carefully the original video to check whether the stretching comes from the image or from the aspect ratio information.
For example, to fix a badly encoded EGA capture, use the following commands, either the first one to upscale to square pixels or the second one to set the correct aspect ratio or the third one to avoid transcoding (may not work depending on the format / codec / player / phase of the moon):
ffmpeg -i ega_screen.nut -vf scale=640:480,setsar=1 ega_screen_scaled.nut
ffmpeg -i ega_screen.nut -vf setdar=4/3 ega_screen_anamorphic.nut
ffmpeg -i ega_screen.nut -aspect 4/3 -c copy ega_screen_overridden.nut
Are there examples illustrating how to use the FFmpeg libraries, particularly libavcodec and libavformat?
Yes. Check the doc/examples directory in the source repository, also available online at: https://github.com/FFmpeg/FFmpeg/tree/master/doc/examples.
Examples are also installed by default, usually in $PREFIX/share/ffmpeg/examples.
Also, you may read the Developers Guide of the FFmpeg documentation. Alternatively, examine the source code for one of the many open source projects that already incorporate FFmpeg, listed on the FFmpeg projects page (https://ffmpeg.org/projects.html).
Can you support my C compiler XXX?
It depends. If your compiler is C99-compliant, then patches to support it are likely to be welcome if they do not pollute the source code with #ifdefs related to the compiler.
Is Microsoft Visual C++ supported?
Yes. Please see the Microsoft Visual C++ section in the FFmpeg documentation.
Can you add automake, libtool or autoconf support?
No. These tools are too bloated and they complicate the build.
Why not rewrite FFmpeg in object-oriented C++?
FFmpeg is already organized in a highly modular manner and does not need to be rewritten in a formal object language. Further, many of the developers favor straight C; it works for them. For more arguments on this matter, read "Programming Religion".
The build process creates ffmpeg_g, ffplay_g, etc. which contain full debug information. Those binaries are stripped to create ffmpeg, ffplay, etc. If you need the debug information, use the *_g versions.
I do not like the LGPL, can I contribute code under the GPL instead?
Yes, as long as the code is optional and can easily and cleanly be placed under #if CONFIG_GPL without breaking anything. So, for example, a new codec or filter would be OK under GPL while a bug fix to LGPL code would not.
FFmpeg builds static libraries by default. In static libraries, dependencies are not handled. That has two consequences. First, you must specify the libraries in dependency order: -lavdevice must come before -lavformat, -lavutil must come after everything else, etc. Second, external libraries that are used in FFmpeg have to be specified too.
An easy way to get the full list of required libraries in dependency order is to use pkg-config.
c99 -o program program.c $(pkg-config --cflags --libs libavformat libavcodec)
See doc/example/Makefile and doc/example/pc-uninstalled for more details.
FFmpeg is a pure C project, so to use the libraries within your C++ application you need to explicitly state that you are using a C library. You can do this by encompassing your FFmpeg includes using extern "C". See http://www.parashift.com/c++-faq-lite/mixing-c-and-cpp.html#faq-32.3.
FFmpeg is a pure C project using C99 math features; in order to enable C++ to use them you have to append -D__STDC_CONSTANT_MACROS to your CXXFLAGS.
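For example, when compiling a C++ source file that includes the FFmpeg headers directly (compiler, file names and pkg-config usage are shown only as an illustration):

g++ -D__STDC_CONSTANT_MACROS $(pkg-config --cflags libavformat libavcodec libavutil) -c myplayer.cpp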
You have to create a custom AVIOContext using avio_alloc_context; see libavformat/aviobuf.c in FFmpeg and libmpdemux/demux_lavf.c in MPlayer or MPlayer2 sources.
Where is the documentation about ffv1, msmpeg4, asv1, 4xm?
See http://www.ffmpeg.org/~michael/
How do I feed H.263-RTP (and other codecs in RTP) to libavcodec?
Even though it is peculiar in being network oriented, RTP is a container like any other. You have to demux RTP before feeding the payload to libavcodec. In this specific case please look at RFC 4629 to see how it should be done.
r_frame_rate is NOT the average frame rate; it is the smallest frame rate that can accurately represent all timestamps. So no, it is not wrong if it is larger than the average! For example, if you have mixed 25 and 30 fps content, then r_frame_rate will be 150 (it is the least common multiple).
If you are looking for the average frame rate, see AVStream.avg_frame_rate.
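You can compare the two values for a given file with ffprobe, for example (the output formatting options are just one way to display them):

ffprobe -v error -select_streams v:0 \
  -show_entries stream=r_frame_rate,avg_frame_rate \
  -of default=noprint_wrappers=1 input.mp4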
Why is make fate not running all tests?
Make sure you have the fate-suite samples and that the SAMPLES Make variable, the FATE_SAMPLES environment variable or the --samples configure option is set to the right path.
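For example, you can download the samples and run the full suite against them in one go (the destination directory is arbitrary):

make fate-rsync SAMPLES=/path/to/fate-suite
make fate SAMPLES=/path/to/fate-suite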
Why is make fate not finding the samples?
Do you happen to have a ~ character in the samples path to indicate a home directory? The value is used in ways where the shell cannot expand it, causing FATE to not find files. Just replace ~ by the full path.