FFmpeg FAQ
Like most open source projects, FFmpeg suffers from a certain lack of manpower. For this reason the developers have to prioritize the work they do, and putting out releases is not at the top of the list; fixing bugs and reviewing patches takes precedence. Please don't complain or request more timely and/or frequent releases unless you are willing to help out creating them.
Nowhere. Upgrade to the latest release or, if there is no recent release, to Subversion HEAD. You could also try to report it. Maybe you will get lucky and become the first person in history to get an answer different from "upgrade to Subversion HEAD".
Because no one has taken on that task yet. FFmpeg development is driven by the tasks that are important to the individual developers. If there is a feature that is important to you, the best way to get it implemented is to undertake the task yourself or sponsor a developer.
No. Windows DLLs are not portable, and they are bloated and often slow. Moreover, FFmpeg strives to support all codecs natively. A DLL loader is not conducive to that goal.
Likely reasons
You may view our mailing lists with a more forum-like look here: http://dir.gmane.org/gmane.comp.video.ffmpeg.user, but, if you post, please remember that our mailing list rules still apply there.
Even if ffmpeg can read the container format, it may not support all its codecs. Please consult the supported codec list in the ffmpeg documentation.
Windows does not support standard formats like MPEG very well, unless you install some additional codecs. The following list of video codecs should work on most Windows systems:
Note that ASF files often have .wmv or .wma extensions on Windows. It should also be mentioned that Microsoft claims a patent on the ASF format, and may sue or threaten users who create ASF files with non-Microsoft software. It is strongly advised to avoid ASF where possible. The following list of audio codecs should work on most Windows systems:
error: can't find a register in class 'GENERAL_REGS' while reloading 'asm'
This is a bug in gcc. Do not report it to us. Instead, please report it to the gcc developers. Note that we will not add workarounds for gcc bugs. Also note that (some of) the gcc developers believe this is not a bug or not a bug they should fix: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11203. Then again, some of them do not know the difference between an undecidable problem and an NP-hard problem...
Try a 'make distclean' in the ffmpeg source directory before the build. If this does not help, see the bug reporting instructions (http://ffmpeg.org/bugreports.html).
First, rename your pictures to follow a numerical sequence. For example, img1.jpg, img2.jpg, img3.jpg,... Then you may run:
ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg
Notice that `%d' is replaced by the image number. `img%03d.jpg' means the sequence `img001.jpg', `img002.jpg', etc... The same logic is used for any image format that ffmpeg reads.
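For example, to read a zero-padded sequence as 12 frames per second input (the frame rate and file names here are just an illustration), you could run:
ffmpeg -f image2 -r 12 -i img%03d.jpg /tmp/out.mpg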
Use:
ffmpeg -i movie.mpg movie%d.jpg
The `movie.mpg' used as input will be converted to `movie1.jpg', `movie2.jpg', etc. Instead of relying on file format self-recognition, you may also specify the image format and codec explicitly (for example with '-f image2 -vcodec mjpeg') to force the encoding. Applying that to the previous example:
ffmpeg -i movie.mpg -f image2 -vcodec mjpeg menu%d.jpg
Beware that there is no "jpeg" codec. Use "mjpeg" instead.
For multithreaded MPEG* encoding, the encoded slices must be independent, otherwise thread n would practically have to wait for n-1 to finish, so it's quite logical that there is a small reduction of quality. This is not a bug.
Use `-' as file name.
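For example, to read from standard input or write to standard output (file names and formats below are only an illustration; when writing to a pipe, pick a streamable format such as MPEG-PS):
cat input.mpg | ffmpeg -i - output.avi
ffmpeg -i input.avi -f mpeg - > output.mpg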
The audio is AC-3 (a.k.a. A/52). AC-3 decoding is an optional component in FFmpeg, as the component that handles AC-3 decoding is currently released under the GPL. Enable AC-3 decoding with './configure --enable-gpl'. Take care: by enabling AC-3, you automatically change the license of libavcodec from LGPL to GPL.
This is a well-known bug in the bt8x8 driver. For 2.4.26 there is a patch at (http://svn.ffmpeg.org/michael/trunk/patches/bttv-420-2.4.26.patch?view=co). This may also apply cleanly to other 2.4-series kernels.
Pass 'combfilter=1 lumafilter=1' to the bttv driver. Note though that 'combfilter=1' will cause somewhat too strong filtering. A fix is to apply (http://svn.ffmpeg.org/michael/trunk/patches/bttv-comb-2.4.26.patch?view=co) or (http://svn.ffmpeg.org/michael/trunk/patches/bttv-comb-2.6.6.patch?view=co) and pass 'combfilter=2'.
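Module parameters like these are normally given when the driver is loaded, e.g. (the values are just an illustration; adapt them to your card):
modprobe bttv combfilter=2 lumafilter=1
or made permanent in /etc/modules.conf or /etc/modprobe.conf, depending on your distribution.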
Try '-f image2 test%d.jpg'.
Some codecs, like MPEG-1/2, only allow a small number of fixed framerates. Choose a different codec with the -vcodec command line option.
Both Xvid and DivX (version 4+) are implementations of the ISO MPEG-4 standard (note that there are many other coding formats that use this same standard). Thus, use '-vcodec mpeg4' to encode in these formats. The default fourcc stored in an MPEG-4-coded file will be 'FMP4'. If you want a different fourcc, use the '-vtag' option. E.g., '-vtag xvid' will force the fourcc 'xvid' to be stored as the video fourcc rather than the default.
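For example, a minimal command that encodes ISO MPEG-4 and stores the 'xvid' fourcc (the file names are only an illustration):
ffmpeg -i input.avi -vcodec mpeg4 -vtag xvid output.avi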
'-mbd rd -flags +4mv+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 1/2'; things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd'.
'-mbd rd -trellis 2 -cmp 2 -subcmp 2 -g 100 -pass 1/2', but beware that '-g 100' might cause problems with some decoders. Things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd'.
You should use '-flags +ilme+ildct' and maybe '-flags +alt' for interlaced material, and try '-top 0/1' if the result looks really messed up.
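As a sketch of how the MPEG-4 options above combine into a two-pass encode (file names, bitrate and GOP size are made up; adjust them to your material):
ffmpeg -i input.avi -vcodec mpeg4 -b 800000 -mbd rd -flags +4mv+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 1 -an -f rawvideo -y /dev/null
ffmpeg -i input.avi -vcodec mpeg4 -b 800000 -mbd rd -flags +4mv+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 2 output.avi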
If you have built FFmpeg with './configure --enable-avisynth' (only possible on MinGW/Cygwin platforms), then you may use any file that DirectShow can read as input. (Be aware that this feature has been recently added, so you will need to help yourself in case of problems.)
Just create an "input.avs" text file with this single line ...
DirectShowSource("C:\path to your file\yourfile.asf")
... and then feed that text file to FFmpeg:
ffmpeg -i input.avs
For ANY other help on Avisynth, please visit http://www.avisynth.org/.
A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow joining video files by merely concatenating them. Hence you may concatenate your multimedia files by first transcoding them to these privileged formats, then using the humble 'cat' command (or the equally humble 'copy' under Windows), and finally transcoding back to your format of choice.
ffmpeg -i input1.avi -sameq intermediate1.mpg
ffmpeg -i input2.avi -sameq intermediate2.mpg
cat intermediate1.mpg intermediate2.mpg > intermediate_all.mpg
ffmpeg -i intermediate_all.mpg -sameq output.avi
Notice that you should either use '-sameq' or set a reasonably high bitrate for your intermediate and output files if you want to preserve video quality.
Also notice that you may avoid the huge intermediate files by taking advantage of named pipes, should your platform support them:
mkfifo intermediate1.mpg
mkfifo intermediate2.mpg
ffmpeg -i input1.avi -sameq -y intermediate1.mpg < /dev/null &
ffmpeg -i input2.avi -sameq -y intermediate2.mpg < /dev/null &
cat intermediate1.mpg intermediate2.mpg |\
ffmpeg -f mpeg -i - -sameq -vcodec mpeg4 -acodec libmp3lame output.avi
Similarly, the yuv4mpegpipe format and the raw video and raw audio codecs also allow concatenation, and the transcoding step is almost lossless. For example, let's say we want to join two FLV files into an output.flv file:
mkfifo temp1.a
mkfifo temp1.v
mkfifo temp2.a
mkfifo temp2.v
mkfifo all.a
mkfifo all.v
ffmpeg -i input1.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp1.a < /dev/null &
ffmpeg -i input2.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp2.a < /dev/null &
ffmpeg -i input1.flv -an -f yuv4mpegpipe - > temp1.v < /dev/null &
ffmpeg -i input2.flv -an -f yuv4mpegpipe - > temp2.v < /dev/null &
cat temp1.a temp2.a > all.a &
cat temp1.v temp2.v > all.v &
ffmpeg -f u16le -acodec pcm_s16le -ac 2 -ar 44100 -i all.a \
  -f yuv4mpegpipe -i all.v \
  -sameq -y output.flv
rm temp[12].[av] all.[av]
Read the MPEG spec about video buffer verifier.
You do not understand what CBR is; please read the MPEG spec. Read about the video buffer verifier and constant bitrate. The one-sentence summary is that there is a buffer and the input rate is constant; the output rate can vary as needed.
To quote the MPEG-2 spec: "There is no way to tell that a bitstream is constant bitrate without examining all of the vbv_delay values and making complicated computations."
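If what you actually want is output with a tightly constrained bitrate, the usual approach is to set the rate-control limits and the VBV buffer size explicitly; a sketch with made-up numbers (the correct values depend on your target device):
ffmpeg -i input.avi -b 1000000 -minrate 1000000 -maxrate 1000000 -bufsize 1835008 output.mpg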
Yes. Read the Developers Guide of the FFmpeg documentation. Alternatively, examine the source code for one of the many open source projects that already incorporate FFmpeg at (projects.html).
It depends. If your compiler is C99-compliant, then patches to support it are likely to be welcome if they do not pollute the source code with #ifdefs related to the compiler.
No. Microsoft Visual C++ is not compliant with the C99 standard and does not, among other things, support the inline assembly used in FFmpeg. If you wish to use MSVC++ for your project then you can link the MSVC++ code with libav* as long as you compile the latter with a working C compiler. For more information, see the Microsoft Visual C++ compatibility section in the FFmpeg documentation. There have been efforts to make FFmpeg compatible with MSVC++ in the past. However, they have all been rejected as too intrusive, especially since MinGW does the job adequately. None of the core developers work with MSVC++ and thus this item is low priority. Should you find the silver bullet that solves this problem, feel free to shoot it at us. We strongly recommend that you move from MSVC++ to the MinGW tools.
Yes, but the Cygwin or MinGW tools must be used to compile FFmpeg. Read the Windows section in the FFmpeg documentation to find more information. To get help and instructions for building FFmpeg under Windows, check out the FFmpeg Windows Help Forum at http://ffmpeg.arrozcru.org/.
No. These tools are too bloated and they complicate the build.
FFmpeg is already organized in a highly modular manner and does not need to be rewritten in a formal object language. Further, many of the developers favor straight C; it works for them. For more arguments on this matter, read "Programming Religion" at (http://www.tux.org/lkml/#s15).
The build process creates ffmpeg_g, ffplay_g, etc. which contain full debug information. Those binaries are stripped to create ffmpeg, ffplay, etc. If you need the debug information, use the *_g versions.
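For example, to get a usable backtrace from a crash under gdb (the command line after --args is just a placeholder for whatever reproduces your problem):
gdb --args ./ffmpeg_g -i input.avi output.avi
then type 'run' and, after the crash, 'bt'.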
Yes, as long as the code is optional and can easily and cleanly be placed under #ifdef CONFIG_GPL without breaking anything. So, for example, a new codec or filter would be OK under the GPL, while a bug fix to LGPL code would not.
Common code is in its own files in libav* and is used by the individual codecs. They will not work without the common parts; you have to compile the whole of libav*. If you wish, you can disable some parts with configure switches. You can also try to hack it and remove more, but if you have problems fixing the resulting compilation failures then you are probably not qualified for this.
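For example, a configure line that keeps only a small subset of codecs might look like this (the exact switch names can vary between versions; check './configure --help'):
./configure --disable-encoders --disable-decoders --enable-encoder=mpeg4 --enable-decoder=mpeg4 --enable-decoder=mp3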
FFmpeg is a pure C project, so to use the libraries within your C++ application you need to explicitly state that you are using a C library. You can do this by encompassing your FFmpeg includes with extern "C". See http://www.parashift.com/c++-faq-lite/mixing-c-and-cpp.html#faq-32.3.
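A minimal sketch of what this looks like in a C++ source file (the particular headers and the av_register_all() call are only an example; include and call whatever parts of libav* you actually use):

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}

int main(void)
{
    av_register_all(); /* register all formats and codecs before using libavformat */
    return 0;
}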
You have to implement a URLProtocol, see libavformat/file.c in FFmpeg and libmpdemux/demux_lavf.c in MPlayer sources.
The standard MSys bash (2.04) is broken. You need to install 2.05 or later.
The standard MSys install doesn't come with pr. You need to get it from the coreutils package.
RTP is a container format like any other; you must first depacketize the codec frames/samples stored in RTP and then feed them to the decoder.
See http://www.iversenit.dk/dev/ffmpeg-headers/
See http://svn.ffmpeg.org/michael/trunk/docs/
Even though it is peculiar in being network-oriented, RTP is a container like any other. You have to demux RTP before feeding the payload to libavcodec. In this specific case please look at RFC 4629 to see how it should be done.
r_frame_rate is NOT the average framerate; it is the smallest framerate that can accurately represent all timestamps. So no, it is not wrong if it is larger than the average! For example, if you have mixed 25 and 30 fps content, then r_frame_rate will be 150 (the least common multiple of 25 and 30).