Recommended Sticky - Do not export compressed from AE


This issue is raised so often here that I feel it deserves its own sticky post. Most professional (or even semi-professional) AE artists render their files uncompressed out of After Effects and use a third-party compression application, such as DivX, Xvid or Sorenson, for delivery files. The reasons for this are numerous, but here are a few obvious ones:

  1. If the compression is just slightly too much for your tastes, you have to completely re-render out of After Effects. If your file took 3 hours to render, that is a lot of wasted time.

  2. If you or your client decide to use a different delivery codec, you have to re-render out of After Effects. See the note above on why this is bad.

  3. Third-party compression applications are designed to do a very good job of crushing file size down while retaining quality. After Effects is better suited for uncompressed, broadcast-quality output, since that is what the application is intended for. Right tool for the job.

  4. Complaints that rendering out of After Effects to an uncompressed file eats up too much hard drive space are a bit silly. You spent how much on After Effects, yet you can’t afford $70 US for a 320GB hard drive?

  5. There are many free (or very cheap) third party compression applications out there. Here is a starter list:

SUPER © - FREE - compresses to just about every Windows and Quicktime format available
Riva FLV Encoder - FREE - Flash video encoder
VirtualDub - FREE - Primarily for AVI files, but works great
STOIK Video Converter - FREE - works with a wide variety of codecs
Quicktime Pro - $29.95 US - Encodes to virtually all Quicktime supported codecs. Requires Flash to be installed to encode to FLV.

I’m sure others will have many apps to add to the list.


Great advice,



Hear, hear.

I’ll shamelessly plug my Quicktime compression tut here:

Quicktime (Pro) Codec Settings

Of course there are endless possibilities depending on needs, but these are the settings I use for downloadable/streamable demo reels etc.

  • Jonas


i would change that “always render uncompressed” to “always render to a lossless codec”

  • here are a few (but not a complete list!):

if it’s QT:

  • use PNG, TGA or TIFF, or, if installed, TechSmith’s Ensharpen

if it’s AVI :

  • HuffYUV 2.1.1… (great and very fast, but has problems with files over 2GB!)

  • AlparySoft… (google it!) Great, free beta

  • MorganMM JPEG2000 has a lossless setting

  • TechSmith codec… initially for screen capture, but has good compression on flat/cel-shaded-style clips; very slow when moving the playhead (e.g. in your editing suite)

  • PicVideo (has “Lossless JPEG” and “Wavelet 2000” codecs)

As for VirtualDub - which doesn’t read WMV or MPEG-2 - look for “VirtualDub-MPEG2”, or simply google “Avery Lee” and “fccHandler” for a build that does all these things :wink:

Sorry Spacefrog, but I completely disagree with your posted suggestions. The point of rendering to an uncompressed format is to have a FULL QUALITY archive master of your work. None of the codec choices you list are even remotely suitable for this task:
- TechSmith Ensharpen - This codec CAN perform similarly to the Animation codec when no gradient information is present (for example, desktop screen captures). But for most motion graphics and ALL video, this codec is not a good choice at all. It is also non-standard, meaning most apps will not read it properly, and should you need to share the file with other studios or artists there is a 98% chance they will not have this codec.
- HuffYUV, AlparySoft, etc. - None of these are good codecs for any portion of the video production process, much less archival or masters. The main reason is that they are completely non-standard, and probably 5% of all computer users, even in this industry, will have them installed. People need to accept that, while there are 8,000 boutique codecs out there, the vast majority are NOT suitable for video production! Go ask 20 studios if they use these codecs, and I bet you not one of them will say yes.
- JPEG2000 - A decent option, but again it isn't standard yet, and many apps have issues working with this codec, making it a poor choice due to its limitations. This may change in the coming years, but for now I wouldn't use it for anything.
- VirtualDub-MPEG2 - MPEG-2 is massively compressed and is probably one of the worst choices available for your masters.

A lot of people out there who are used to ripping DVDs and the like have become accustomed to these wacky codecs and workflows, but they MUST be abandoned if you plan to be a video artist. In professional video production, stability and quality are the two chief concerns, and these boutique codecs offer neither. I assure you, the first time you pass off a file to a client in DivX, HuffYUV etc. you will get a look that makes it clear what they are thinking of you - “This person is an amateur.”


Nope, absolutely not. Your thinking is crooked in that it would only have real value if we were strictly sticking to an 8-bpc RGB / 10-bpc YUV workflow. But hey, this is 2007, who does that anymore? Even so, it becomes a question of which quantization and color space transforms codecs employ, and neither of the choices you provided gives any clear and predictable results in that regard. The only reliable formats that can truly count as “lossless” are

a) uncompressed image files that
b) maintain the native color space,
c) do not impose wrong Gamma profiles,
d) can hold additional color management information/ color profiles and
e) ideally support larger bit-depths.

From within AE this limits it pretty much to TIF, PSD and OpenEXR. You are mostly proposing non-standardized formats and procedures, which is absolutely wrong and terrible to do. Can you even remotely assume that, say, HuffYUV will have a version compatible with the then-successor of Windows Vista in 6 or 7 years? You can’t, and therefore anyone would be ill-advised to follow your procedures.



I thought you were talking about rendering here and not archiving,
the talk was about multi-format conversions too… not about delivering things to a different studio or whatever…

if you keep all your AFX rendered output for 7 years… well, then you’d better find formats that you will still be able to use then, and if you need to, keep an old rusty XP workstation…

i always thought only the source material, footage and project files are worth archiving… and AFX output mostly as an intermediate… at least in the long term…

PNG, TIFF (not JPG, of course) and TGA are 100% lossless - all can be encapsulated in Quicktime…

you are right, HuffYUV does color space conversions,

MJPEG2000 lossless uses wavelet encoding and does it losslessly…

I mentioned VirtualDub-MPEG2 because it’s listed in the initial post as a tool to do conversions.
Of course it’s not a codec - it was only mentioned as a tool; MPEG-2 is lossy, of course

but i see - uncompressed everything is far better - and i’m a wacky DVD-ripping loser :wink:

That's mostly how I see it too. When I'm talking about Masters and Archives, I am including the renders out of After Effects that [i]become source material for the edit[/i], etc. since they must be of the highest quality.

I agree, but the issue presented here is that “lossless” does not mean “no loss.” There isn’t really such a thing as 100% lossless. That would mean they 100% throw away only some information, which doesn’t make much sense. Lossless codecs throw away as little information as possible, but they ARE still throwing information away.

It really comes down to the source material you are using. An apt comparison is GIF vs. JPEG. GIFs are great for encoding blocky images, like solid colors or aliased text. Encode a photo or anything with gradients in GIF format and they will look horrible and the file will still be huge. Solids in JPEG format inherit a lot of pixelation and don’t look as clean as they could, but JPEG works great on photos and other gradated material.

In the video world, lossless codecs are like GIFs - they work well on solid blocks of color and mildly gradated material, but are throwing info away in favor of file size. Uncompressed works great on video footage.

A good test is to create a white solid and output a one minute version of it in both uncompressed and Animation codec. They will look identical, but the Animation codec version will be far smaller. Now run some video footage through the same test. The file sizes will be almost the same, but the Animation codec footage will have tossed valuable data in the process.
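The intuition behind this white-solid test can be sketched with Python's zlib (its DEFLATE compressor stands in for a lossless video codec here; the frame size and byte patterns are illustrative, not actual AE output):

```python
import os
import zlib

FRAME = 1920 * 1080 * 3            # one 8-bit RGB HD frame, in bytes

solid = b"\xff" * FRAME            # "white solid" stand-in: every byte identical
noise = os.urandom(FRAME)          # "video footage" stand-in: nearly random bytes

# Uniform data collapses to almost nothing; random data barely shrinks at all.
print(len(zlib.compress(solid)) / FRAME)   # tiny fraction of the original size
print(len(zlib.compress(noise)) / FRAME)   # roughly 1.0 - no redundancy to exploit
```

The visual content itself is irrelevant to the compressor; what matters is redundancy, which is why the solid shrinks enormously while footage-like data does not.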

Uncompressed everything IS far better, I assure you. It's the way all modern studios work. As for being a "wacky DVD ripping loser" I hope my comment upstream didn't offend you and I SURE didn't mean to imply anyone is a loser. It just seems that a lot of people here who have codec problems originated in that crowd, given the tools they were familiar with such as VirtualDub. It was an observation, not an assault.


I agree, but the issue presented here is that “lossless” does not mean “no loss.” There isn’t really such a thing as 100% lossless. That would mean they 100% throw away only some information, which doesn’t make much sense. Lossless codecs throw away as little information as possible, but they ARE still throwing information away

this is wrong !

lossless is lossless - the only case where they are NOT lossless is if they do some lossy color space conversion (e.g. YUV-type codecs) during the compression - if they don’t do this, and use a lossless compression algorithm, they are 100% lossless. Of course, if your output uses 32 bits per color channel, you have to use a format that supports this…

eg. PNG is 100% lossless - so is TGA, and so are most TIFF compression options (LZW etc…)
and many other lossless codecs are 100% lossless - how do you expect ZIP, RAR and so on to be able to compress program code, where a single switched bit would cause a crash?

JPG or MPEG, on the other hand, is LOSSY - because it uses the DCT (Discrete Cosine Transform) and then data reduction in the frequency band -> which means throwing away the least significant data
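For contrast, that lossy frequency-band reduction can be sketched with a naive 1-D DCT (a toy illustration of the JPEG idea only; real encoders use 8×8 blocks, quantization tables and entropy coding):

```python
import math

def dct(x):
    """Naive (unnormalized) DCT-II of a 1-D signal."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N))
            for k in range(N)]

def idct(X):
    """Inverse transform (scaled DCT-III)."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                            for k in range(1, N))) * 2 / N
            for n in range(N)]

row = [52, 55, 61, 66, 70, 61, 64, 73]     # one row of pixel values
coeffs = dct(row)

# "Data reduction in the frequency band": discard the high-frequency half.
quantized = coeffs[:4] + [0.0] * 4
restored = idct(quantized)
# restored is visually close to row, but no longer identical -> LOSSY
```

Without the discard step, `idct(dct(row))` reproduces the row exactly (up to float rounding); the loss comes entirely from throwing frequency data away.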

this is Shannon’s information theory at work - information entropy - if you get 1000 zeros in a row, don’t store 1000 zeros, but store the value zero and the count of zeros instead - and you’ve got a compression ratio of 1000:2 - easy, isn’t it?

and NO data loss happens… every bit of output is the same as the input
You can get a GOOD compression ratio without missing a single bit of information…
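The zeros example above can be written out as a toy run-length coder (an illustration of the principle, not any real codec):

```python
def rle_encode(data: bytes) -> list:
    """Collapse runs of identical bytes into (value, count) pairs."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)
        else:
            runs.append((b, 1))
    return runs

def rle_decode(runs) -> bytes:
    """Expand (value, count) pairs back into the original bytes."""
    return b"".join(bytes([value]) * count for value, count in runs)

data = bytes(1000)                 # 1000 zero bytes in a row
runs = rle_encode(data)            # [(0, 1000)] - one value, one count: ~1000:2
assert rle_decode(runs) == data    # every bit of output equals the input
```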

and i never was talking about things like the crappy old Quicktime “Animation” codec - holy lord …
and the thing about me being a loser - it was rather a joke …
but it seems that because there are these kinds of people out there (using DVD ripping, all codecs included…) who do not understand what codecs do to the material, people here in this thread seem to look at codecs as some kind of sickness in general…

Of course i understand that in a big studio environment the uncoordinated use of codecs is a bad thing - but since i’m a freelancer, doing many things for myself on my workstation and mostly just delivering end products to the customer or a third party at the end of the project, performance is far better when you do not have to shuffle hundreds of gigs around every time you render little changes…

throw several nested layers into After Effects - each layer loads its content from disk -> it’s far better to have small file sizes there if you haven’t got a whole RAID dedicated to every layer…


My post was HORRIBLY worded; I was distracted with work when I wrote it. I didn’t mean to imply that there was significant visual loss of information with lossless codecs, I only meant what you have said here (much more elegantly than I). The point of lossless codecs is to toss information where they can without degradation to the image, but there IS loss of information happening. On this issue we are in agreement.

The Animation codec is just as good as any of the other lossless formats from everything I’ve seen and experienced. It shows no visual loss whatsoever in “difference mode” tests.

But, as stated in my original post, quality loss is not the only issue here; there is still workflow and interoperability. For example, while PNG is lossless and files are fairly small (I use it for 3D renders all the time), it is not a GREAT option simply because moving it between Mac and PC causes gamma shifts thanks to its seemingly broken method of using embedded color profiles. This is a deal killer should you EVER need to share the file between platforms. TIFF sequences are great, and are a recommended file format for visual effects work and transferring throughout the production pipeline. They can also be pulled into Quicktime Pro to create a TIFF-encoded Quicktime, which is much easier to handle than 10,000 frames.

I understand that you are a freelancer and work your own way until you ship out a file to the client. Ultimately, that is the motive for MOST people posting here who use a litany of odd codecs and workflows. The problem is, are you always going to be a freelancer? Are you freelancing for studios that will expect to receive all of your source materials at the end of the job, and may be dismayed to find the files in various non-standard codecs, etc.? And finally, when the majority of studios gravitate towards certain file types and workflows, it can be assumed that there is a good reason for that, primarily the combined decades of experience driving their decision. That experience should be heeded in my opinion, regardless of where, when or how you work.


I just want to say I have on many short film and commercial projects, including all my own animations and showreels, used Quicktime Animation at 100% as master, and have been very happy with it. I have once had an issue with scrambled frames, but setting the keyframe count to 1 instead of the default 15 resolved it, so that’s what I work with now.

On my latest 15 minute short film project, which I am onlining as we speak, I am using a TIFF sequence as master, but nearing the end, I have discovered that my external disk drive simply can’t contain that many files! I know, I know, ZIP them. But that means I won’t be able to read them directly off the disk as a file sequence.

  • Jonas


what about Canopus ProCoder? would that count as good converter software?


Procoder is a highly recommended conversion tool. I use it frequently.


I think you have this wrong. In fact, I don’t think spacefrog agrees with you at all. That’s not the point of lossless at all. The term ‘lossless’ can only be used when the exact, original data can be reproduced from the compressed version. Bit for bit. Which translates to ZERO data loss/degradation.


Well, I’d say “lossless” was “no loss”, too. But I’m open to new ideas. :slight_smile:

Beenyweenies, what do you mean when you say “without degradation” at the same time as “there is loss of information”?

Exactly what information is lost if there is no degradation?

  • Jonas

avinashlobo, you are assuming that data loss and visual degradation are tied together, which is the wrong way to think with regard to codecs - their [i]entire existence[/i] is to remove data where possible while degrading the image as little as possible. Lossless codecs are no exception; the only difference is they will not toss information that will visually degrade the image (as I explain below). You are also overlooking one major thing - lossless codecs actually claim to be [u][i][b]visually lossless[/b][/i][/u], not 100% bit-for-bit dupes of the original file. That would be an uncompressed file, NOT lossless.

Think about it this way. If you render a one-minute HD-sized video of nothing but a black solid out to the Quicktime Animation codec, the file will be very small. Render one minute of video footage at HD size to the Animation codec, and the file will be utterly huge.

The reason is that, with the black solid video, the codec can toss pretty much everything except the information needed to create a single black pixel, which is then used to describe the entire frame since it's made up entirely of black pixels with no variation. Render the same movie but with three different color solids, and the codec only needs to visually describe three different colored pixels. With the video footage render, almost every pixel is unique (especially from frame to frame) and therefore the codec cannot toss information without altering pixels (resulting in [i]visual loss[/i] of quality), so the file is very large as a result.

Therefore, [b]it IS tossing information[/b], but only where it can without [i][b]visually[/b][/i] degrading the image. If you compare the Animation codec version to the original, they are visually identical (and placing one over the other in "Difference" mode within AE would confirm this) BUT information was tossed. Were it not for lossless codecs operating in this way there would be no need for the lossless algo, only uncompressed and lossy codecs!

VISUAL LOSS is the keyword here - in the technical documentation, lossless codecs only claim to be visually lossless, which is a massive distinction from a bit-for-bit duplicate of the original.


Ah, now I see the misunderstanding as clear as day. :slight_smile:

Well, that is EXACTLY the way to think about data compression (also with regards to codecs).

Beenyweenies, you are incredibly clever about codecs, but you’ve got the wrong idea about the meaning of “lossless compression”.

“Lossless” means that all original data can be restored. Just like when you compress to a ZIP file - you get a smaller file, but you can unpack ALL ORIGINAL DATA without loss.

When talking about video codecs, the data is pixel data. And if a file is compressed with NO VISUAL DEGRADATION, that means ALL PIXEL DATA was restored. And that means NO LOSS.

Your black-image video file is a perfect example. A “lossless” compression WILL result in a much smaller file, but that doesn’t mean any information was lost; it just means the information was re-organized in a more optimal way, resulting in a smaller file, which is exactly the purpose of lossless compression.

I.e., instead of saying “black pixel” 1,000 times, it’s shorter to say “1,000 black pixels”. That’s lossless image compression in a nutshell.
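The nutshell example above can be demonstrated with Python's zlib, which implements DEFLATE, the same algorithm family ZIP uses (a toy check, not a video workflow):

```python
import zlib

black_pixels = bytes(3 * 1000)             # 1,000 RGB black pixels (all zeros)
compressed = zlib.compress(black_pixels)

print(len(black_pixels), len(compressed))  # 3,000 bytes shrink to a handful
assert zlib.decompress(compressed) == black_pixels  # ALL original data restored
```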

  • Jonas


Yeah, Jonas is right. Lossless works like a zip or rar file. There is NO tossing of data. To put it in a very, very crude way - there is substitution of repetitive data with smaller data (refer to Jonas’ example in his post). When required, the exact, original data can be reproduced from the substitute data. And it is perfectly exact. There is no shifting of bits or any alteration to the original data.

What you, Brendon, are describing is, in fact, high-quality LOSSY compression, where there is no apparent VISUAL degradation, but there is still data degradation. For example, save a file out as a maximum quality JPEG and keep it next to the original uncompressed version. The JPEG will be a fraction of the size, but there will be no VISUAL distinction between the two, unless you’re gifted with superhuman eyes.


Sorry, but you guys are both stuck on the idea that lossless is some bit-for-bit exact replica, when it just isn’t. The point that data loss and visual loss aren’t tied together is also getting lost in the mix here.

As was mentioned, if you replace 1,000 instances of the words "black pixel" with a single instance of "1,000 black pixels" you have just thrown out 999 entries and thus reduced the size of the file without impacting the resulting image. [i]It is tossing data, 999 entries to be exact, otherwise the file would not be smaller.[/i] It's not like codecs are some magic suitcase that stuff the exact same bits into a smaller package; they merely consolidate the information and toss the redundant stuff. The resulting file is [b]not a bit-for-bit dupe[/b], it has been completely reorganized and redefined; the codec just knows how to read this new file and produce the same visual result. Because of this, the RAR/ZIP analogy is incorrect. It is more akin to vector-based artwork, in which the program stores vector data rather than pixel data to reduce file size.

Visual loss is the shifting of pixel values to brute-force match them to the definition “black pixel” and thus enable the codec to roll them into the compression scheme, “1,000 black pixels.” Since you can throw away data without resorting to this, DATA loss (or “restructuring” if you prefer) and VISUAL loss are not tied together as I stated.


Hi again. :slight_smile:

No, obviously the COMPRESSED file is not a bit-for-bit replica, we never said that.

No, since the INFORMATION (data) about the 1,000 black pixels is still 100% intact, nothing is tossed or lost, it’s just being re-described in a shorter way, which is exactly the way ZIP/RAR etc. compression works.

Tossing data would be if you had a white pixel in there, and you “left it out”, and kept saying “1,000 black pixels” instead of “500 black pixels, 1 white pixel, 499 black pixels”. THAT would be toss & loss.

“Loss” in terms of compression does NOT mean “a smaller file size”, because obviously that’s what “compression” means. “Loss” only refers to if you lose something in the compression or not, ie, if you are unable to restore all original data or not. If not, you have LOST data, and the compression is LOSSY.

Lossless data compression is a class of data compression algorithms that allows the exact original data to be reconstructed from the compressed data
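The white-pixel scenario above can be checked the same way with zlib (DEFLATE, as in ZIP; a toy illustration of exact reconstruction):

```python
import zlib

# "500 black pixels, 1 white pixel, 499 black pixels" (one 8-bit channel)
original = bytes(500) + b"\xff" + bytes(499)

restored = zlib.decompress(zlib.compress(original))

assert restored == original      # the lone white pixel is NOT left out
assert restored[500] == 0xff     # it comes back exactly where it was
assert len(zlib.compress(original)) < len(original)  # and the file still shrinks
```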


  • Jonas