8bit JPEG in 2017 ... WHY?!

  01 January 2018
No, but you were basically describing your TV production standards as if they were a 'per country' thing.
Yes, there may be very little film production in your country, but there is no technical reason why you couldn't do it if you wanted to.
It's not forbidden by law or anything.

My point was that film production, and those who work in it, won't have much use for 8-bit JPEGs and would therefore agree with the OP.
If somebody gives me an asset with JPEGs in it, it had better not be the final publish.
  01 January 2018
Originally Posted by luisRiera: MP3 is a compressed format... sometimes it's better to work with what's more practical to use, because otherwise we would still be using phonograph records (many claim the quality is better) and a lot of storage space.

This is a whole different topic and not a good comparison.
The main reason phonograph records sound so different (and some people prefer that sound) is that they are mastered very differently.
For example, the low frequencies (aka bass) are converted to mono, and the dynamic range is different... there's also that 'lo-fi' sound from the crackle caused by dust particles hitting the needle, etc. Not to mention the placebo effect of people simply liking vinyl more because they think it's 'cooler'.

A better comparison would be WAV/FLAC vs MP3/AAC, and 8-bit vs 16-bit vs 24-bit vs 32-bit.
No professional music producer will use MP3/AAC samples instead of FLAC/WAV ones in their productions.
If they know what they're doing, they will all tell you: the higher the sample rate and the higher the bit depth, the better.
I know a lot of sample packs out there are only 44.1 kHz 16-bit, but that's just not good enough (unless you aren't going to process/adjust them at all, including never changing the volume/gain? riiight...). The good ones are 96 kHz 24-bit.
If you finish your album and render it to MP3/AAC before sending it to a mastering house, I can guarantee you will get a reply demanding lossless audio, preferably 96 kHz+ at 24-bit+.
And sure, they will supply you with a version mastered for MP3/AAC use (for streaming and/or portable audio) if you ask... but they will still demand a lossless HQ source, or it'll end up sounding bad.

MP3/AAC are designed around psychoacoustic modeling. That basically means they cut out what the human ear doesn't notice anyway (or, at low bitrates, doesn't notice as much as other parts).

The same applies to JPEG.
JPEG by default follows the same philosophy, but applied to the human eye and visual cortex.
The idea is that it cuts away information the human eye won't notice, or won't notice as much.
For example, JPEG stores the color (chroma) information at half the resolution, because we don't notice it as much as the luminance resolution.
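To get a rough sense of the numbers, here's a minimal sketch (illustrative arithmetic only, not real JPEG code; `plane_samples` is a made-up helper) of how many samples 4:2:0 chroma subsampling keeps compared to 4:4:4 for a 1080p frame:

```python
# Illustrative sketch: sample counts for 4:4:4 vs 4:2:0 chroma subsampling.
# Real JPEG adds DCT + quantisation on top; this only counts stored samples.

def plane_samples(width, height, subsampling):
    """Return (luma, chroma) sample counts for one frame."""
    luma = width * height
    if subsampling == "4:4:4":      # chroma at full resolution (two planes)
        chroma = 2 * width * height
    elif subsampling == "4:2:0":    # chroma halved both horizontally and vertically
        chroma = 2 * (width // 2) * (height // 2)
    else:
        raise ValueError(subsampling)
    return luma, chroma

full = sum(plane_samples(1920, 1080, "4:4:4"))
sub = sum(plane_samples(1920, 1080, "4:2:0"))
print(full, sub, sub / full)  # 4:2:0 stores exactly half the samples of 4:4:4
```

Half the samples thrown away before the lossy DCT stage even starts, which is why heavily saturated edges (red text on blue, for example) smear so badly.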


[image: JPEG @ 100% quality, chroma subsampling comparison]

And there are many more tricks like this, which introduce all kinds of artifacts... all to keep the file size as small as possible.
That's OK if you're going to use it for a web version of your final output (though JPEG could use a replacement; it's getting old and no longer up to modern standards).
But it's far from ideal for final master quality or for texture/masking work, and even less so when the images will be altered/processed further.
They lack all the information that the JPEG compression cut out, and suddenly those 'holes' start to show up as all kinds of ugly artifacts and other problems.

Just like in audio...
Just one example: the human ear only hears from 20 Hz to 20 kHz, so everything outside that range gets cut in MP3/AAC files... but if you use that as a source, process it, and need to play it at half speed and pitch, the result only contains information from 10 Hz to 10 kHz... everything above 10 kHz is lost, and this is very noticeable!
This is just one simple example; there are many things like this that will make a processed lossy format sound/look really bad.
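That half-speed example can be sketched in a couple of lines (the 20 kHz cutoff stands in for a typical lossy encode; the function name is made up for illustration):

```python
# Slowing playback by a factor scales every frequency by that same factor.

def top_frequency_after_slowdown(source_top_hz, speed_factor):
    return source_top_hz * speed_factor

lossy_top = top_frequency_after_slowdown(20_000, 0.5)     # MP3/AAC-style cutoff
lossless_top = top_frequency_after_slowdown(48_000, 0.5)  # 96 kHz source (Nyquist 48 kHz)

# 10 kHz vs 24 kHz: only the lossless source still fills the audible range
print(lossy_top, lossless_top)
```

The lossy source literally has nothing above 10 kHz to give you after the slowdown, while the 96 kHz lossless recording still covers the full audible band.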

Last edited by ACiD80 : 01 January 2018 at 07:18 PM.
  01 January 2018
Originally Posted by luisRiera: Don't get me wrong... I'm not talking about working at a mediocre quality... it's just that fine details are lost in television. For example, clients often give the designers a specific Pantone for the color of the brand we are working for... but that also becomes very subjective once it's on air. Do a quick search for the Coca-Cola logo on Google and you'll notice that not every logo is the same red.

Just because the final stage of processing the image/video/audio is bad doesn't mean you don't have to care about the quality you deliver to them.
A high-quality source processed badly will still look better than a bad-quality source processed badly... as I explained in a previous post.
It's still your job to make it as good as possible.
Not to mention the quality needed for archiving, in case you have to re-use things in later projects.

Just because the internet is full of images that look like garbage doesn't mean you don't have to care about it as a professional!!
The net is full of 256x256px GIFs with 16 indexed colors... so would you see no problem applying that to your pipeline as well?

Last edited by ACiD80 : 01 January 2018 at 06:41 PM.
  01 January 2018
Exactly. Maybe your customer is happy with an 8-bit JPEG sequence.
But if you are smart, your studio's 'internal master' will be as high quality as possible for the context of that job.
Down-rezzing from a large format looks awesome. But up-rezzing (from small to large) looks like shite.

Also, losing a generation or two as you try to cobble together a demo reel for your studio can be an issue as well.
A copy of a copy from a low-rez source shows degradation to the point where you likely won't want to show it to anyone you want to impress
(like a new client or studio).
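As a toy illustration of that copy-of-a-copy degradation (the 1.6 gain per pass is an arbitrary assumption; real JPEG recompression loses information in more ways than this), here's a sketch where each 'generation' applies a gain, re-quantises to 8 bits, then undoes the gain:

```python
# Toy generation-loss sketch: each pass applies a gain, clips, quantises to
# 8 bits, reverses the gain, and quantises again. Distinct levels only shrink.

def generation_pass(levels, gain=1.6):
    out = []
    for v in levels:
        v = min(255, round(v * gain))  # gain up, clip, 8-bit quantise
        v = round(v / gain)            # gain back down, quantise again
        out.append(v)
    return out

gen = list(range(256))  # a full 8-bit gradient
for _ in range(5):
    gen = generation_pass(gen)
print(len(set(gen)))  # far fewer distinct levels than the 256 we started with
```

Every round trip is a one-way function: once levels have merged, no later generation can pull them apart again, which is exactly why you dub a reel from the master, not from a copy.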

Last edited by circusboy : 01 January 2018 at 07:16 PM.
  02 February 2018
Originally Posted by ACiD80: Here are some quickly googled examples of what is different about 8-bit vs several higher bit depths.
I think several people will recognize these artifacts, especially after processing an 8-bit image in comp...
I hope this helps show why you should never use 8-bit again (except for sending a preview or for use on the web).

Keep in mind that these are all incredibly exaggerated examples. Even the sunset is using steps of 7 for the banding to make it look worse.
Matthew O'Neill
  02 February 2018
Originally Posted by circusboy: No, but you were basically describing your TV production standards as if they were a 'per country' thing.
Yes, there may be very little film production in your country, but there is no technical reason why you couldn't do it if you wanted to.
It's not forbidden by law or anything.
I think there absolutely is an element of different standards in different countries. Do you think for a moment that a South American TV soap opera would hold up, quality-wise, if broadcast on a UK, JP, or US national TV station? Some countries will be happy using effects recorded against a green screen with 4:2:2 or 4:2:0 chroma subsampling, but in places with higher quality expectations, you'll need to be recording at 4:4:4 to not get fired.

Yes, film has even higher standards, but I wouldn't rule out a Bollywood movie having more compression artifacts than a Hollywood movie.
Matthew O'Neill
  02 February 2018
Originally Posted by imashination: Keep in mind that these are all incredibly exaggerated examples. Even the sunset is using steps of 7 for the banding to make it look worse.

I wouldn't say incredibly exaggerated... You can get that banding relatively quickly when using JPEG or, in the case of video, something like H.264 at Blu-ray specs.
Sure, 8-bit is OK-ish for simple viewing on a standard sRGB (non-HDR) screen... but that's not the point here.
We're specifically talking about a professional workflow/pipeline where these images get adjusted and used as textures in a render, which applies all kinds of lighting/shading to them, and that render in turn will probably get processed again... if those textures are 8-bit images, things can get ugly quickly, and you're also feeding your render engine less accurate data.
Combine that with JPEG compression and things will just not be as good as they could have been, and can potentially turn into absolute garbage (depending on several factors).

These days there is absolutely no reason not to use 16bpc images in your workflow (storage has become cheap).
Even back in the day, the studios that knew what they were doing used 10-bit.
For example, if you are using footage that has been recorded in log, you'll get into trouble if it's only 8-bit.
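To make the 8-bit vs 16-bit difference concrete, a small sketch (`surviving_levels` is a made-up helper; the 0.25 gain stands in for any strong grade) counting how many distinct code values survive a gain adjustment at each bit depth:

```python
# Count distinct code values left after a gain change plus re-quantise at a
# given bit depth. Fewer surviving levels means visible banding in gradients.

def surviving_levels(bits, gain):
    levels = 1 << bits
    seen = set()
    for v in range(levels):
        seen.add(round(min(v * gain, levels - 1)))
    return len(seen)

print(surviving_levels(8, 0.25))   # 65 levels left: obvious banding
print(surviving_levels(16, 0.25))  # 16385 left: still smooth, even after an
                                   # 8-bit quantise at the very end
```

After a 2-stop pull-down, the 8-bit gradient has 65 steps left across the whole range, while the 16-bit version still has more distinct levels than 8-bit ever had to begin with.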

Last edited by ACiD80 : 02 February 2018 at 06:39 PM.
  02 February 2018
Yeah, really. With FX and comp you can hit issues like this all too easily if cheats are made.
If you try to work with these low-rez files in a hi-rez context, they will look like a grave mistake alongside everything else that is correct for hi-rez.
Like, say, you are trying to learn, I don't know, a new texture solution or something...

Which is, I think, the point of the thread. Don't assume everyone is rendering for Facebook. Use some 'this is a cheat' context and/or industry standards
when describing your input and output choices to others. Otherwise you might lose some credibility in what you are trying to talk about or the service you are trying to provide.

Thank you.

Last edited by circusboy : 02 February 2018 at 07:06 PM.