First post here.
I’m working on the ingest pipeline for our studio, and one of the issues we currently have is that we often receive images without knowing their colorspace.
If it’s an EXR we can usually assume linear, but how do we deal with other image formats?
JPGs and PNGs could be sRGB, especially if there’s an ICC profile embedded.
But what about footage coming from Alexas, REDs, etc.?
Unfortunately, getting this info delivered together with the footage isn’t always straightforward, for a ton of reasons.
So I was wondering if anybody has any technical pointers on how one could auto-detect the colorspace of a given image, based on its extension, bit depth, and maybe how the pixel values are distributed?
Maybe some deep learning magic, after training on a large enough dataset?
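To make it concrete, here’s a rough sketch of the kind of rule-based heuristic I have in mind. The function name, return labels, and thresholds are all made up, just to illustrate the idea of combining extension, bit depth, and value distribution:

```python
def guess_colorspace(ext, bit_depth, values=None):
    """Hypothetical heuristic: guess a colorspace from the file extension,
    bit depth, and (optionally) pixel values normalized to [0, 1].
    Thresholds here are illustrative, not production-tested."""
    ext = ext.lower().lstrip(".")
    if ext == "exr":
        # Floating-point EXRs are scene-linear by convention
        return "linear"
    if ext in ("jpg", "jpeg", "png") and bit_depth <= 8:
        # 8-bit display formats are almost always sRGB
        # (an embedded ICC profile would confirm this)
        return "sRGB"
    if values:
        mean = sum(values) / len(values)
        peak = max(values)
        # Log encodings (LogC, REDLogFilm, ...) compress the range:
        # no clipped highlights and a mid-heavy histogram
        if peak < 0.95 and 0.2 < mean < 0.6:
            return "log (check camera metadata)"
    return "unknown"
```

The histogram rule is obviously the shakiest part, and exactly where I’d hope a learned classifier could do better than hand-tuned thresholds.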
Does anybody have any hints/tips/whatever to offer?