Add color space information to documentation #37

Is HAP using an encoded gamma, or is it sRGB or something else? It's good to know how to linearize HAP for compositing.

Comments
Personally I think this is sufficiently covered, in that we link to the external definitions of the compressed texture formats and purposefully don't attempt to repeat them. Perhaps we could make it explicit in the documentation that Hap uses the linear (not sRGB) variants for all the formats.
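For reference, the S3TC/DXT formats exist in both linear and sRGB variants at the graphics-API level, and "using the linear variant" mostly matters at texture upload time. A minimal OpenGL sketch of that distinction is below; the function, texture name and buffer parameters are placeholders for illustration, not anything from an actual Hap player.

```c
/* Headers depend on platform / extension loader (e.g. GLEW or glad). */
#include <GL/glew.h>

/* Sketch: upload one decoded Hap (DXT5 / "Hap Alpha") frame using the
 * linear S3TC variant. Using GL_COMPRESSED_SRGB_ALPHA_S3TC_DXT5_EXT
 * instead would make the GPU apply an sRGB-to-linear conversion when
 * sampling, which is not what the comment above describes.
 * tex, width, height, data and dataSize are placeholders. */
void upload_hap_dxt5_frame(GLuint tex, GLsizei width, GLsizei height,
                           const void *data, GLsizei dataSize)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                           GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, /* linear variant */
                           width, height, 0, dataSize, data);
}
```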
Aren't the DXT formats color space agnostic? I believe tools like FFmpeg encode the same color transfer characteristics as the source into HAP by default. So if you transcode a YUV video using the BT.709/BT.2020 color space into HAP, you end up with non-linear values, and if you transcode a TIFF image sequence into HAP, you probably end up with sRGB values. The problem is on the decoder/renderer side, where there is no meta information about what was encoded. In most cases the difference between gamma 2.2, sRGB and BT.709/BT.2020 is not very noticeable, but in some situations, like masking, it is. If HAP is, like the underlying formats, color space agnostic, then some metadata in the container storing this information would be handy.
OK, this definitely isn't sufficiently covered if it isn't clear... To put it another way, the intention is that frames will be drawn with no further color space conversion (by you), and encoders should convert to linear values as part of encoding. Perhaps that should be added to the spec to make it clear (or do suggest an improvement if I'm still not being clear). In reality, many encoders simply pass through what they get, and some do write metadata to the container describing a non-linear color space matching the source. Container formats are beyond the remit of the Hap spec, but I would probably advise that for rendering you go with whatever the container says, even if that is against the intention of the Hap spec.
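To illustrate what that stated intention would mean in practice, here is a minimal encode-side sketch, assuming 8-bit sRGB input. It is not taken from any actual Hap encoder, and the helper names are made up for illustration.

```c
#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch only: convert 8-bit sRGB-encoded values to 8-bit linear values
 * before handing the buffer to a DXT compressor. Note the banding this
 * causes in dark tones at 8 bits, which is why (as the next comment
 * points out) linear encoding is uncommon for 8-bit media. */
static uint8_t srgb_to_linear_u8(uint8_t v)
{
    double c = v / 255.0;
    double lin = (c <= 0.04045) ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
    return (uint8_t)(lin * 255.0 + 0.5);
}

void linearize_rgba(uint8_t *rgba, size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count; ++i) {
        rgba[4 * i + 0] = srgb_to_linear_u8(rgba[4 * i + 0]);
        rgba[4 * i + 1] = srgb_to_linear_u8(rgba[4 * i + 1]);
        rgba[4 * i + 2] = srgb_to_linear_u8(rgba[4 * i + 2]);
        /* alpha is left untouched */
    }
}
```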
Linear encoding is very uncommon for visual media, especially if you only have 8 bits per color component or less, so I don't think it's linear, but I might be wrong. Most displays are non-linear (usually sRGB) and most graphics APIs just pass the pixels through without any color space conversion. If you load an image into a texture and render that texture, it's in sRGB all the way from file to screen without any additional conversion/processing. If you load a video frame into a texture and render it, you will introduce a small error due to the difference between sRGB's transfer curve and the video transfer curve (most likely Rec. 709 these days), but the result will still look acceptable. Rendering a linearly encoded picture on an sRGB screen without any processing will just look wrong and far too bright.

I would assume that if you encode HAP using FFmpeg you end up with sRGB if the input comes from an image sequence, or Rec. 601/709/2020 (they all use the same transfer curve) if the input comes from a video format. If the HAP comes from Adobe Media Encoder then it's almost certainly Rec. 709. This is a bit ambiguous. My point here is that on the renderer side it's impossible to know which non-linear transfer curve is used. I don't think you can get that info from the container, since it should be stored in each stream, and the stream of HAP packets doesn't seem to contain that kind of info.

Most of the issues I have seen are with HAP Alpha: exporting it from Adobe Premiere, then importing it again and blending it with some other content. It seems that the exporter writes non-linear alpha (Rec. 709) and the importer reads the alpha linearly (which would be correct for sRGB). If the HAP specification doesn't specify which non-linear curve should be used, and there is no metadata, the renderer must simply guess how to interpret the video or let the user select it. Have a look here for a comparison between the two transfer curves: https://www.image-engineering.de/library/technotes/714-color-spaces-rec-709-vs-srgb

Normally it's not a big deal, but when it comes to compositing, all textures should be linearized into a common linear space and the resulting buffer should be de-linearized using the transfer curve of the display. This is pretty simple to do using shaders when the transforms are known.
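To make the difference between the two curves concrete, here is a sketch of the linearization math in C (the same expressions transfer directly to a shader). Which curve applies to which buffer is exactly the guess described above, so the assignments in `composite_over` are assumptions for illustration only, and the names are made up.

```c
#include <math.h>

/* sRGB EOTF (IEC 61966-2-1): decode an sRGB-encoded component to linear. */
static double srgb_to_linear(double c)
{
    return (c <= 0.04045) ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
}

/* Inverse of the Rec. 709 OETF (ITU-R BT.709): decode a 709-encoded
 * component to linear. Rec. 601 and Rec. 2020 share the same curve. */
static double rec709_to_linear(double c)
{
    return (c < 0.081) ? c / 4.5 : pow((c + 0.099) / 1.099, 1.0 / 0.45);
}

/* Re-encode a linear value for an sRGB display. */
static double linear_to_srgb(double c)
{
    return (c <= 0.0031308) ? c * 12.92 : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
}

/* Compositing sketch: linearize each source with whichever curve it was
 * encoded with (assumed here), blend in linear light, then re-encode
 * with the display's transfer curve. */
double composite_over(double src, double src_alpha, double dst)
{
    double s = rec709_to_linear(src); /* assume the Hap frame is 709-encoded  */
    double d = srgb_to_linear(dst);   /* assume the backbuffer is sRGB-encoded */
    double out = s * src_alpha + d * (1.0 - src_alpha);
    return linear_to_srgb(out);
}
```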
Hmm, I thought colorspace/transfer function/matrix values are provided optionally by the container (depending on whether or not your encoder wrote them), and not by the codec? For example, here's a QuickTime container with a Hap1 track that is tagged as 709 via the NCLC tags in the QuickTime container (the images in the file weren't converted to 709 prior to encoding; they're just tagged as such as a proof of concept). This was done by modifying the encoder I used to create this file to tag each frame's sample buffer with the appropriate 709 attributes.
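The encoder modification mentioned above isn't shown in this thread. Purely as an illustration, and assuming a QuickTime/AVFoundation-based pipeline, tagging a frame's pixel buffer with Rec. 709 attributes through CoreVideo (which is typically how an AVFoundation-based writer ends up emitting the corresponding nclc 'colr' metadata) might look roughly like this:

```c
#include <CoreVideo/CoreVideo.h>

/* Sketch only: attach Rec. 709 colour metadata to a CVPixelBuffer before
 * it is handed to the compression session / asset writer. These are
 * container-level tags; the Hap frame data itself is unchanged. */
void tag_frame_as_rec709(CVPixelBufferRef pixelBuffer)
{
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferColorPrimariesKey,
                          kCVImageBufferColorPrimaries_ITU_R_709_2,
                          kCVAttachmentMode_ShouldPropagate);
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferTransferFunctionKey,
                          kCVImageBufferTransferFunction_ITU_R_709_2,
                          kCVAttachmentMode_ShouldPropagate);
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey,
                          kCVImageBufferYCbCrMatrix_ITU_R_709_2,
                          kCVAttachmentMode_ShouldPropagate);
}
```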