Andy Minuth

Posts posted by Andy Minuth

  1. Welcome Kristopher, the scenes look nice. Two comments: 

    • Coming back to the same shots was a bit too repetitive for me. I think there is some potential to tighten it up.
    • I suggest putting your name at the end. Usually people have to see something interesting before they pay real attention. So if someone likes something in the reel, their attention will go up, and hopefully they will remember your name at the end.
    • Like 4
  2. On 2/12/2017 at 1:11 PM, Nicolas Hanson said:

    Does Clipster retain all the image information and pay attention to color spaces etc?

    AFAIK there are color management tools in Clipster, but we are not using them. We just pass the images through. For a feature film, for example, we render 12-bit X'Y'Z' TIFF files for the DCDM and DPX/ProRes in Rec.709 from Baselight.

    Syncing with sound, subtitles and DCP encoding is then done in Clipster.

    • Like 2
  3. I am not quite sure what you are driving at. But with the 'Colour-Space' operator you can convert the image to 'RLab' which is related to CIE L*a*b*. The channels will be mapped like this:

    R -> red-green component

    G -> luminance component

    B -> blue-yellow component

    I sometimes use this space for special treatments. I recommend converting back to your working space after your RLab operations.
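To illustrate the channel mapping, here is a rough Python sketch. The exact RLab maths are Filmlight's; this only computes standard CIE L*a*b* (assuming linear sRGB/Rec.709 primaries and a D65 white) and packs the opponent channels in the order described above:

```python
# Hedged sketch of an RLab-like opponent encoding:
# R = red-green (a*-like), G = lightness (L*-like), B = blue-yellow (b*-like).
# Assumes linear sRGB/Rec.709 input and D65 white; not Baselight's exact maths.

def f(t):  # CIE L*a*b* nonlinearity
    d = 6 / 29
    return t ** (1 / 3) if t > d ** 3 else t / (3 * d * d) + 4 / 29

def rgb_to_rlab_like(r, g, b):
    # linear sRGB/Rec.709 -> CIE XYZ (D65)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    L = 116 * fy - 16           # lightness
    a = 500 * (fx - fy)         # red-green opponent axis
    b_ = 200 * (fy - fz)        # blue-yellow opponent axis
    return (a, L, b_)           # packed as R, G, B per the mapping above

# 18% grey: both opponent channels land near 0, lightness near L* = 50
a, L, b_ = rgb_to_rlab_like(0.18, 0.18, 0.18)
print(a, L, b_)
```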

    • Like 3
  4. The full name is 'Recommendation ITU-R BT.709 - Parameter values for the HDTV standards for production and international programme exchange'

    Recommendation: The ITU is responsible for defining worldwide standards. They are then published as recommendations. 

    ITU-R: International Telecommunication Union Radiocommunication Sector 

    BT: This recommendation is part of the 'Broadcasting service (television)' series.

    709: The number of this paper. 

     

    Rec.709, BT.709, etc. are different abbreviations for the same thing.

    You can download the paper for free here: http://www.itu.int/rec/R-REC-BT.709-6-201506-I/en

     

    Fun fact: as far as I know, a transfer curve for displays (e.g. for mastering or color grading) was never defined in ITU-R BT.709. Everyone was using Sony CRTs back then, which made them the de facto standard. The EOTF for displays was finally defined in ITU-R BT.1886.
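For reference, the BT.1886 reference EOTF can be sketched in a few lines from the published formula L = a * max(V + b, 0)^2.4, with a and b derived from the display's white and black luminance (the 100 / 0.1 cd/m2 defaults here are just example display limits):

```python
# Sketch of the ITU-R BT.1886 reference EOTF.
# lw/lb are the display's white and black luminance in cd/m2 (examples here).
def bt1886_eotf(v, lw=100.0, lb=0.1):
    g = 2.4
    n = lw ** (1 / g) - lb ** (1 / g)
    a = n ** g                        # scale
    b = lb ** (1 / g) / n             # black-level lift
    return a * max(v + b, 0.0) ** g   # luminance in cd/m2

print(round(bt1886_eotf(1.0), 1))  # -> 100.0 (white point by construction)
print(round(bt1886_eotf(0.0), 2))  # -> 0.1 (black level by construction)
```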

    • Like 4
  5. Hey there,

    as I am not a native English speaker, I wondered: is there a colloquial term among creative people that describes the technical 'gamma'?

    In German for example there are terms that literally translate to 'heaviness' ("Schwere") and 'airiness' ("Luftigkeit"). They roughly describe what we control with the gamma parameter in a video signal (brightness and contrast at the same time without blowing out the whites or blacks).

    I know that every client has their own vocabulary, and the same word can have different meanings. But can you give some examples of what non-technical people say to you when they ask for a gamma adjustment?

    Best, Andy

    • Like 1
  6. Welcome to the forum, Mark.

    On 1/11/2017 at 1:24 AM, Mark said:

    also, in mistika at times it was good to be able to make a selection and either effect or recover just the luma or chroma from a source or previous layer - again it maybe because im relying to much on the inside\outside grade to do everything - but is there a similar way to achieve that on baselight? 

    This you can do with the layer blend modes (e.g. Luminance, Color). You can also choose a different 'blend source', for example the original image or another layer. See this tutorial for an introduction.

    About your first question I am not quite sure. A classic edge-detection tool is unfortunately not available. You could probably build a matte like that with the 'MatteTool' (Erode/Dilate, In/Out Blur) and layer blend modes. But the standard sharpen tool in Baselight is extremely powerful once you learn about all its parameters.

    • Like 3
  7. You are right, Paul. But the formula/matrix-based transforms are also more "boring" and less popular among colourists. In general I suggest using whatever transform one likes to go from log to a display space. But for the inverse I advise choosing a clean one that is not based on film and does not include gamut de-compression.

    • Like 3
  8. By 'Linear' I assume you mean a video-ish space like Rec.709 / 1886.

    The problem is that there is no perfect conversion from a low dynamic range color space like Rec.709 to a high dynamic range space like Cineon-Log. Going the usual way from log to Rec.709, some information is lost, especially in the very bright, very dark and highly saturated areas. When you try to invert this lossy process, artefacts usually appear.

    D21_Legal_vs_Extended_small.png

    This is a LogC to Rec.1886 curve that I plotted years ago (for legal and extended range, which is unimportant here). Pay attention to the flat parts of the curve in the shadows and highlights. When you invert a curve like this, these parts become very steep, and the code values there get stretched a lot. This means that a tiny difference in the original results in a big difference in the output.
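To make the stretching concrete, here is a toy Python sketch. The shoulder curve is invented (not the real LogC transform); it only shows how a single 10-bit code step near the flat end maps to a far larger step after inversion:

```python
import math

# Toy shoulder curve standing in for a log -> video transform:
# it flattens towards white, like the plotted curve above.
def forward(x):                        # x in [0, 1], log-ish input
    return 1.0 - math.exp(-4.0 * x)

def inverse(y):                        # analytic inverse of the toy curve
    return -math.log(1.0 - y) / 4.0

# One 10-bit code-value step, applied near mid-grey vs. near white:
step = 1.0 / 1023
mid = inverse(0.5 + step) - inverse(0.5)
hi = inverse(0.97 + step) - inverse(0.97)
print(mid, hi)  # the step near white is stretched many times more
```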

    TruelightCAM_Inverse.png RRT011_Inverse.png

    Here are two luma waveform plots of different video (Rec.1886) to Cineon-Log transforms. You can see that the first one is very smooth compared to the other. The first is practically free of artefacts, while the second has some problems. This is just the luma part of the transform; additional problems occur for the colours (but this is more difficult to visualise).

    Filmlight optimised and smoothed the inverse transforms used by Truelight Colour Spaces within Baselight to produce fewer artefacts. @Daniele Siragusano talks about it in this video around 06:20 - 06:50.

     

    After the explanation, here are my suggestions for solving it:

    • Generally I suggest avoiding a workflow like this (Video -> Log -> Video). But I know that the colourist usually cannot influence the camera settings ;) 
    • Try a different transform that produces fewer artefacts. I don't know if there are choices in Resolve. In Baselight the cleanest one is the inverse transform of the 'Truelight Video 1' DRT.
    • Generally you can reduce the artefacts by avoiding the extreme ends of the transform. This means it will probably work better if you reduce the contrast of the image before you send it through the conversion. Once the picture is converted to log without artefacts, you can increase the contrast as much as you want. If you have highly saturated colours in the picture, you can additionally try reducing the saturation a bit before the conversion.
    • Like 5
  9. Yes, I was not precise enough about terminology in my last post:

    Luminance should not be used in this context, because it is an absolute quantity that describes the amount of light in physics. The light emitted by a display, for example, is described by luminance and measured in cd/m2.

    Here the terms relative luminance Y or luma Y' should be used. Relative luminance Y is a linear value, normalised to 1 or 100. Luma Y' is the same achromatic part of the image, but nonlinear: it uses gamma compression. Video systems usually use luma, but the formula also holds for linear components.

    Y = 0.2126 R + 0.7152 G + 0.0722 B

    Y' = 0.2126 R' + 0.7152 G' + 0.0722 B'

    Both formulas are valid. The ' indicates nonlinear values due to the gamma compression. These coefficients are intended for Rec.709 HD systems.

    For standard definition Rec.601 other coefficients are used:

    Y′ = 0.299 R′ + 0.587 G′ + 0.114 B′

    These coefficients are also used for JPEG images, which might explain their use in Photoshop. A quote about the term luminosity from Wikipedia:

    Quote

    In Adobe Photoshop's imaging operations, luminosity is the term used incorrectly to refer to the luma component of a color image signal

     

    For further reading I recommend Charles Poynton and Wikipedia:

    http://www.poynton.com/PDFs/YUV_and_luminance_harmful.pdf

    http://www.poynton.com/notes/colour_and_gamma/GammaFAQ.html

    https://en.wikipedia.org/wiki/Luma_(video)
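To make the two weightings concrete, here is a minimal Python sketch of the formulas above (illustrative only):

```python
# Rec.709 (HD) vs. Rec.601 (SD) luma weights, applied to the same
# nonlinear R'G'B' triple -- straight from the formulas quoted above.
def luma_709(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def luma_601(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

# Pure green shows the biggest disagreement between the two standards:
print(luma_709(0.0, 1.0, 0.0))  # -> 0.7152
print(luma_601(0.0, 1.0, 0.0))  # -> 0.587
```

This is why feeding HD material through SD coefficients (or vice versa) shifts the apparent brightness of saturated colours.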

    • Like 3
  10. Contrast measurements are about the dynamic range of the projector. 

    Sequential contrast is determined by comparing a completely black frame with the maximum brightness in another frame. It describes the maximum contrast the device can achieve.

    Intra-frame contrast, in comparison, describes the maximum contrast within one frame, which is usually much lower. It is usually measured with a black/white checkerboard pattern or with a small white patch in a mainly black frame. It is important to note that the lower intra-frame performance of DCI-compliant projectors is NOT due to an adaptive iris or dynamic light dimming (which mess up images in consumer displays). The reason lies mainly in the imperfection of the projection lens, which lifts the blacks because of flare. We know the effect from camera lenses: if there is a very bright object in the frame, the black level of even technically decent lenses like Master Primes is lifted a lot.

    To throw in some numbers, the DCI minimum specs and tolerances for example are:

    Sequential contrast:
    • nominal 2000:1
    • review rooms (e.g. grading) 1500:1
    • theatres 1200:1

    Intra-frame contrast:
    • nominal 150:1
    • review rooms and theatres 100:1
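To show how lens flare alone can collapse intra-frame contrast, here is a small illustrative calculation. The 48 cd/m2 white and 2000:1 sequential contrast come from the DCI numbers above; the amount of scattered flare light is my assumption, picked only for illustration:

```python
# Illustrative only: lens flare lifting the blacks of a checkerboard.
white = 48.0                   # DCI reference white, cd/m2
true_black = white / 2000      # black implied by 2000:1 sequential contrast
flare = 0.30                   # ASSUMED scattered light from the lens, cd/m2

intra_frame = white / (true_black + flare)
print(round(intra_frame))      # -> 148, near the 150:1 nominal spec
```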

     

    • Like 5
  11. My guess is that the creators of the game did not pay enough attention to human color perception.

    I recommend reading a bit about MacAdam ellipses. An average human cannot distinguish the colour at the centre of an ellipse from the colours inside it. In this picture the ellipses are magnified 10x in size, and I have overlaid the sRGB gamut, which is the one a computer display usually works in. It is very obvious that colour differences in the green area are much more difficult to notice (the ellipses are bigger).

    Explaining the reasons for these perceptual differences is beyond my knowledge.

    CIExy1931_MacAdam_sRGB_overlay.png

    • Like 2
  12. I am also interested in finding new podcast content.

    The Colorist Podcast that Nicolas mentioned is great.

    I listened to The Digital Cinema Cafe podcast. They touched on color grading from time to time, but they have stopped producing new episodes.

    Also 'The RC' by fxguide.com was interesting, though more focussed on digital cinematography. It got cancelled, too.

    What remains is the fxpodcast by fxguide.com. It is focussed on VFX, but sometimes they touch on color as well. There is one episode with Peter Doyle, for example.

    • Like 4
  13. Thanks for sharing your first experience, Asa. Yeah, I think a bit more pop makes sense, and in my opinion one of the important things about HDR is getting more definition in the specular highlights and shadows. I guess the tricky part is keeping bright objects from becoming too distracting. And I guess that Ex Machina, with its dark and also landscape sequences, will look even better.

    Are you working from your original Baselight scene, so that you can adjust the old layers? Or do you have to work from the rendered-out master, which I guess would be quite difficult...?

    • Like 4
  14. I cannot share first-hand experience, but here are at least some observations I made at IBC this year:

    There was a panel where the technical VPs of the big studios discussed HDR and WCG (wide colour gamut). They projected some A/B examples of standard P3 digital cinema and Dolby Vision.

    They said that until now they have graded the SDR main delivery first and then asked the filmmakers and colorists to do the HDR/WCG version afterwards as a trim pass. They agreed that none of the filmmakers disliked the 'bigger canvas'. But they also agreed that the filmmakers were probably less courageous with the new technology, because they had already found a pleasing SDR look for their film and did not want to alter it too much. The goal should be to do the HDR/WCG version first and trim passes for the SDR deliveries.

    I enjoyed the examples that they showed a lot, but I also noticed two things:

    - 108 nits in the cinema is nice, but not as impressive as some people might expect. Basically it is just around one additional stop in the highlights. Far more interesting are the increased details in the shadows.

    - technical issues with the footage, like noise, get boosted a lot. There is definitely less room for us to deal with footage that was exposed suboptimally ;). DPs will have to work more precisely, until the next generation of camera sensors might improve the situation again.
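The "around one additional stop" figure checks out with quick arithmetic, taking the 48 cd/m2 SDR cinema reference white and the 108 nits mentioned above:

```python
import math

# Extra headroom of a 108-nit cinema white over the 48 cd/m2 SDR reference,
# expressed in photographic stops (factors of two).
extra_stops = math.log2(108 / 48)
print(round(extra_stops, 2))  # -> 1.17
```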

     

    Walking around the exhibition floor, HDR was THE thing this year. But looking at all these displays, one could easily get the impression that HDR means an extremely saturated, very bright picture with ugly clipped highlights. Nice-looking examples were rare.

    About displays: In my opinion the Sony X300 seems to be the best choice for HDR mastering at the moment. 

     

    Hope to hear about more hands-on experience in this thread...

    So long, Andy

    • Like 5
  15. 2 minutes ago, Tom Evans said:

    Is the Trulight standalone tool ment to be used to manage color on other color correction platforms, or as a tool to ensure a consistent color pipeline across departments, studios and systems?

    The standalone tool is intended for a classic DI calibration. This means its main purpose is to generate and tweak a 3D LUT from a set of colour patches that went through an analogue film lab. This print-emulation LUT was essential for getting a decent film-out. With film labs closing around the globe, this workflow is becoming less important.

    But Truelight can also be used to tweak or convert existing LUTs. Besides the main application, it also includes a set of command-line tools (tl utils) intended for advanced users.

    To answer your question: I think TCS (inside Baselight) is the more modern approach and probably the better way to output different LUTs for other systems. But it is nice to have the standalone Truelight as well.

    • Like 3
  16. A few days ago I listened to this episode of fxguide podcast:

    fxpodcast #312: Take Care of Your Eyes

    I think this topic is very important especially for us colorists. Our eyes are essential for our work, we should keep them in good shape.

    On the linked website they published this advice:

    Quote

     

    20/20/20 Rule:

    For every 20 Minutes of focused work,
    Look at something 20 feet away (around 6 meters),
    Blink 20 times

    Tips for the health of your eyes:

    Drink Plenty of Water
    Good Diet (same advice as you'd get from a cardiologist)
    Wear Sunglasses
    Don't Smoke
    30 Minutes Physical Exercise, Minimum 3 times a week

     

    Additionally I noted these things during the podcast:

    - avoid too much airflow (AC, ventilators, etc.), because it might dry out your eyes too fast

    - working on a projector is probably less fatiguing for the eyes than working on a monitor that is much closer. The accommodation muscles in the eye are in a more relaxed position when you focus on something between 6 m and infinity. Personally I have had a similar experience: I always felt that working long shifts on a display that is quite close is more fatiguing than working on a big screen that is meters away.

    What is your opinion on that, and how do you approach the topic of eye health?

    • Like 5
  17. I think it depends on the use case.

    If you just want to calibrate your display devices, something like CalMan or LightSpace is probably the best choice. 

    If you want to build a consistent color pipeline across many departments, I think you will need several of the mentioned tools. OCIO is great for setting up the VFX part, because it is available in Nuke and After Effects. But for the main color pipeline in the grading system, I use Truelight Colour Spaces (the colour framework within Baselight) and Truelight (the standalone color management tool).

    What I like about these tools:

    - I am a Baselight user, so it is quite handy to use the built-in colour engineering

    - Besides technical transforms, I have a lot of creative choices (e.g. DRTs), and I can easily add a creative grade to a transform

    - The scripting language is not too complicated. I am not a software developer and not a colour scientist either, but I am able to create new custom color spaces, and TCS takes care of the rest.

    - It can import and export almost any kind of LUT. For example, it is quite easy to export a custom lookup table for the Nuke artists, so that they can see the intended look of a show in sRGB or Rec.1886 while you are working in P3.

    In Baselight 5 they will add some interesting new features in this area, for example a mastering colour space that acts as a gamut limiter for wide-gamut deliveries, etc.

    To be honest, I have never used the color management of Resolve, Autodesk, Scratch, etc. intensively. That is why I can't judge their performance in comparison to the Filmlight tools. I just wanted to explain my point of view.

     

    • Like 3
  18. On 9/17/2016 at 0:10 AM, Tom Evans said:

    Interesting topic! Is it possible to do the normalization process inside Resolve without manually working the curves or applying a LUT? 

    My Resolve knowledge is not up to date. I can't help you with this one, sorry.

    • Like 1
  19. Yes, for all cameras that are listed with a camera-icon color space (mostly log color spaces). The camera manufacturers use log curves to solve the problem of storing HDR images from the sensor in an efficient way.

    Most cameras can also shoot in Rec.709, which means they produce a video signal that can go on air immediately (and is SDR). In this case a DRT is not needed, because it was already applied inside the camera, but you are losing latitude for grading.
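The efficiency argument can be sketched with a toy log encode (an invented curve, not any manufacturer's actual formula): a log encoding gives every scene exposure stop roughly the same share of code values, instead of spending most codes on the highlights like a linear encoding does.

```python
import math

# Toy log encode: map linear scene light in [2**-stops, 1] to [0, 1]
# logarithmically, so that equal exposure *stops* get equal code ranges.
# Invented for illustration; not ARRI LogC, Sony S-Log, etc.
def toy_log_encode(lin, stops=16):
    return (math.log2(lin) + stops) / stops

# Code-value budget for one of the darkest stops vs. the brightest stop:
dark = toy_log_encode(2 ** -14) - toy_log_encode(2 ** -15)
bright = toy_log_encode(1.0) - toy_log_encode(0.5)
print(dark == bright)  # -> True: every stop gets the same share of codes
```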

     

    • Like 3