Anton Meleshkevich

Everything posted by Anton Meleshkevich

  1. Also, I was wrong here. Most likely (but not always, I guess), white balance in RAW is a multiplication of the code values that come from the sensor, ideally before debayering. So it's a gain operation in a linear color space with the native sensor gamut (which can differ between individual units of the same camera model). At least this is correct for the Alexa V3, I mean the part about WB being performed in the camera vendor's native gamut.
  2. For gray scale only, so strictly it's not identical, but very close. To be identical, after the linearization you also have to go from the camera gamut to XYZ, then to CAT02 LMS using a 3x3 matrix, then apply the gain (multiply) operation, then go from CAT02 back to XYZ, and from XYZ back to the camera gamut.
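The chain above (camera gamut to XYZ, XYZ to CAT02 LMS, gain, then back) can be sketched in a few lines. This is a minimal illustration of von Kries adaptation in CAT02, not any camera vendor's actual pipeline; the camera-gamut-to-XYZ matrix is camera-specific and omitted here, so the function works directly on XYZ values.

```python
import numpy as np

# CAT02 matrix from CIECAM02: XYZ -> LMS
CAT02 = np.array([
    [ 0.7328,  0.4296, -0.1624],
    [-0.7036,  1.6975,  0.0061],
    [ 0.0030,  0.0136,  0.9834],
])

def chromatic_adapt(xyz, src_white_xyz, dst_white_xyz):
    """von Kries white balance: per-channel gains applied in CAT02 LMS,
    with the gains taken from the ratio of the two white points."""
    gains = (CAT02 @ dst_white_xyz) / (CAT02 @ src_white_xyz)
    return np.linalg.inv(CAT02) @ (gains * (CAT02 @ xyz))

# Sanity check: the source white must land exactly on the target white.
d65 = np.array([0.95047, 1.0, 1.08883])
d50 = np.array([0.96422, 1.0, 0.82521])
adapted = chromatic_adapt(d65, d65, d50)  # approximately equals d50
```

The gain step in the middle is exactly the "multiply" the post describes; everything else is just getting into and out of the LMS space where that multiply behaves like chromatic adaptation.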
  3. RCM has a lot of bugs. I think the best approach is to stay in DaVinci YRGB and set the timeline color space according to your source footage. So for Alexa source files you set the timeline color space to Alexa LogC. That way the HDR palette works as it should, and Global now behaves like WB in RAW (gain in linear LMS) or like the chromatic adaptation plugin. For mixed cameras I'd use the Color Space Transform effect. I use an ACEScct timeline color space, and DCTLs as IDT and ODT (they work faster than the ACES Transform effect). The Global wheel behaves the same. The main reason I use ACES is its amazing gamut compressor (currently available as a DCTL). I hope it will be included in Resolve out of the box with the release of ACES 1.3. Your file should be Rec.709(-ish) with a linear segment near black, or some similar tone mapping. Not a pure gamma 2.4, which is a display gamma only. So you probably should use Rec.709 scene, not gamma 2.4. I guess gamma 2.4 was the default until v17 for creating DCPs, where, I believe, the transformation goes from the timeline color space (to P3 gamma 2.6). Probably this is why the default timeline color space is a display gamma, not the gamma of the video file.
  4. The horizontal width of the screen should cover about 30-40 degrees of your field of vision. https://www.rtings.com/tv/reviews/by-size/size-to-distance-relationship
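The 30-40 degree rule translates into a viewing distance with basic trigonometry. A quick sketch (the 1.21 m width of a 55" 16:9 screen is my own example, not a figure from the linked page):

```python
import math

def viewing_distance(screen_width_m, fov_degrees):
    """Distance at which a screen of the given width spans
    fov_degrees of the horizontal field of vision:
    distance = width / (2 * tan(fov / 2))."""
    return screen_width_m / (2 * math.tan(math.radians(fov_degrees) / 2))

# A 55" 16:9 TV is about 1.21 m wide:
# 30 degrees -> sit about 2.26 m away, 40 degrees -> about 1.66 m away.
```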
  5. The chromatic adaptation plugin is an RGB gain (multiply) operation in linear gamma, in one of the LMS color spaces you can choose from. It's the same tool as temp and tint in RAW, except that the LMS space used in RAW is probably different and specific to the particular sensor.
  6. Not sure if I get you right, but color space transformations usually create values below 0 and above 1. It's essential to use some gamut mapping before any color grading, especially before operations like saturation. In your particular example, converting from a wider gamut to a smaller gamut and then baking it into a LUT can be done; it will just clamp anything below 0 and above 1, and also bake in artifacts depending on your primaries.
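To illustrate what baking a wide-to-small gamut conversion into a LUT does: out-of-range values are simply clipped, which shifts hue and flattens gradients near the gamut boundary. The 3x3 matrix below is purely illustrative (its rows sum to 1 so white stays white), not a real pair of gamuts.

```python
import numpy as np

# Illustrative wide-to-narrow gamut matrix (NOT real primaries).
M = np.array([
    [ 1.4, -0.3, -0.1],
    [-0.1,  1.2, -0.1],
    [ 0.0, -0.2,  1.2],
])

saturated_green = np.array([0.0, 1.0, 0.0])
converted = M @ saturated_green        # [-0.3, 1.2, -0.2]: out of [0, 1]
baked = np.clip(converted, 0.0, 1.0)   # what a baked LUT effectively stores
```

A gamut compressor applied before the bake would pull those values inside the cube smoothly instead of chopping them off.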
  7. Nvidia settings have nothing to do with Resolve's Deliver settings. Just leave it on Auto in the Deliver page; it will be set correctly most of the time depending on the format you choose. As for the Nvidia settings, make sure levels are set identically on your monitor and in the Nvidia control panel: either full-full or limited-limited. Usually, for PC consumer monitors, the correct Nvidia setting is full, because these monitors are usually full levels and it can't be changed.
  8. Search for Paul Dore DCTL. He made a Film Density DCTL OFX. It makes colorful pixels darker. It's based on the HSV color model, and it also has RGB weights and qualifying sliders. If you don't want to use anything but Resolve's built-in tools, you can add two nodes. In the first node, desaturate your image from the default 50 to, say, 25. Then right-click on the second node, change its color space to HSV and turn off the R and B channels. Also set Lum Mix to 0, just in case you accidentally touch a trackball instead of a wheel. Then increase the Gain wheel to bring back saturation. This will affect the saturation channel of the HSV color model. Alternatively, instead of disabling the R and B channels, you can go to the RGB Mixer tab, turn off 'preserve luminance', and increase the green slider in the green channel. Then you can put it into a compound node and use its opacity to blend it with the image. UPD: I've just re-read your post and found I got you absolutely wrong, I'm sorry. I thought you were talking about subtractive colors. If you add some example pictures, it could help to answer your question, I think.
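The "colorful pixels get darker" idea can be sketched as a single HSV operation: scale V down in proportion to S. This is my own rough approximation of the behavior described, not Paul Dore's actual DCTL code.

```python
import colorsys

def density(r, g, b, amount=0.5):
    """Darken a pixel in proportion to its HSV saturation:
    neutral pixels are untouched, saturated ones get darker."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    v *= 1.0 - amount * s
    return colorsys.hsv_to_rgb(h, s, v)

# density(0.5, 0.5, 0.5) -> unchanged gray (s = 0, so v is untouched)
# density(1.0, 0.0, 0.0) -> (0.5, 0.0, 0.0): saturated red darkened
```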
  9. The first is 16-bit DPX + tons of contrast in Resolve. The second is a ProRes 4444 export with an unmodified ProRes 4444 preset + the same amount of contrast in Resolve. Of course 12-bit is less than 16-bit, but this is definitely 8-bit banding. I compared it to 8-, 10- and 12-bit exports from Resolve. I attached a 16-bit DPX that should be imported into Premiere Pro 2020 (or Media Encoder 2020) on Windows, then exported with the ProRes 4444 preset. all_tests00000001.dpx This magenta tint and the 8-bit encoding can both be fixed by enabling 'Render at maximum depth'.
  10. Actually, even the DPX presets are 8-bit. Yes, it shows 12 or 10 or 16 bit in Resolve, but in fact there is only 8-bit information. I tested it for about 2 days with a generated 16-bit linear gray scale ramp. It's easy to see if you add a lot of contrast in Resolve: a magenta offset tint and 8-bit banding. Sure! I'll be able to send you a file tomorrow.
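The ramp test described above is easy to reproduce. A sketch of the idea: generate a 16-bit linear gray ramp, run it through the encoder under test, and count the distinct code values that survive; an 8-bit pipeline collapses 65,536 input levels to at most 256. (Here the 8-bit round trip is simulated with a bit shift, since the actual encode depends on your tools.)

```python
import numpy as np

def gray_ramp_16bit(height=64):
    """One 16-bit code value per column: a horizontal gray ramp."""
    row = np.arange(65536, dtype=np.uint16)
    return np.tile(row, (height, 1))

def distinct_levels(img16):
    """An honest 16-bit path keeps 65536 levels; an 8-bit one keeps 256."""
    return np.unique(img16).size

ramp = gray_ramp_16bit()
# Simulated 8-bit round trip: drop the low byte, scale back up.
eight_bit_roundtrip = ((ramp >> 8) << 8).astype(np.uint16)
```

Adding heavy contrast in Resolve just makes those 256 surviving steps wide enough to see by eye.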
  11. ACEScc is the only log space that lets you adjust white balance and exposure similarly to RAW across the whole dynamic range using the offset. Any other log curve (including ACEScct) has a toe, which makes it impossible to do correct exposure and WB adjustments with the offset: shadows will be affected more than necessary. For me, the main advantage of ACES is ACEScc. But not the Resolve version of ACES, which is full of bugs; I'm talking about the DCTLs. Biggest thanks to @Paul Dore for the ACES 1.2 plugin.
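Why the offset works in ACEScc: the curve is a pure logarithm above its low-end threshold, so adding a constant in log corresponds exactly to a multiplicative (exposure/WB) gain in linear, independent of the pixel value. A curve with a toe, like ACEScct, breaks this equivalence in the shadows. A sketch using the pure-log segment of the ACEScc formula:

```python
import math

def lin_to_acescc(x):
    # Pure-log segment of ACEScc (valid for x >= 2**-15).
    return (math.log2(x) + 9.72) / 17.52

def acescc_to_lin(c):
    return 2.0 ** (c * 17.52 - 9.72)

def offset_gain(x, offset):
    """Apply an ACEScc offset to linear value x and return
    the resulting linear gain factor."""
    return acescc_to_lin(lin_to_acescc(x) + offset) / x

# The gain is the same for every pixel value: 2 ** (17.52 * offset).
```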
  12. In Adobe 2020 products on Windows, the default ProRes presets are 8 BIT!!! And have an offset to magenta! This can be fixed by enabling 'Render at maximum depth' or something like that, I don't remember the exact name. But what can't be fixed are the wrong colors with ProRes from Alexa. It looks like something with Rec.601 vs Rec.709. But you have to add retiming or scaling or some text to force Premiere to re-encode the ProRes to notice this; otherwise it will just copy the frames. And you should check it somewhere that is not Adobe, because this happens at the reading of the file, not at the export. This is true for me as well as for all the editors using Premiere 2020 I've worked with when we chose a pre-conformed EDL pipeline. And it's also true for the clean-up artists using After Effects 2020 I've worked with.
  13. If you're going to use the monitor with something like a DeckLink, check whether the monitor you're going to buy supports 24 fps HDMI input. Not all monitors do.
  14. I made a video showing different ways of adjusting white balance and explaining why some of them never work. I'm not sure if it's ok to post my videos here, and I'm definitely not going to do this every time I make a new video. But I'd really like to share this one with the best color grading community. Nothing new for professional colorists, of course, but I think beginner colorists will probably find it useful. In short: an explanation of what is going on under the hood of Camera Raw and the chromatic adaptation plugin; what's wrong with the eyedropper in Resolve and when it should be used; the limitations of the offset control as a white balance and exposure tool; and making primary corrections in a scene-linear color space.
  15. Netflix recommends using ACEScct instead of ACEScc. I like the idea of the true logarithmic transfer curve of ACEScc, but unfortunately it has noticeable artifacts in the shadows. It's less noticeable with Alexa, but I often get bright pixels in the shadows with Red cameras, as Margus mentioned. So I stick with ACEScct.
  16. Totally agree. Tons of bugs in the color tools aren't fixed from one version to another. Node color spaces work incorrectly in ACES: the timeline color space is Rec.709 gamma 2.4 instead of AP1 ACEScc(t). The node AP1 color space changes the white point, which makes it useless. Gamut mapping works incorrectly in ACES. Canon Cinema Gamut has the wrong white point in the CST plugin. The WB eyedropper and color checker matching are unusable in ACES. I reported most of these bugs with no luck. I hope someone from Blackmagic reads this post and finally adds these bugs to a schedule for fixing. They finally added the ability to type in numbers in the color mixer and to generate 65x65x65 LUTs, but most of the improvements are for the new Cut page. When Scratch gets groups for grading, a more intuitive edit page, and more freedom with color spaces, I'll probably switch to it. I tried Scratch some time ago: it reacts IMMEDIATELY when I adjust color wheels while playing footage!
  17. A look comes as a global correction for the whole program or a scene, at the timeline or group level accordingly if you're in Resolve. Skin adjustments (hue-vs-hue, for example) go at the per-clip level, after the primary per-clip basic adjustments (color balance, exposure, contrast). If you need a greenish look, usually the skin tone should be greenish too. But if you decide to keep the skin tone natural as a creative decision, you should do it at the look's logical level; in Resolve that would be somewhere in the group or timeline nodes. Keep in mind that the more crappily shot the footage, the less stylized a look you can go with, if you don't want viewers to notice the color grading instead of the actual movie. But the director is the boss, and usually he knows better how the final movie should look to tell his story, even if you think it looks terrible and you could make it look exactly like those beautiful stills on the Company3 Instagram. The main approach is to do as much as possible at the global corrections level. In the perfect scenario, you create one master look based on gray-card white-balanced test footage shot on the chosen camera and lens by the DP before the actual shooting. Then you bake it into a LUT. At the production stage, the DP creatively adjusts camera WB and exposure for each scene, looking through the look LUT on a field monitor. Then everything is 100% perfect and there's no need for any color grading. Of course, in reality this is impossible. But from that point of view it's easier to understand what the main tools are at each logical level of corrections. So at the per-clip level you should do as much as possible using color balance and exposure/contrast, looking through the global look. And after that, if there is still enough time, you can go deep into problem solving with qualifying, rotoscoping and all the other time-consuming things.
  18. For a strong greenish look I'd probably use (and I did) a Fuji 3513 LUT and the wheels to make the green tint even stronger. I also often use the RGB Mixer for creating looks, usually when the director wants a strong teal-orange. But I always try to add anything for a look before and through a print film LUT, or something similar to what print film does. I'm writing an article about how I made LUTs for a Netflix show, with lots of screenshots and all the settings. But I write in English really slowly and badly. Some day, when I finally finish it, I'll put a link here. Actually, I use a 3x3 matrix in DCTL format, but it's the same thing as the RGB Mixer.
  19. If you attach an example of your grade, I could probably say something more specific than the basic "good source and good color balance is the key".
  20. I never do this, and I can't recommend it as part of look corrections. Primary color balance and exposure/contrast are the main tools for everything. You should get a good-looking image just with these, looking through a look LUT or a manually created look. Sometimes you can get an unwanted reddish or yellowish skin tone, and usually this is the only reason I do something with isolated skin. But often I don't even isolate it; I just make some hue adjustments with the hue-vs-hue curve at the per-clip corrections level. Usually it's ok if your skin tones aren't on the skin tone line on the vectorscope. It always depends on the overall color of the shot. Maybe the lighting is yellow and the director decides to keep it, so the camera WB is set in a way that preserves this yellow tint. In this case it would be wrong to select the skin tone and make it look neutral.
  21. Actually, I don't see any special 'secret' color grading technique that makes it look like this. What you see here is good lighting and a good lens, and probably also a filter, something like a Pro-Mist. I'm not a DP, so I can be wrong here. Of course, it could also be a glow effect added in post production. Talking about the actual colors: again, it looks like a usual print film LUT, or manual corrections that replicate all the typical traits of print film, like shifting blue to cyan, yellow to orange, and skin tone to red (pulled back beforehand by an overall color balance tinted toward yellow/green), plus RGB curves, soft-clipped highlights and so on. Also, the midtones have more contrast than the shadows and highlights, especially in the first screenshot. It really looks like Kodak 2383. I know it looks like there's some special technique for this look, but what you see is already baked into the source. It's shot in camera by a good DP. A good source is almost impossible to ruin in color grading; it looks good whatever you do.
  22. Can you attach a screenshot? I'm not sure I understand what you mean. I searched for "creamy skin look" and found pictures of faces lit from the top front. Probably what you mean is just a softbox placed above the camera for a flat-lit commercial beauty shot.
  23. @Daniel Tuerner In HSL and HSV, saturation works differently. In HSL, desaturating makes colors darker; in HSV, brighter. So first I make colors darker by desaturating in HSL, and then I also make them darker by increasing saturation in HSV. In the other video I select the lowest-saturation colors, because they are basically almost neutral (with little to no saturation). Then I tint them with a strong, noticeable green fill using curves. This allows me to see all neutral gray colors in green. Actually, not just true neutral grays, but also pixels with a little bit of saturation. They aren't neutral, of course, but with any footage shot on a real camera this is ok and even preferable, because of noise and lots of other imperfections of the real world. If I made a node tree that only indicated true neutral gray, it would be unusable. For example, even on expensive color checkers the dark gray and light gray patches have slightly different tints. You can't make them all look 100% neutral using only the WB control in RAW or an RGB gain operation in linear gamma. And in my example (a Macbeth ColorChecker), the neutral patches aren't even designed to be actually neutral. Only 2 or 3 of them are supposed to be; I don't remember which ones exactly, but definitely not the brightest one. Also, I forgot to set de-noising to the highest quality in the video. This is essential; otherwise the denoiser can desaturate some colors, which makes the whole node tree useless. I added this as a pinned comment.
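The HSL vs HSV difference in the first sentences is easy to verify with Python's standard colorsys module (which calls HSL "HLS"): desaturating pure red in HSL lands on mid gray, while in HSV it lands on white.

```python
import colorsys

r, g, b = 1.0, 0.0, 0.0  # pure red

# HSL: desaturating moves toward a gray at the same lightness, L = 0.5.
h, l, s = colorsys.rgb_to_hls(r, g, b)
hsl_desat = colorsys.hls_to_rgb(h, l, 0.0)   # (0.5, 0.5, 0.5): darker

# HSV: desaturating moves toward a gray at the same value, V = 1.0.
h, s, v = colorsys.rgb_to_hsv(r, g, b)
hsv_desat = colorsys.hsv_to_rgb(h, 0.0, v)   # (1.0, 1.0, 1.0): brighter
```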
  24. Can the VFX department confirm that the source footage and the VFX shot match 100% on their side? You should compare the files with and without VFX. This probably isn't their fault; maybe they received incorrectly encoded ProRes. For example, Premiere Pro has some issues with video/full levels on DNxHR 444 import. Maybe somebody did something wrong somewhere in the pipeline before VFX. Import the shot that VFX received for their work, import the shot with their work done, and compare them.