Keidrych wasley

Posts posted by Keidrych wasley

  1. 7 minutes ago, Adéyẹmi said:

    True that, but what I am simply putting out for someone new in colour is to start small. I have seen people two months into colour looking for tutorials on how to make their own DCTL. It's totally unnecessary to go complex when you don't understand the basics. Like any other artistic practice: you shouldn't jump to playing chords if you don't know your basic tonic solfa. People are too overwhelmed with the technology and tools. They want to start big rather than basic. The other day I had a conversation with a film student; she was quick to talk about Kodak this and that, but didn't know what printer lights are.

    Wholeheartedly agree. A simple tone curve, sat and LGG can be immensely powerful. 

    I just think understanding a tool's limitations is an important part of learning how to use it.

    • Like 1
  2. 2 hours ago, Jamie Neale said:

    Really solid advice that; becoming a jedi of the LGG and extended tools will make for a much more versatile colourist. A good exercise is to take your favourite LUTs, deconstruct them and try emulating them using primaries. When I started I definitely went plugin/LUT heavy, but now I find most of what needs to be done can be achieved with mostly primaries.

    LUTs/plugins still play a part for me but they end up being for something very specific. That said, Raven Grade is a great addition to the toolbox and well worth checking out, even if it's just to understand what the different sliders do to your image.


    Try recreating Yedlin's LUT, or Mitch's 2383, with LGG and you'll soon come up against a wall of complexity.

    Recreating a complex print film LUT using primaries is impossible. The maths available in LGG can't get you even remotely close, because it is simply too basic.

    This is a good reason why studying colour science and LUT building has value: it helps you understand the underlying math and what is happening 'under the hood'.

    Learning to use LGG and learning about LUT building (along with the more complex colour science that LUT building entails) are equally important and both of high value.

    From memory, the sliders in Ravengrade are mostly simple operations, e.g. adding gain in linear for exposure, changing the tone curve for 'volume', etc. The complexity is in the data collection and in the implementation of the data used to build the looks. For example, Mitch supplied a print film emulation based upon a very large data set and implemented extremely smoothly. Take a look at some of those looks as a graphical 3D cube: there's some very advanced colour science on display, such as extremely smooth outer-gamut curvature, that made me smile.
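
    As a rough illustration of those two slider types (purely my own simplified sketch, not Ravengrade's actual code): exposure as a gain in linear light, and a 'volume'-style adjustment as a contrast curve pivoted around 18% grey.

    ```python
    import numpy as np

    def exposure_gain(rgb_linear, stops):
        # Exposure as multiplicative gain in linear light: +1 stop doubles the values.
        return np.asarray(rgb_linear, dtype=float) * (2.0 ** stops)

    def volume_contrast(rgb_linear, contrast, pivot=0.18):
        # A crude 'volume'-style tone adjustment: contrast pivoted around 18% grey.
        x = np.asarray(rgb_linear, dtype=float)
        return pivot * (x / pivot) ** contrast

    print(exposure_gain([0.18, 0.18, 0.18], 1.0))    # -> [0.36 0.36 0.36]
    print(volume_contrast([0.09, 0.18, 0.36], 1.2))  # mid grey stays put, the ends move apart
    ```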
     

    • Like 1
  3. From memory I think you are right, and well spotted: the Isabella 18% grey card is underexposed. Arri have stated that LogC 18% grey should hit 398-400 on a 0-1023 10-bit scale, and the Isabella grey card does not hit this value. Personally I would say that for look development you ideally want to use many more shots across all sorts of lighting environments; one single shot tells you very little about how a show LUT travels.
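
    For anyone who wants to sanity-check that number, here is the arithmetic using the published ARRI LogC3 (EI 800) encoding constants: 18% grey encodes to roughly 0.391, i.e. about code value 400 at 10 bits.

    ```python
    import math

    # ARRI LogC3 (EI 800) encoding constants, as published by ARRI
    a, b, c, d = 5.555556, 0.052272, 0.247190, 0.385537

    def logc3_encode(x):
        # Valid for scene-linear values above the curve's linear cut (~0.0106)
        return c * math.log10(a * x + b) + d

    grey = logc3_encode(0.18)
    print(grey, round(grey * 1023))   # ~0.391 -> code value 400 on a 0-1023 scale
    ```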

  4. 9 hours ago, Anfisa Zelentsova said:

     

    Dear Keidrych,

    Mr. Yedlin has successfully solved a very particular technical case based upon the well-defined and strictly determined conditions. Dehancer, on the contrary, is made to solve a much more multipurpose and challenging task, involving more unknown variables in the input, while delivering the consistent outcome which can be easily altered to your taste, resulting in a wide range of possible creative variations, aesthetically pleasing and technically accurate.

    Within this context, increasing the number of samples does not solve the aesthetic task and makes the entire method less versatile, i.e. more demanding for strict adherence to the conditions. In this case, according to our observations, increasing the number of samples may subtly refine the color accuracy but it is more likely to degrade the overall aesthetics and smoothness thus limiting the universal applicability of the tool under a wide range of input conditions.

    And of course, all this does not change the fundamental impossibility of distinguishing between two neighboring hues in postprocessing if they were not initially separated when shooting.

    As for the grain, it is not superimposed on the image. The image consists of grain. Consequently, reliable simulation is impossible without deconstructing and restructuring the image. In particular, the black and white points are altered (otherwise there would be no visible grain in the shadows and highlights, which does not correspond to reality). Obviously, when the grain is applied, the color changes slightly as well. This is an indication that Dehancer simulates grain just as if you were shooting on film.

    Thank you for your reply. I have some further thoughts...

    You're making an assumption that Steve Yedlin used all of those data points in his transform. The data points are there so that you know what something should look like; it doesn't mean they are all used. In order to keep the transform smooth, it is likely that fewer points are used than were measured. In any case, those 6000 samples were there to cover a wide exposure scale; there were 230-odd colour points from the chromaticity diagram. If we were to break a hue into 16 slices, that would only be about 14 data points across a hue's saturation range; the rest of the data is the exposure range, because Steve went from -8 to +3 stops in half-stop increments. So with that in mind, 14 points per hue slice/segment isn't that much, and certainly not compared with the number of data points actually contained within, e.g., a 32× cube. So it becomes more about having smart scattered-data interpolation rather than the number of points?
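
    To make the scattered-data idea concrete, here is a minimal sketch (placeholder data, not Yedlin's, and SciPy's RBFInterpolator is just one of many possible fitting methods) of turning a few hundred measured source/target colour pairs into a smooth, dense LUT grid:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    src = rng.random((300, 3))      # ~300 measured source colours (placeholder data)
    dst = src ** 1.2                # placeholder "measured" target colours

    # Fit a smooth function through the scattered measurements
    fit = RBFInterpolator(src, dst, kernel='thin_plate_spline', smoothing=1e-3)

    # Resample that function onto a regular 33-point grid -> a dense 3D LUT
    n = 33
    axis = np.linspace(0.0, 1.0, n)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing='ij'), axis=-1)
    lut = fit(grid.reshape(-1, 3)).reshape(n, n, n, 3)
    print(lut.shape)                # (33, 33, 33, 3)
    ```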

     

    Regarding hues, I understand it can be the case that one camera may see two hues where a second sees only one, but is the Alexa, for example, fundamentally flawed enough not to be able to create a good perceptual match with film? I think there is a reason Steve went as far as creating a much more detailed spectral response model of the Alexa sensor; if you are just filming charts, I don't think that will cut it. In your examples the red of the Mavo is way off. Are you suggesting it is not possible to get the reds correct with a more accurate transform? As far as I can imagine, using colour management like ACES will lead to this type of inconsistency because ACES just uses a matrix and a tone curve; doesn't there need to be more rigorous under-the-hood colour science employed?

     

    Whilst I can well understand that some cameras are not able to separate hues as well as others, I can only imagine the Alexa captures enough information to create a good perceptual match, as Steve demonstrated. I note, however, that products like Emotive Color, also based on more in-depth colour science and data collection, have created excellent matches to Arri colour science, so I don't see why the greater body of a look can't be implemented across multiple cameras as long as the data collection and transform are rigorous enough. The issue, as far as I see it, is having more rigorous methods for taking the input camera into a 'neutral' working space from which you can then apply the look, which means having your own custom colour science. A tall order, I appreciate.

     

    When you say that Dehancer has to solve much more variation on the input, surely the case should be that the user provides Dehancer with correctly exposed images, and if not, it is for the user to balance the image before hitting Dehancer. As far as I see it, the task with Dehancer should be to convert a camera's spectral response and tone curve into a 'neutral' space, and then apply the film transforms whilst preserving mid grey throughout. From my understanding this is the path Yedlin took, and I don't see why that is a very particular technical case. Difficult and labour intensive, yes, but not particular or 'restricted'. If each input camera is well measured, it can be transformed into a working space, and from that working space looks can be applied that maintain 18% grey; ACES already attempts to do this. It just takes rigorous and difficult colour science to achieve. I think saying Yedlin's methods are some sort of edge case is a bit of an easy way out, and in my mind he demonstrates it is possible by perceptually matching multiple cameras to his target look: Red, Arri, 35mm, 65mm, Sony, etc. The whole body of his work, and his message, is that it's not about the camera so much as the math in the transform. As long as the camera captures enough information, it can be whatever you want it to be, and in my view he went a long way towards demonstrating this.
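
    In pseudocode terms, the structure I'm describing is something like the following, a minimal sketch with placeholder transforms (not any product's actual maths): normalise the camera into a neutral working space, apply the look, and compensate so that 18% grey still maps to 18% grey.

    ```python
    import numpy as np

    def input_transform(rgb):          # camera -> neutral working space (placeholder)
        return np.asarray(rgb, dtype=float) * 0.9

    def film_look(rgb):                # the 'look' itself (placeholder)
        return np.asarray(rgb, dtype=float) ** 1.05

    def apply_look_preserving_grey(rgb, grey=0.18):
        out_grey = film_look(input_transform([grey] * 3))[0]
        gain = grey / out_grey         # compensating gain so 18% grey maps to itself
        return film_look(input_transform(rgb)) * gain

    print(apply_look_preserving_grey([0.18, 0.18, 0.18]))   # stays at ~0.18
    ```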

     

    Regarding your grain emulation, how are you able to verify that grain changes colours when there is no such thing as film without grain to compare to? How do you know, for example, that grain adds yellow to deep reds? What are you using for comparison to make that judgement? I think any colour transform should be kept out of grain, or at least let the user decide with another slider. Your model also clips blacks, which I'm not a fan of and have run into trouble with; again, I think the user should decide whether their blacks are going to be clipped. I find all this particularly frustrating because your grain model is otherwise rather good, and it's great that you do not take the overlay approach.

  5. 20 minutes ago, Anfisa Zelentsova said:

     

     

    Dear Sirs,

    On behalf of Dehancer team I'd like to explain some key points about the plug-in:

    1. The plug-in offers camera profiles solely for the convenience of the creative process and they are aimed primarily at an aesthetic interpretation of the source material, and not technical matching. At the same time the peculiarities of the color rendering of different cameras can be preserved.
    2. When working with color, the technical quality of color separation for each specific camera plays a decisive role. Color separation is primarily determined by the density of the bayer color filters on the matrix. Denser filters - better color separation, but less light and more noise. If you make the filters more transparent, then the color will become worse, but with more light and less noise. For example, in RED cameras, the sensors are equipped with denser filters, which give better color separation, but less light, more noise, more demands on lighting and correct exposure.

      No existing camera sensor is capable of fully matching the color separation capabilities of film. Therefore, the best result we can achieve is always limited by the difference in color separation between the media sources.

    Perceptually matching digital sensors to film has already been demonstrated by Steve Yedlin. Further to that, Star Wars: The Last Jedi was shot 50/50 film and digital, cutting back to the same shot on film and then on digital. As Steve Yedlin described it, "It's the best display prep demo". If digital "can't match the color separation of film", how was this possible, and how is it possible that he demonstrates just that not only via his work but also in the Display Prep Demo, the Display Prep Demo Follow Up and the Resolution Demo? Using a matrix and tone curve is not enough to match one digital sensor to another, and certainly not enough to match, e.g., an Alexa to a negative scan. The lack of a match in Dehancer is not a problem with digital sensors; it's the methods being used to gather the data and the mathematical implementation of that data. If you are using a matrix and tone curve to match cameras or filming color charts, and your source film data set comes from limited measurements of color charts printed to and then measured on photographic paper, then I think that would be the source of any mismatch, not the fault of the digital sensor. I'm sure there is good reason Steve Yedlin used a SkyPanel and 6000 measurements to profile the Arri digital sensor and 5219, rather than color charts or any other method.

    I enjoy Dehancer; however, just yesterday I was trying to use it for grain and found that deep reds are shifted so considerably that I could not use it, and I don't want to have to 'correct' everything back. I have everything switched off here and all grain settings are at 0. The transform into ACES is not the cause; just turning grain on and off creates this shift.

    Grain Off:

    [attached screenshot]

    Grain On:

    [attached screenshot]

    • Like 1
  6. 2 hours ago, Anfisa Zelentsova said:

    Hello everyone, 

    Here's a new test our team did!

    Check the article about this test and download source files and grade in our blog:
    https://blog.dehancer.com/articles/dehancer-vs-16-mm-film/

    Test shots of the same scene captured with 4 different cameras:

    * ARRIFLEX 416 Plus with Kodak Vision 3 500T 7219 16 mm
    * BlackMagic Pocket 6k (Film Log with Gen5 Color)
    * Red Epic (IPP2)
    * Kinefinity Mavo LF (CinemaDNG) 

    Film developed in MOSFILM film laboratory (https://en.mosfilm.ru/dept/filmlab/), scanned to Cineon Film Log and graded in Dehancer with Kodak Vision Color Print Film 2383 profile.

    Digital footages graded in Dehancer using Kodak Vision 3 500T film profile and Kodak Vision Color Print Film 2383 profile.

    Lenses used:

    * Ultra Prime for Arriflex
    * DZOfilm pictor zoom for digital cameras

    Interesting. Thanks for putting in the work. I think it demonstrates that the colour science is quite a way off. Hopefully that means a better, more accurate 5219 profile and under-the-hood colour science are in Dehancer's future. It would be great if a company were willing to go to the lengths Steve Yedlin did to profile his film path, particularly in how the cameras were profiled, but as far as the public domain goes it seems no one has gone to those lengths yet, or had the knowledge of how to do so.

  7. 1 hour ago, lewis jacobs said:

    Thanks for the detailed reply. I think you know what I mean when I say some images look really saturated and 3D and "thick" without looking oversaturated, or like someone just raised the global saturation. I would love to learn more about specifically adjusting the hue/sat vs lum curves etc. to achieve this richness.

    The thing to bear in mind is that when you increase saturation with the saturation knob in a typical colour corrector, you are effectively increasing the luminance of all the colours and pushing them equally towards the edge of your gamut, so they become brighter, more garish and unnatural. Film saturates differently: as saturation increases, the luminance of colours decreases. Both the colour-corrector saturation and film density saturation look saturated, but with a totally different perceptual feel. Colour-corrector saturation makes an image look thinner and thinner; film density / subtractive saturation becomes more and more 'rich', 'thick' and deep looking. Film will also shift hue as luminance falls, e.g. deep, low-luminance reds might become more yellow instead of brighter and more magenta. The way film saturates is part of the reason it is still studied and modelled digitally.
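
    A rough numerical sketch of that difference (my own simplified maths, not how any particular corrector or film model is actually implemented):

    ```python
    import numpy as np

    W = np.array([0.2126, 0.7152, 0.0722])       # Rec.709 luma weights

    def knob_saturation(rgb, sat):
        # Typical saturation knob: scale chroma away from luma; saturated
        # colours get pushed brighter and towards the gamut edge.
        rgb = np.asarray(rgb, dtype=float)
        luma = np.dot(rgb, W)
        return luma + (rgb - luma) * sat

    def density_saturation(rgb, sat, density=0.25):
        # Film-style idea: as saturation goes up, luminance comes down.
        rgb = np.asarray(rgb, dtype=float)
        luma = np.dot(rgb, W)
        out = luma + (rgb - luma) * sat
        return out * (1.0 - density * (sat - 1.0))

    red = [0.6, 0.2, 0.2]
    print(knob_saturation(red, 1.5))      # brighter, more garish
    print(density_saturation(red, 1.5))   # saturated but denser / darker
    ```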

    • Like 1
    • Thanks 1
  8. 5 minutes ago, Adéyẹmi said:

    I think what you are trying to say is texture/rich contrast. A punchy looking image. Play with your luminance tools. Lift gamma gain. That's the best there is. Always lower your highlights. Keep them soft. Play with the mid tone to keep the image bulky, by lowering it too. You will start to see the difference. Keep a good reference. It's all in your lift gamma gain. Sat vs Lum is another good tool to play with.

    Lift gamma gain makes 1D adjustments using very simple, limited math, and it will affect your tone curve and contrast. Tetra / Paul Dore's Film Density will make 3D adjustments that are not possible with lift gamma gain. Tetra can make adjustments to colour that are independent of contrast and do not affect contrast at all. So if you want to lower the perceptual density / luminance of specific colours to make them feel 'rich' without affecting your contrast, Tetra can be very powerful indeed. It's important to point out that two images with the same contrast can have very different feelings of colour richness and colour 'density' depending on how the colours are being affected, and making those more complex adjustments requires tools capable of 3D colour manipulation. Another example would be Resolve's Hue vs Lum curves.
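
    As a simplified illustration (my own sketch, not Paul Dore's actual code): LGG pushes each channel through the same independent 1D curve, so it always moves the greyscale, whereas a Tetra-like or hue-vs-lum style adjustment can darken a specific colour while leaving neutral greys, and therefore the tone curve, untouched.

    ```python
    import numpy as np

    def lift_gamma_gain(x, lift, gamma, gain):
        # 1D: each channel is remapped independently through the same simple curve,
        # so any change here also changes the greyscale / contrast.
        x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
        return np.clip(x * gain + lift * (1.0 - x), 0.0, 1.0) ** (1.0 / gamma)

    def lower_red_density(rgb, amount=0.3):
        # 3D flavour: darken a pixel in proportion to how 'red' it is.
        # Neutral greys have zero red chroma, so the tone curve is untouched.
        r, g, b = np.asarray(rgb, dtype=float)
        redness = max(r - max(g, b), 0.0)
        return np.array([r, g, b]) * (1.0 - amount * redness)

    print(lift_gamma_gain([0.5, 0.5, 0.5], lift=0.0, gamma=1.0, gain=0.9))  # grey moves too
    print(lower_red_density([0.5, 0.5, 0.5]))   # grey is unchanged
    print(lower_red_density([0.6, 0.2, 0.2]))   # the red gets denser (darker)
    ```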

    • Like 1
  9. Tetra is a DCTL tool floating about online if you Google it. It is very useful for shifting the position of primaries, changing their colour and luminance without affecting the greyscale, and doing so quite cleanly.

    • Like 2
  10. Density can be used as a term to describe the lowering of colour luminance, which brings a perceptual richness to colours. If, for example, using Tetra you lower all three channels of red (red-red, red-green, red-blue), you will get a perceptual 'density' to red as the luminance is lowered. There are also tools that add saturation whilst lowering luminance to model a film-style subtractive saturation, which likewise brings a feeling of density to colour.

    • Like 1
  11. Sounds like a gamma tag issue. What are the gamma tags of your exported file? If it's 1-1-1, this will be why QuickTime and Finder are washed out: QuickTime and Finder read the gamma tag and adjust for 1-1-1 (1.96 gamma). VLC is not colour managed, so it does not adjust the file based on the gamma tag metadata and instead defaults to gamma 2.4. If you set Resolve's gamma tags to Rec709 gamma 2.4 (1-2-1), QuickTime and Finder will look correct; however, bear in mind that YouTube/Vimeo will enforce 1-1-1 gamma tags when they encode, making the image washed out again. You can help this somewhat by encoding a gamma 2.4 to gamma 2.2 conversion using a CST node at the timeline level in Resolve.
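
    Numerically, that 2.4-to-2.2 trick is just a small power-function adjustment (assuming pure power gammas, which is effectively what the CST applies with those in/out settings):

    ```python
    def gamma_2_4_to_2_2(code):
        linear = code ** 2.4              # decode assuming a 2.4 display gamma
        return linear ** (1.0 / 2.2)      # re-encode for a ~2.2 playback environment

    print(gamma_2_4_to_2_2(0.40))         # ~0.37: slightly darker, partly offsetting
                                          # the lift that 1-1-1 tagged playback adds
    ```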

  12. On 7/16/2021 at 10:50 AM, Anfisa Zelentsova said:

    Beyond the visible and creative. That's how we would describe Dehancer False Colors plugin for DaVinci Resolve that you can get for FREE!

    Here’s why False Colors is a cool tool:

    • Great instrument for technical image control;
    • Emphasises some details that otherwise aren't visible to a human eye;
    • Can help to examine and adjust the exposure;
    • Especially useful in filmmaking;
    • Reveals invisible patterns;
    • Useful for adjusting skin tone exposure.

    To download the plugin, go to our website to the “Download & Buy” section: https://www.dehancer.com/store

    Already included in Dehancer Pro toolset.

    [attached screenshot: false colour example]

    Hi,

    Can you please reinstate the ability to maintain image sharpness when using grain? With 4.1 it is now impossible to keep the original image sharpness; you are forced into having a soft image. This was not the case with v3. I'm all for authenticity, but please allow users to decide rather than enforcing a soft image.

    Thanks

  13. On 10/18/2020 at 7:37 AM, César Ricardo Alpuche said:

    Hi!

    I'm a new member of the Lowepost family and a young aspiring colorist who saved up to buy a reference-grade monitor. However, I am not 100% sure how to set up my new DM240 for accurate monitoring. Flanders themselves offer a tutorial generally describing the process of profiling for generation of a monitor LUT; however, FSI tech support told me no calibration was necessary and didn't mention anything regarding monitoring LUTs. From previous experience, I am led to believe that generating a monitor LUT to be used in Resolve Color Management is absolutely necessary for accurate monitoring, but what I do not know is exactly which settings to tweak in both the monitor and the calibration software, in my case DisplayCAL, since I can't afford FSI's own $550 USD iProfiler/CalMAN bundle.

    Most of my work is for web, seldom any TV work. So should I mess with the monitor's luminance? Should I be using Video or Data as my SDI level? What settings should I tweak when setting up the calibration in DisplayCAL?

    I am aware that monitor calibration is no easy subject, but I'm willing to learn and offer any additional information that might help me with my journey as a colorist.

    Thank you!

    If you are working in Resolve, leave video levels set to Auto and don't change the Flanders. Set Resolve colour management to Rec709 gamma 2.4. Set your Flanders to Rec709 gamma 2.4.
    Do your grade.
    After you have finished, if you want the grade to look the same once it is on Vimeo or YouTube and played back on a Mac or iPhone using Safari or Chrome, follow these steps.

    Assuming you have exported a ProRes master file and will create deliverables from the master...

    Bring the master file back into Resolve. Add a Colour Space Transform node and set the output gamma to Rec709-A. Set the node key to 75% (75% of the node effect). Then on the Deliver page choose H.265, set it to Main 10 (10-bit) and set the gamma tag to Rec709-A.
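
    For clarity, the 75% node key is (roughly speaking) just a 75/25 mix per pixel between the converted and the unconverted signal, along the lines of:

    ```python
    def node_key_mix(node_input, node_output, key=0.75):
        # Approximately what a 75% key output gain does to the node's correction
        return key * node_output + (1.0 - key) * node_input
    ```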
     

    Once you upload this to Vimeo / YouTube you should find it looks almost identical to the Flanders when viewed on a fairly modern MacBook / iPhone via Chrome or Safari. Because Firefox is not colour managed, it will not look correct.
     

    There can be a slight colour tint when using H.265 that is not there in H.264; if you get this, you can add a node before the CST node and dial in a quarter to half a point of printer lights, plus or minus magenta.
     

    This method should be used for the online deliverable only, not for client review via QuickTime etc.

     

    • Like 1
  14. 1 hour ago, KB Burnfield said:

    Corporate & online content

    I would suggest that an i1Display Pro with the bundled i1 software, calibrated to Rec709 2.4, will be more than sufficient for the work you intend to do. You need to choose 'advanced workflows' in the i1 software, then choose the 709 preset and set luminance to 100 cd/m² and gamma to 2.4. Then grade in a darkened environment, ideally with a bias light behind the monitor. If you start earning more money from grading and want to go a step up from this, I'd suggest a Blackmagic Mini Monitor for the SDI connection and one of the LCD Flanders Scientific monitors, e.g. the AM210 / BM240, or a second-hand OLED CM250. All of these monitors come calibrated with a 3D LUT, and Flanders then offer free calibration for the life of the monitor.

    • Like 1
  15. 8 minutes ago, KB Burnfield said:

    Is there a mid price calibration unit you'd recommend instead of the x-rite ?

    What do you want the calibration for? What sort of work are you doing? A step up would be the i1 Pro with LightSpace LTE, but you would need an SDI monitor I believe, though it may do ICC profiles.

  16. On 6/13/2020 at 5:12 PM, KB Burnfield said:

    I thought I'd bring this up since I'm about to invest in a X-Rite i1Display Pro to calibrate my monitors but also my MacBook Pro screen.

    One review I read said that it doesn't do well with even high end laptop screens. Has anyone had that experience?

    Any feedback on the X-Rite i1Display Pro versus another calibration device?

     

    thanks!

    I found the i1Display Pro with the X-Rite software, when set correctly, got my GUI monitor (just a rehoused South Korean Apple display) very, very close to my Flanders Scientific OLED CM250. It also worked well with a Retina MacBook Pro. However, I wouldn't use these for colour-critical work, though for client review or good-enough-to-get-by-in-a-pinch grading it's fine.

  17.  

    13 hours ago, Alexander Feichter - Schall und Rauch Bewegtbild GmbH said:

    Exactly, that's what I was wondering too, and asked in this forum then. 

    I paid full price back then. No regrets! 
    They  seem to be subtle at first sight by  just adding a little "look". But that's what matters. 

    I get more pronounced results from Pipeline, which leaves me confused when looking at your examples. Here are some examples of the Alexa IDT vs Pipeline 2383. You can see my Pipeline 2383 exhibits more yellow greens, a warmer overall white balance, green in the blues, etc. My workflow for these examples is ACES 1.1: Alexa to ACEScct, Pipeline LUT, ACEScct to Rec709.

    [Four pairs of attached screenshots comparing the Alexa IDT with Pipeline 2383]

     

    • Like 2
  18. 17 hours ago, Peter said:

    Cinegrain Pipeline is quite expensive for LUTs that don’t offer a demo or even side by side pictures on the site. Normally $999! How accurate are they to the actual film stocks?

    You can get them 50% off around Black Friday. As to accuracy, that's so hard to ever judge. Accurate to what? A film print can look so different depending on the settings used when it was processed / scanned. One thing for sure is the Pipeline LUTs look good and they are smooth, artefact-wise. The only LUT I've seen that could demonstrate accuracy would be the one created by Steve Yedlin to emulate 5218 printed to 2383. I'm sure there are other accurate LUTs, but they'll be proprietary LUTs held by post facilities like EFILM etc.

  19. On 9/6/2019 at 1:08 AM, Evan Kultangwatana said:

    Many thanks, John. As a customer of FilmConvert—which has been wonderful—I've been hoping for something to compete with the Koji LUTs, which are my gold standard but are rather limited. Hope you get a wealth of helpful customer response to create the best product possible!

    I would recommend trying Cinegrain Pipeline. I have Koji and have trialled FilmConvert Nitrate. Koji has issues that create artefacts in some situations; Pipeline 2383 v2 can match the Koji with some small adjustments. As for Nitrate, I could get something very similar to my favourite Nitrate LUT with the Pipeline LUTs, and I preferred the Pipeline version. Pipeline LUTs are extremely smooth with no artefacts / banding at all. I do find their base setup too contrasty and a little on the warm side, but this is easily adjusted. YMMV.

  20. On 9/12/2019 at 2:05 PM, Anton Meleshkevich said:

    Usually you create a LUT for a monitor, not for a camera. The LUT should be a 'log-to-video + grade' 17-point LUT.

    Basically you don't want to bake any CDL into a LUT, but your DP may want you to do that. So you should probably tell your DP to adjust temperature AND TINT in a creative way instead of just setting and forgetting 5600 for daylight and 3200 for tungsten. Or you will have to bake CDL-like corrections (or more accurate color balance in linear gamma) into LUTs for each scene to make the LUTs work properly (especially extreme teal-orange LUTs). Of course you can create CDLs on set, but I like to avoid this as much as possible. Just my personal preference.

     

    Also test your LUT with all kinds of neon lighting.

    Just to add a layer to the conversation: many DoPs use custom LUTs. Some will be designed for the camera, e.g. an .aml for the newer Alexas; other DoPs will use a custom LUT loaded into a monitor. Roger Deakins uses only one LUT and has done since he started working with the Alexa, making minor changes to saturation and contrast for some films. The LUT has a contrast curve baked in so that the image on the monitor 95% represents the final look of the finished film. That LUT is then taken into post production for a consistent look from set to grade; it's why, e.g., Sicario was graded in a week. Roger mostly shoots at 3200 or 5600 and creates the look from the lighting rather than from white balance / tint changes. I'm a fan of this 'one light', one LUT approach, but it won't work for everyone of course.

    In Resolve you can create a LUT simply by manipulating the image with nodes, then right-clicking the timeline still and exporting a 32-point cube LUT, which you can then convert to whatever format you need in Lattice, or, if it's for an Alexa Mini etc., use Arri's free color tool. In Fusion you can create 64-point cube LUTs and get even more precise. Personally I've worked in ACES and created a LUT from a Cinegrain Pipeline 2383 LUT, which I modified to my liking and then converted for in-camera use with the Alexa Mini. I've been very pleased with the results.
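
    For anyone curious what the exported cube file actually contains, here's a minimal sketch of writing one by hand (an identity LUT with a placeholder size; a real export obviously bakes in your node adjustments):

    ```python
    import itertools

    def write_cube(path, size=33, transform=lambda r, g, b: (r, g, b)):
        with open(path, "w") as f:
            f.write(f"LUT_3D_SIZE {size}\n")
            # .cube ordering: the red index varies fastest, then green, then blue
            for b, g, r in itertools.product(range(size), repeat=3):
                rr, gg, bb = transform(r / (size - 1), g / (size - 1), b / (size - 1))
                f.write(f"{rr:.6f} {gg:.6f} {bb:.6f}\n")

    write_cube("identity_33.cube")
    ```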

    • Like 1