Dmitry Lobaskov

Dehancer - film emulation plugin for DaVinci Resolve. Film profiles, halation, bloom, grain and much more. Free 2-week trial.


Good day everyone,

New update for Dehancer is out.

What's new?

  • OpenCL support for Windows / AMD
  • CUDA performance greatly optimised (20-30% faster now)
  • ACES False Color bugfixes for macOS & Windows
  • Fixed Total Impact behavior in the Gate Weave tool for macOS & Windows (Impact now reduces motion amplitude instead of ‘opacity’)
  • Other bugfixes for macOS & Windows
  • Optimizations for macOS & Windows

This is a free upgrade for all Dehancer license owners.

If you are not a Dehancer license owner yet, you can buy a license on our website.

[image attachment: IMG_4960.JPG]


Meet Fujifilm Instax film profiles in Dehancer!

Two versions are included:

  1. Fujifilm Instax
  2. Fujifilm Instax (Digital Intermediate)

The first profile reproduces Instax colors naturally (by means of Dehancer's unique film sampling technology).

The second profile represents Fujifilm Instax color in a ‘hybrid’ process, where the scanned print is additionally edited with black-and-white levels correction.

Just push the ‘Update’ button in your Dehancer plugin.

Download plugins or buy license:
www.dehancer.com

[image attachment: IMG_5033.JPG]

Edited by Anfisa Zelentsova

Hello everyone,

We have released a new update: Dehancer 4.2.1

What's new:

  • Now compatible with DaVinci Resolve 17.3
  • Fixed LUT Generator behavior with Camera Source
  • Profile database updated
  • Optimisations for macOS and Windows

NOTE: Please download and update the profiles after installation (the Download button will appear inside the plugin).

Download:
https://www.dehancer.com/#download

[image attachment: IMG_5671.JPG]


Hello everyone,

We're glad to announce that the beta test for Dehancer Pro 5.0.0 is open!

Dehancer Pro 5.0.0 Beta 7

  • Print profiles with Target White setting
  • Kodak Vision 2383 Color Print Film profile
  • Kodak Professional Endura Glossy Color paper profile
  • Cineon Film Log support
  • DVR WG/Rec.709 source input
  • Source white balance (Temperature and Tint compensation)
  • Optimizations


You can install the beta in parallel with existing versions.

More information here: https://blog.dehancer.com/beta-testing/


The beta is available for Mac and Windows (CUDA) at the moment.

 

[image attachment: Dehancer_5_Beta.jpg]


Good day everyone, 

Dehancer 5.0.0 update is finally out!

What's new in DEHANCER PRO / DEHANCER PHOTO EDITION:

* Print profiles with Target White setting
* Kodak Vision 2383 Color Print Film profile
* Kodak Professional Endura Glossy Color paper profile
* Cineon Film Log support
* DVR WG source input
* Source white balance (Temperature and Tint compensation)
* Optimizations

What's new in DEHANCER LITE:

* Source white balance (Temperature and Tint compensation)
* Optimizations

You can install the new version together with existing versions.

This is a free update for Dehancer plugin owners.

Download & Get FREE Trial:
https://www.dehancer.com/#download

Buy license:
https://www.dehancer.com/store

 

[image attachment]


Good day everyone,

We are thrilled to announce our NEW plug-in: Dehancer Film 1.0.0 for Adobe Photoshop / Lightroom Classic

For macOS only at the moment.

Our team would like to offer a free 2-week trial to anyone who is curious to try it. Head over to our website to get yours:
https://www.dehancer.com/store/pslr

You can learn more about the installation of this plugin in our dedicated article:
https://blog.dehancer.com/articles/dehancer-film-plugin-for-adobe-photoshop-and-lightroom-classic-installation/

[image attachment]


Hello everyone, 

Here's a new test our team did!

Read the article about this test, and download the source files and grade, from our blog:
https://blog.dehancer.com/articles/dehancer-vs-16-mm-film/

Test shots of the same scene captured with 4 different cameras:

* ARRIFLEX 416 Plus with Kodak Vision 3 500T 7219 16 mm
* Blackmagic Pocket 6K (Film Log with Gen5 Color)
* Red Epic (IPP2)
* Kinefinity Mavo LF (CinemaDNG) 

The film was developed at the MOSFILM film laboratory (https://en.mosfilm.ru/dept/filmlab/), scanned to Cineon Film Log and graded in Dehancer with the Kodak Vision Color Print Film 2383 profile.

The digital footage was graded in Dehancer using the Kodak Vision 3 500T film profile and the Kodak Vision Color Print Film 2383 profile.

Lenses used:

* Ultra Prime for the Arriflex
* DZOfilm Pictor zoom for the digital cameras

2 hours ago, Anfisa Zelentsova said:

Here's a new test our team did! […]

Interesting. Thanks for putting in the work. I think it demonstrates that the colour science is quite a ways off. Hopefully that means a better and more accurate 5219 profile, and better under-the-hood colour science, are in Dehancer's future. It would be great if a company were willing to go to the lengths Steve Yedlin did to profile his film path, particularly in how the cameras were profiled. In the public domain, though, it seems no one has gone to those lengths yet, or had the knowledge of how to do so.

35 minutes ago, Keidrych wasley said:

I think it demonstrates that the colour science is quite a ways off

Agree, Dehancer is guesswork at best. I beta tested Ravengrade over the weekend, and it seriously put all the "film profiling" apps to shame.

17 hours ago, Keidrych wasley said:

Interesting. Thanks for putting in the work. I think it demonstrates that the colour science is quite a ways off. […]

17 hours ago, Tom Evans said:

Agree, Dehancer is guesswork at best. […]

I'd love to hear more of your thoughts/criticism on this sample. I'm still fairly new to film emulation but have read a lot of Yedlin's papers/articles and watched his demos and would like to learn more about what Dehancer could improve on.

To my eye, some cameras (like the BMPCC 6K) show clear differences in luminance and saturation (like in the red of the shirt), but they seem correctable. Even Yedlin's examples/matches had differences noticeable to the naked eye, which he himself admitted.

I'm not familiar with Ravengrade but I'll check it out!

On 12/1/2021 at 2:45 PM, Keidrych wasley said:

Interesting. Thanks for putting in the work. […]

On 12/1/2021 at 3:23 PM, Tom Evans said:

Agree, Dehancer is guesswork at best. […]

On 12/2/2021 at 8:35 AM, Travis Ward said:

I'd love to hear more of your thoughts/criticism on this sample. […]

Dear Sirs,

On behalf of the Dehancer team, I'd like to explain some key points about the plug-in:

  1. The plug-in offers camera profiles solely for the convenience of the creative process. They are aimed primarily at an aesthetic interpretation of the source material, not at technical matching. At the same time, the peculiarities of each camera's color rendering can be preserved.
  2. When working with color, the technical quality of color separation for each specific camera plays a decisive role. Color separation is primarily determined by the density of the Bayer color filters on the sensor. Denser filters give better color separation, but pass less light and produce more noise; make the filters more transparent and the color becomes worse, but with more light and less noise. For example, RED sensors are equipped with denser filters, which give better color separation but less light, more noise, and more demands on lighting and correct exposure.

    No existing camera sensor can fully match the color separation capabilities of film. Therefore, the best result we can achieve is always limited by the difference in color separation between the media sources.
20 minutes ago, Anfisa Zelentsova said:

On behalf of the Dehancer team, I'd like to explain some key points about the plug-in. […]

Perceptually matching digital sensors to film has already been demonstrated by Steve Yedlin. Further to that, Star Wars: The Last Jedi was shot roughly 50/50 film and digital, with cutbacks to the same shot on film and then digital. As Steve Yedlin described it, "It's the best display prep demo". If digital "can't match the color separation of film", how was this possible, and how is it possible that he demonstrates just that not only via his work, but also in the Display Prep Demo, the Display Prep Demo Follow-Up and the Resolution Demo? Using a matrix and tone curve is not enough to match one digital sensor to another, and certainly not enough to match, e.g., an Alexa to a negative scan. The lack of a match in Dehancer is not the fault of digital sensors; it's the methods being used to gather the data and the mathematical implementation of that data. If you are matching cameras with a matrix and tone curve or by filming color charts, and your source film data set comes from limited measurements of color charts printed to and then measured on photographic paper, then I think that would be the source of any mismatch, not the digital sensor. I'm sure there is good reason Steve Yedlin used a SkyPanel and 6,000 measurements to profile the ARRI digital sensor and 5219, rather than color charts or any other method.
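To make the point concrete, here is a toy sketch (this is not Dehancer's or Yedlin's actual code; the LUT below is invented) of why a matrix plus per-channel tone curve is structurally unable to express a hue-selective adjustment, while a 3D LUT can:

```python
import numpy as np

def matrix_and_curve(rgb, M, curve):
    """Classic matching transform: 3x3 matrix, then a per-channel 1D curve.
    Separable, so it cannot treat 'reddish' colours differently from others."""
    return curve(np.asarray(rgb) @ M.T)

def apply_lut(rgb, lut):
    """A 3D LUT maps each RGB triple independently, so it can encode arbitrary
    channel cross-coupling. Nearest-neighbour lookup for brevity; real tools
    use trilinear or tetrahedral interpolation."""
    n = lut.shape[0]
    idx = np.clip((np.asarray(rgb) * (n - 1)).round().astype(int), 0, n - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

# Build a toy 'film-like' LUT that desaturates only strongly red colours --
# a hue-selective move no matrix-plus-curve combination can reproduce.
n = 17
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, n)] * 3, indexing="ij"), axis=-1)
reddish = grid[..., 0] > grid[..., 1] + grid[..., 2]
lut = grid.copy()
lut[reddish] = 0.6 * lut[reddish] + 0.4 * lut[reddish].mean(-1, keepdims=True)

print(apply_lut([0.9, 0.1, 0.1], lut))  # red is pulled toward grey
print(apply_lut([0.1, 0.9, 0.1], lut))  # green is untouched by the red-only move
```

The invented LUT here stands in for whatever scattered-data transform a profiling pipeline actually fits; the structural point is only that the mapping is a full function of (R, G, B), not three independent channel curves.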

I enjoy Dehancer, but just yesterday I was trying to use it for grain and found that deep reds are shifted so considerably that I could not use it, and I don't want to have to 'correct' everything back. I have everything switched off here, and all grain settings are at 0. The transform into ACES is not the cause; just turning grain on and off creates this shift.

Grain Off:

[image attachment: Grain Off screenshot]

 

Grain On:

[image attachment: Grain On screenshot]

2 hours ago, Keidrych wasley said:

Perceptually matching digital sensors to film has already been demonstrated by Steve Yedlin. […] Just turning grain on and off creates this shift.

 

Dear Keidrych,

Mr. Yedlin has successfully solved a very particular technical case based upon well-defined and strictly determined conditions. Dehancer, on the contrary, is made to solve a much more multipurpose and challenging task, involving more unknown variables in the input, while delivering a consistent outcome that can easily be altered to taste, resulting in a wide range of possible creative variations that are aesthetically pleasing and technically accurate.

Within this context, increasing the number of samples does not solve the aesthetic task, and it makes the entire method less versatile, i.e. more demanding of strict adherence to the conditions. According to our observations, increasing the number of samples may subtly refine the color accuracy, but it is more likely to degrade the overall aesthetics and smoothness, thus limiting the universal applicability of the tool under a wide range of input conditions.

And of course, none of this changes the fundamental impossibility of distinguishing between two neighboring hues in post-processing if they were not separated at the moment of shooting.

As for the grain, it is not superimposed on the image: the image consists of grain. Consequently, reliable simulation is impossible without deconstructing and restructuring the image. In particular, the black and white points are altered (otherwise there would be no visible grain in the shadows and highlights, which would not correspond to reality). Naturally, when grain is applied, the color changes slightly as well. This is an indication that Dehancer simulates grain just as if you were shooting on film.
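Schematically, the difference between overlaying grain and forming the image from grain can be sketched like this (a toy illustration only, not our actual algorithm; all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def overlay_grain(img, strength=0.05):
    """Naive overlay: additive noise on top of the image. The mean is
    preserved, so flat black stays (on average) at 0 and shows no lift."""
    return img + strength * rng.standard_normal(img.shape)

def formed_from_grain(img, grains_per_pixel=400):
    """Monte-Carlo sketch of 'the image consists of grain': each pixel's tone
    is the fraction of simulated grains that developed. The black and white
    points are lifted/softened so texture survives in shadows and highlights,
    and the mean tone shifts slightly -- which is why enabling film-style
    grain can also change colour."""
    lifted = 0.02 + 0.96 * img                      # altered black/white points
    developed = rng.random(img.shape + (grains_per_pixel,)) < lifted[..., None]
    return developed.mean(axis=-1)

flat_black = np.zeros((8, 8))
print(overlay_grain(flat_black).min())      # additive noise dips below zero
print(formed_from_grain(flat_black).mean()) # black point lifted: grain is intrinsic
```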

9 hours ago, Anfisa Zelentsova said:

Mr. Yedlin has successfully solved a very particular technical case based upon well-defined and strictly determined conditions. […]

Thank you for your reply. I have some further thoughts...

You're making an assumption that Steve Yedlin used all of those data points in his transform. The data points are there so that you know what something should look like; this does not mean they are all used. In order to keep the transform smooth, it is likely that fewer points are used than measured. In any case, those 6,000 samples were there to cover a wide exposure scale; there were 230-odd colour points from the chromaticity diagram. If we were to break hue into 16 slices, that would only be about 14 data points across a hue's saturation range; the rest of the data is the exposure range, because Steve went from -8 to +3 stops in half-stop increments. So with that in mind, 14 points per hue slice/segment isn't that much, and certainly not compared to the number of data points actually contained within, e.g., a 32-point cube. So it becomes more about having smart scattered-data interpolation rather than the number of points?
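Working the numbers quickly (using only the figures quoted above, not Yedlin's actual data set):

```python
colour_points = 230                      # points sampled from the chromaticity diagram
stops = [s / 2 for s in range(-16, 7)]   # -8 to +3 stops in half-stop increments
exposure_steps = len(stops)

total_samples = colour_points * exposure_steps
points_per_hue_slice = colour_points // 16   # hue wheel split into 16 slices

print(exposure_steps)        # 23 exposure steps
print(total_samples)         # 5290 -- in the ballpark of the ~6000 quoted
print(points_per_hue_slice)  # ~14 colour points per hue slice
```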

 

Regarding hues, I understand it can be the case that one camera may see two hues where a second sees one, but is, for example, the Alexa fundamentally flawed enough not to be able to create a good perceptual match with film? I think there is a reason Steve went as far as creating a much more detailed spectral response model of the Alexa sensor; if you are just filming charts, I don't think that will cut it. In your examples the red of the Mavo is way off. Are you suggesting it is not possible to get the reds correct with a more accurate transform? As far as I can imagine, using colour management like ACES will lead to this type of inconsistency, because ACES just uses a matrix and tone curve; there needs to be a more rigorous under-the-hood colour science employed?

 

Whilst I can well understand that some cameras are not able to separate hues as well as others, I can only imagine the Alexa captures enough information to create a good perceptual match, as Steve demonstrated. I note, however, that products like Emotive Color, also based on more in-depth color science and data collection, have created excellent matches to ARRI color science, so I don't see why the greater body of a look can't be implemented across multiple cameras as long as the data collection and transform are rigorous enough. The issue, as far as I see it, is having more rigorous methods for taking the input camera into a 'neutral' working space from which you can then apply the look, which means having your own custom color science. A tall order, I appreciate.

 

When you say that Dehancer has to handle much more variation in the input, surely the user should provide Dehancer with correctly exposed images, and if not, it is for the user to balance the image before hitting Dehancer. As far as I see it, the task for Dehancer should be to convert a camera's spectral response and tone curve into a 'neutral' space, and then apply the film transforms whilst preserving mid grey throughout. From my understanding this is the path Yedlin took, and I don't see why that is a very particular technical case. Difficult and labour-intensive, yes, but not particular or 'restricted'. If each input camera is well measured, it can be transformed into a 'working space', and from that working space looks can be applied that maintain 18% grey; ACES already attempts to do this. It just takes rigorous and difficult colour science to achieve. I think saying Yedlin's methods are some sort of edge case is a bit of an easy way out, and in my mind he demonstrates it is possible by perceptually matching multiple cameras to his target look: RED, ARRI, 35mm, 65mm, Sony, etc. The whole body of his work, and its message, is that it's not about the camera so much as the math in the transform. As long as the camera measures enough information it can be whatever you want it to be, and in my view he went a long way towards demonstrating this.
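The pipeline I mean could be sketched like this (placeholder numbers throughout; the matrix and curve are made up, not any real camera's input transform or film look):

```python
import numpy as np

MID_GREY = 0.18

def idt(rgb):
    """Hypothetical per-camera input transform into a neutral working space.
    Rows of the matrix sum to 1, so neutral greys map to themselves."""
    M = np.array([[1.05, -0.03, -0.02],
                  [-0.04, 1.08, -0.04],
                  [0.00, -0.06, 1.06]])
    return np.asarray(rgb) @ M.T

def look(rgb, gamma=0.8):
    """Hypothetical film look, normalised so that 18% grey is pinned."""
    return MID_GREY * (np.asarray(rgb) / MID_GREY) ** gamma

grey = np.array([MID_GREY] * 3)
print(look(idt(grey)))  # stays at 0.18: matrix rows sum to 1 and the look
                        # is anchored at mid grey, while other tones move
```

Each camera would get its own `idt`, and the shared `look` then behaves consistently across all of them; that is the whole point of the neutral-working-space design.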

 

Regarding your grain emulation: how are you able to verify that grain changes colours when there is no such thing as film without grain to compare to? How do you know, for example, that grain adds yellow to deep reds? What are you using for comparison to make that judgement? I think any colour transform should be kept out of grain, or at least let the user decide with another slider. Your model also clips blacks, which I'm not a fan of and have run into trouble with; again, let the user decide whether their blacks are going to be clipped. I find all this particularly frustrating because your grain model is otherwise rather good, and it's great that you do not take the overlay approach.

On 12/2/2021 at 8:35 AM, Travis Ward said:

I'm still fairly new to film emulation but have read a lot of Yedlin's papers/articles and watched his demos and would like to learn more about what Dehancer could improve on.

I don't want to talk Dehancer down, but it's not accurate film profiling, and that's why I call it guesswork. Ravengrade doesn't claim to emulate film, but you can easily tell that the looks are built on some advanced algorithms, and there is some very interesting colour-channel cross-coupling going on.

I'm also one of those who downloaded Mitch Bogdanowicz's Kodak LUTs when they were available on Lowepost a couple of years ago, and as far as I can tell the "Niran" look in Ravengrade matches quite accurately. Mitch is not on their contributor list, but they state there are color scientists involved in creating their looks. ARRI's color scientist Florian Utsi is also on their contributor list, and he is known for using some complex film print data from ARRI in his look creation, so that might explain why Ravengrade feels much more cinematic than anything else I have tried.




 

