Marc Wielage

  1. You'd have to ask Mark. The link for the page with a shot of his node tree is here: My guess is that it's a key on the window. One trick of these fixed node trees is that you have to be very careful of anything adjusting level, because that will affect keys on down the signal path. For that reason, you either have to break out Parallel nodes or just structure the keys earlier in the chain. There are valid reasons to go either way.
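The caution above about level changes affecting keys downstream can be shown numerically. This is an illustrative NumPy sketch, not Resolve code: a "qualifier" is modeled as a fixed luminance threshold, and a small upstream gain change alters which pixels it selects, even though the key itself was never touched.

```python
import numpy as np

# Toy illustration (not Resolve itself): a downstream qualifier key
# pulled at a fixed luminance threshold shifts when an upstream node
# changes levels -- the reason fixed node trees break keys out into
# Parallel nodes or pull them earlier in the chain.

rng = np.random.default_rng(0)
frame = rng.uniform(0.0, 1.0, size=(64, 64))  # stand-in luminance plane

def qualifier_key(luma, threshold=0.7):
    """Downstream 'key': select pixels above a fixed luma threshold."""
    return luma > threshold

key_before = qualifier_key(frame)

# An earlier node lifts overall level by 10%. The key node is unchanged,
# but the set of pixels it selects grows anyway.
lifted = np.clip(frame * 1.10, 0.0, 1.0)
key_after = qualifier_key(lifted)

print("pixels keyed before upstream gain:", key_before.sum())
print("pixels keyed after upstream gain: ", key_after.sum())
```

The same monotonic logic explains why restructuring (Parallel nodes, or keys earlier in the chain) is the fix: it takes the key's input out of the path of later level adjustments.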
  2. I have had cases where I pull a soft highlight key, qualify it carefully, and then boost the levels a bit just to "pop" the scene more. I just had this happen a few weeks ago on a day-for-night scene where I had to drastically darken the shot, but wanted the headlights alone to pop out at a normal (for night) level. This had to be tracked carefully, but there are several ways to do it. Highlights are helpful sometimes, as are the Log controls -- it kind of depends if you're coming up or coming down.
  3. I have generally duplicated the shot on the edit page and then cut in a shape with an alpha to cover up the flaw, then color correct on that second clip to match levels, and then soften the shape to make it blend in better. Patch Replacer essentially does the same thing automatically, but without the benefit of manual control. Both are useful under the right conditions.
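The manual patch workflow above boils down to a feathered alpha composite. Here is a crude NumPy sketch of that idea (the shape, radii, and level match are all made-up stand-ins): a corrected duplicate is composited over the original through a soft-edged circular alpha, which is roughly what Patch Replacer automates.

```python
import numpy as np

# Crude version of the manual patch described above: composite a
# level-matched duplicate over the original through a softened
# circular alpha so the fix blends at the edges.

rng = np.random.default_rng(5)
frame = rng.uniform(0.4, 0.6, size=(64, 64))
patch_src = frame * 1.05  # stand-in for the color-corrected duplicate clip

yy, xx = np.mgrid[0:64, 0:64]
dist = np.hypot(yy - 32, xx - 32)

# Soft-edged shape: fully opaque inside radius 8, feathered out to
# radius 14 -- the "soften the shape" step that makes the patch blend.
alpha = np.clip((14 - dist) / 6.0, 0.0, 1.0)

result = alpha * patch_src + (1 - alpha) * frame
```

Inside the solid core the patch replaces the frame outright; outside the feather the original is untouched; the ramp in between hides the seam.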
  4. This is often a bone of contention in the film restoration business. My take is you need to do NR last, because the contrast that happens in correction could exaggerate noise problems more in some cases than others. If you apply the NR as the initial node and then correct after that, it's better for caching but you will see unequal noise levels caused by different settings cut-to-cut and scene-to-scene. I think it's a decision that has to be made differently per project. I'm generally a fan of not noise-reducing unless we really need it, so I'll do it scene-wide for X number of shots, but then turn it off once the exposure goes back to normal. Added grain is kind of a separate issue.
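The trade-off above can be made concrete with a quick numerical sketch (illustrative numbers, not any real footage): a contrast boost applied after noise is baked in amplifies that noise in proportion to the gain, so two shots from the same scene graded with different settings end up with unequal noise, which is the argument for doing NR after correction, per shot.

```python
import numpy as np

# Rough numerical sketch: contrast applied after the noise exists
# amplifies it, and different grades amplify it by different amounts.

rng = np.random.default_rng(1)
clean = np.full((256, 256), 0.4)                     # flat mid-grey patch
noisy = clean + rng.normal(0.0, 0.01, clean.shape)   # sensor-ish noise

def contrast(img, gain):
    """Simple contrast expansion around mid-grey."""
    return (img - 0.5) * gain + 0.5

mild = contrast(noisy, 1.2)    # lightly graded shot
strong = contrast(noisy, 2.5)  # heavily graded shot in the same scene

print("noise std, mild grade:  ", mild.std())
print("noise std, strong grade:", strong.std())
# Both shots started from identical footage, but the strong grade now
# carries roughly twice the noise -- NR applied as the first node would
# have had to anticipate this cut-to-cut difference.
```

This is why a single up-front NR setting cached at the head of the chain can look uneven across a scene, even though it is friendlier to caching.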
  5. I haven't seen this problem in 15.3.1 on Mac OSX 10.14.5, either with the Mini Panel or the Advanced Panels. How many nodes? What specific hardware are you using? I typically use anywhere between 15 and 30 nodes, but there's a lot of "it depends" in there. I do run with quite a few nodes bypassed when they're not needed, but I certainly do a lot of enable/bypass actions when I'm doing a trim pass.
  6. Another thing you could do is qualify a key on upper-mids and highlights, soften it to reduce artifacts, add a little NR, and desaturate there. The effect is different than a Lum vs. Sat curve and (to me) not as destructive if you're very careful. I do this all the time when I need to subtly go in the opposite direction and desaturate blacks without artifacts. But I try not to push it too hard. Noted DP Steve Yedlin has some things to say about the "film look" and digital cameras, and he has some interesting theories and conclusions:
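The technique above — a softened key on the upper-mids and highlights, desaturated inside the key — can be sketched outside Resolve. This is not Resolve's qualifier, just the underlying math, with made-up threshold and feather values: build a soft luma mask and pull only the keyed pixels toward grey, leaving the rest (e.g. the blacks, if you invert the idea) untouched.

```python
import numpy as np

# Underlying idea of a softened luminance qualifier + desaturate,
# as opposed to a Lum vs Sat curve. Thresholds and amounts are
# illustrative starting points, not recommended values.

rng = np.random.default_rng(2)
rgb = rng.uniform(0.0, 1.0, size=(32, 32, 3))

luma = rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec.709 luma weights

# Soft key: 0 below luma 0.5, ramping to 1 by 0.8 -- the "softening"
# that reduces edge artifacts in a real qualifier.
key = np.clip((luma - 0.5) / 0.3, 0.0, 1.0)

amount = 0.6  # how far toward grey the keyed region is pulled
grey = luma[..., None].repeat(3, axis=-1)
desat = rgb + key[..., None] * amount * (grey - rgb)

def saturation(img):
    """Crude per-pixel saturation proxy: mean max-min channel spread."""
    return (img.max(axis=-1) - img.min(axis=-1)).mean()

print("mean saturation before:", saturation(rgb))
print("mean saturation after: ", saturation(desat))
```

Because the key ramps smoothly, there is no hard boundary between treated and untreated pixels, which is what keeps the result less destructive than a steep curve.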
  7. Another recommendation for PixelTools: they did a terrific job of assembling utilities, often-used nodes, and looks in one package. And unlike a LUT, they can be adjusted to work in any color space and camera format.
  8. I would point to the Oscar-winning film The Artist as an example where they not only digitally created "halation," but they went a step further and made modern 35mm color film look like 1920s B&W nitrate film. That's a very clever trick, and I think it helped sell the look and period of the film very well. Only Richard Deusy at Duboicolor in France knows exactly how they did it, but I think you can bet there were carefully-qualified keys, a glow filter of some kind, and some blur here and there. Great lighting and filtration in-camera probably helped as well. It's a great-looking film -- all shot on film, but (ironically) done at Red Studios on Cahuenga Blvd. in Hollywood.
  9. BTW, here's an interesting interview I missed a few months ago where Stefan was part of Panasonic's introduction of their new GZ2000 OLED display: This is the longest I've ever heard Mr. Sonnenfeld speak about anything.
  10. Stefan doesn't give many interviews, but he did speak with Steve Hullfish for his book The Art & Technique of Digital Color Correction, and I found a lot of what he said enlightening. My take is that Sonnenfeld works very quickly and uses simpler techniques than a lot of people suspect. He does extraordinarily good work, and there are few other people in Hollywood (or anywhere else) who grasp the importance of client relations, sales, business, and technology and have the ability to balance all of them the way Stefan Sonnenfeld does.
  11. Yes, you can grab a still frame of the "good" image and then paste that within the keyframe area and that should recall the keyframed color correction. You have to kind of figure out a strategy for dealing with color fading. To tell you the truth, I don't think Resolve (or Baselight or Lustre or whatever) is the best tool for that job: you'd be better off doing kind of a "best light" and then sending the files through an MTI DRS or a similar restoration system just to "stabilize" the color fading, and then take those files and color correct them for a final look. The problem with film color fading is that it's non-linear, meaning that one side of the image is going to be more faded than the other, so you could wind up having to use a lot of power windows and masks on top of the keyframe issue. The labor involved would be ridiculous, to the point where it'd drive you mad. One thing I know that Lowry Digital did back in the day (when I was there around 2010-2012) was they would break the image down into RGB, then process and reduce the flicker and fading on a channel by channel basis. Once that was done, they would merge the RGB files back together again. That may be more complicated than what you want or need to do, but it's interesting to note that generally one channel is flickering or wavering more than the others, and that might give you a clue as to how to attack the problem.
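The per-channel approach described above (split into RGB, deflicker each channel on its own, merge back) can be sketched in a few lines. This is a bare-bones illustration with synthetic frames, not what Lowry's pipeline actually ran: each channel's per-frame mean is measured and rescaled toward a reference, so the channel that wavers most gets the most correction.

```python
import numpy as np

# Bare-bones per-channel deflicker: measure each channel's mean level
# per frame and rescale toward a common reference before merging the
# channels back together. Here the blue channel wavers the most.

rng = np.random.default_rng(3)
n_frames = 24
base = rng.uniform(0.3, 0.6, size=(8, 8, 3))         # static scene
flicker = np.ones((n_frames, 3))
flicker[:, 2] += 0.15 * np.sin(np.arange(n_frames))  # blue flickers

frames = base[None] * flicker[:, None, None, :]      # (frames, h, w, 3)

def deflicker(frames):
    out = frames.copy()
    for c in range(3):                               # R, G, B separately
        means = frames[..., c].mean(axis=(1, 2))     # per-frame level
        target = means.mean()                        # crude reference
        out[..., c] *= (target / means)[:, None, None]
    return out

fixed = deflicker(frames)
blue_flicker_before = frames[..., 2].mean(axis=(1, 2)).std()
blue_flicker_after = fixed[..., 2].mean(axis=(1, 2)).std()
print("blue-channel flicker before:", blue_flicker_before)
print("blue-channel flicker after: ", blue_flicker_after)
```

A real restoration system would use a smoothed or windowed reference rather than a global mean, and would handle the spatially non-linear fading mentioned above with masks, but the channel-by-channel attack is the key idea.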
  12. No, in Resolve the RGB Mixer has a completely different function. I tend to use Offset very early in the grade (usually followed with a Custom Curve) to get a broad overall adjustment, and once I get the image into what I call "quasi-Rec709 space," then I can start making more precise balances. I tend to use Pots (individual RGB controls) in the Primaries to start the adjustment, but you can make a good argument for other methods. I sometimes work with film-based projects where there is no Raw data per se (that is, no Raw adjustments), so sometimes I'll use either the RGB Pots or the RGB Mixer to fix color temperature problems. In particular, it's helpful if you have an underexposed Blue channel: you can "steal" some information from R&G to give the Blue a cleaner signal to work with, which helps minimize noise. There are a lot of different ways to work, and the beauty of Resolve (or Baselight or Mistika or any top-flight system) is you can choose one of a half-dozen different methods. As long as it works, the end justifies the means. I do tend to start with a very well-balanced picture first and then degrade it later on if we need to go to (say) an extreme blue look or a bleach-bypass look. I'm not a fan of starting with an image that leans off to one side early in the signal chain, because the danger is that later on, you can wind up with distortion and noise because you're overdriving the signal (or worse, destructively crushing the signal and then being unable to normalize it in subsequent nodes or layers).
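The "steal from R&G" trick above is just a weighted channel mix. Here is a NumPy sketch with made-up weights (nothing Resolve ships, and in practice you would tune them by eye and rebalance levels afterward): a noisy, underexposed Blue channel is rebuilt mostly from the cleaner Red and Green channels, and its noise drops.

```python
import numpy as np

# Sketch of the RGB Mixer trick: rebuild a noisy, underexposed Blue
# channel partly from the cleaner Red and Green channels. The mix
# weights are an illustrative starting point only.

rng = np.random.default_rng(4)
h, w = 64, 64
r = np.full((h, w), 0.50) + rng.normal(0, 0.005, (h, w))  # clean
g = np.full((h, w), 0.45) + rng.normal(0, 0.005, (h, w))  # clean
b = np.full((h, w), 0.10) + rng.normal(0, 0.040, (h, w))  # weak + noisy

# Keep some of the original Blue, borrow the rest from R & G. Note the
# overall Blue level shifts too, so you'd rebalance in a later node.
b_mixed = 0.4 * b + 0.3 * r + 0.3 * g

print("blue noise before:", b.std())
print("blue noise after: ", b_mixed.std())
```

The noise drops because the borrowed channels are both cleaner and statistically independent of the Blue noise; the cost is that the Blue channel now carries some R and G detail, which is why it is a rescue move for a broken channel rather than a default.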
  13. Yes, Dan Moran is very good. (He's also of normal height, which I like.) I don't often get into "beauty grading," but most of what I know came from Dan's lessons, and they helped me immensely on a couple of projects. Dan also had some great ideas on how to mask and track complex objects, which boiled down to multiple shapes -- and that was a good lesson to learn. MixingLight is a terrific resource.
  14. Naw, we go with just the 2-up Display at the moment. It's been fine. It's rare I ever go beyond 25 nodes. 18 is more normal for the way I work. There are always exceptions: a few weeks ago, I did a 90-minute documentary in 12 hours in 5 nodes, whole thing, maybe 900 shots. It's more a time/budget exercise, not "how many nodes can I make?" The wide displays give me a headache because I'm breaking my neck on the keyframe window. I also like breaking out the external scopes to a 3rd display.
  15. You could also paste the clip on a dedicated timeline on the edit page and just repeat it over and over and over, with a fixed Composite Level and setting. There are pros and cons to either approach. Note you can also color-correct the grain to intensify it or reduce it.