Marc Wielage

  1. I see your point, but I don't agree. Think of Printer Lights as a way of adjusting LGG Shadows and Gain simultaneously. It has its uses, and you don't necessarily have to do it at the end of the chain. I agree it's more useful with Log images, but I honestly use it on a lot of stuff. In particular, when I have an operator who changes the lens exposure in the middle of the shot (which is pretty much a logarithmic change), I find a keyframed Offset will usually fix it, or at least minimize it so it's not too noticeable. So there's a lot of "it depends" to this. I'm very much an "if it works, it works" kinda guy: there are a lot of different ways to approach things, and often there are no absolutes beyond making the client happy and not winding up with a picture that looks stressed-out or distorted.
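A minimal sketch (not Resolve's actual internals) of why an Offset behaves like a printer light and moves shadows and gain together. It assumes the common Cineon-style convention of roughly 0.025 density units per printer point, i.e. 12 points per stop — that constant is an assumption for illustration, not something from the post above.

```python
# Toy model: adding a constant offset in log space is a uniform exposure
# change in linear light, which is why it shifts shadows and gain together.
# ASSUMPTION: ~0.025 density units per printer point (12 points per stop).

DENSITY_PER_POINT = 0.025  # assumed Cineon-style convention

def apply_printer_points(log_value, points):
    """Add a uniform offset in log space, like Resolve's Offset wheel."""
    return log_value + points * DENSITY_PER_POINT

def log_to_linear(d):
    """Toy log decode (10^d), enough to show the multiplicative effect."""
    return 10.0 ** d

shadow, highlight = 0.1, 0.9            # two code values in log space
for v in (shadow, highlight):
    before = log_to_linear(v)
    after = log_to_linear(apply_printer_points(v, 12))  # +12 points = +1 stop
    print(f"log {v:.1f}: linear gain = {after / before:.3f}")
```

Both values come out with the same linear gain factor: the log offset scales the whole image by one multiplier, which is why it can track a mid-shot iris change with a single keyframed parameter.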
  2. We love the G-Tech "G-Speed" RAIDs, and they've been very reliable for us. The spinning drives get at least 1000MB/s, and the SSD models will go over 2000MB/s.
  3. Technically, it is more time, but we'll just stack up the renders and I'll hit the button on my way out the door at the end of the day. If it takes 6-7 hours to do them all, it doesn't matter: I'll be safely home in my bed. We tend to work in reels, so the next day I'll stitch them all together (assuming it's a feature), and I make sure the "Bypass Re-encode When Possible" option is turned on on the Delivery page. Usually I can flatten the 4-5 reel files into a single file in faster than real time, so it's done by the moment I get back into the office -- maybe 45 minutes for a 2-hour film in 4K ProRes 444.
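A quick sanity check of the "faster than real time" figure above — a 2-hour film flattened in about 45 minutes:

```python
# Flattening speed implied by the numbers in the post above.
film_minutes = 120    # 2-hour feature
render_minutes = 45   # stitched/flattened output time
speed = film_minutes / render_minutes
print(f"~{speed:.1f}x real time")   # ~2.7x
```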
  4. We use some temp SNR noise reduction while we're color-correcting the show, to give us an idea as to what it'll look like with NR. Then we turn off the temp NR for the render to a mezzanine format like ProRes 444 or 444HQ. Then we take the color-corrected (but no NR) mezzanine version and run it through Resolve again, with only Neat Video activated in a single node. We come up with 7 or 8 different settings for different kinds of scenes -- day interiors, day exteriors, night interiors, night exteriors, super-dark scenes, super-bright scenes, problem scenes -- and manually split the clips and add the NR-only correction from a PowerGrade bin. It won't run at speed, but we do before/after comparisons to make sure it looks good. If the shadows need to be adjusted -- they sometimes wind up a little high after NR -- we lower them. Once that's good, we render out what we call a "cc_NR" version (color-corrected, noise-reduced), and that's what gets delivered as the final. We hang on to the "cc" (mezzanine) version in case there are any issues. This method has worked for at least 11-12 projects so far, including one I did last week. It does take more time, so it helps to have a fast computer. I just set up a whole stack of renders on the Mac Pro, kick them off at the end of my shift, and it chugs all night until they're done. I set up the OS to turn off the machine after X number of hours, knowing it'll be done by that time.
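The ordering of that two-pass workflow can be made explicit with a hypothetical sketch. Nothing here is a real Resolve or Neat Video API — the preset names and data shapes are invented purely to show the sequence: grade first, render a clean "cc" mezzanine, then apply one NR preset per scene type and render the "cc_NR" final.

```python
# Hypothetical outline of the "cc" / "cc_NR" two-pass workflow described
# above. Preset names (NR-A..NR-G) and structures are illustrative only.

SCENE_PRESETS = {
    "day interior": "NR-A", "day exterior": "NR-B",
    "night interior": "NR-C", "night exterior": "NR-D",
    "super-dark": "NR-E", "super-bright": "NR-F",
    "problem": "NR-G",
}

def grade_and_deliver(scenes):
    # Pass 1: color-correct (temp NR is only for viewing), then render the
    # mezzanine "cc" version with NR switched OFF.
    cc = [{"scene": s, "graded": True, "nr": None} for s in scenes]
    # Pass 2: run the mezzanine back through with ONLY an NR node, picking
    # a preset per scene type, then render the final "cc_NR".
    cc_nr = [dict(clip, nr=SCENE_PRESETS[clip["scene"]]) for clip in cc]
    return cc, cc_nr   # keep "cc" as a safety; deliver "cc_NR"

cc, cc_nr = grade_and_deliver(["day interior", "night exterior"])
print(cc_nr)
```

The design point is that NR decisions are deferred to a dedicated pass over an already-finished grade, so the (slow) denoise render never blocks the interactive grading session.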
  5. Some books you can check out in order to learn more about color science:
     VES, "Cinematic Color" (free 52-page white paper)
     "Color & Mastering for Digital Cinema" by Glenn Kennel
     "Digital Cinematography: Fundamentals, Tools, Techniques, and Workflows" by David Stump
     "Colour Reproduction in Electronic Imaging Systems" by Michael Tooms
     "Digital Video and HD: Algorithms and Interfaces" by Charles Poynton
     "The Reproduction of Colour" by Dr. R.W.G. Hunt
     "Color Mania: The Material of Color in Photography and Film" by Barbara Flückiger
     "Colour Cinematography" by Adrian Cornwell-Clyne (Chapman & Hall)
     The best book on film-lab color correction I've ever read is "Film Technology in Post Production" by Dominic Case. It explains how film was color graded in the laboratory prior to television and digital. Some of the basic principles still apply today.
  6. Me personally, I try to only do NR after the initial correction, about halfway through the node tree. You can cache there and then still make subsequent trims, keys, windows, OFX plug-ins, and so on, and it won't slow you down. Having said that, when I encounter significant noise, we usually turn to Neat Video and render the whole show twice: once without any NR, and then a second pass with NR added on a scene-to-scene basis.
  7. Working on it! Note there are pros and cons with this approach, and there are occasions where you don't want a Fixed Node Tree. But for longform features & TV, Fixed Node Trees are ideal for working quickly and keeping a consistent look across many scenes.
  8. There are a lot of potential variables. Bear in mind that the specific lighting and exposure on set, plus the art direction (particularly the set design and color), will limit the potential of any grading you do. For example, the teal & orange look isn't necessarily possible with all kinds of images: color contrast happens when the background and foreground lend themselves to this kind of "pushing and pulling" the image. Downloading an H.264 from the net is problematic, because those files will (as you discovered) "fall apart" when you try to push them too hard. Try to do the grade with 12-bit or 16-bit Raw material, and I'd bet you can get there. Arri Alexa, Blackmagic, Red, and Sony all have free Raw files you can download and experiment with.
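Some quick arithmetic shows why 8-bit H.264 "falls apart" under a hard push while 12-/16-bit raw survives. The gain value here is an arbitrary stand-in for an aggressive grade, not anything from the post:

```python
# After multiplying shadows by `gain`, adjacent source code values land
# `gain` output codes apart; that gap, as a % of full scale, is the width
# of the resulting bands. The gain of 4x is an assumed, illustrative push.

def band_size_percent(bit_depth, gain=4.0):
    """Gap between surviving code values after the push, as % of full scale."""
    return 100.0 * gain / (2 ** bit_depth)

for bits in (8, 10, 12):
    print(f"{bits}-bit source: bands of {band_size_percent(bits):.2f}% of full scale")
# 8-bit:  1.56%
# 10-bit: 0.39%
# 12-bit: 0.10%
```

At 8 bits the same push leaves gaps sixteen times wider than at 12 bits, which is the banding you see when a compressed download is pushed hard.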
  9. Very scary. Many, many years ago (about 1980), I threaded up a 16mm neg on a Rank-Cintel MkIIIB telecine for a music video at Modern Videofilm here in Hollywood. I went back to the control room, sat with the director, and color-corrected the image as we laid it down to tape (which was the standard-def workflow back then). After the first reel was safely on tape, the director said, "hey, could we do that again with a different look?" I said, sure, and went to rewind the film at the machine. Much to my shock, I saw shards of emulsion and plastic on the base of the scanner. In my haste and nervousness, I had wound the film around the 35mm guides... which were supposed to be bypassed for 16mm, due to its narrower width. The 35mm guides gouged into the film and put a half dozen deep scratches in the frame, all the way through! Just as I was staring at the film and wondering how I was going to explain it, the director walked in, saw what was happening and said, "oh, I guess there's some equipment problems?" I gulped and nodded and said, "totally my fault," and profusely apologized. He shrugged and said, "eh, what we've already recorded looks fine. Let's just move on to the next roll. But don't scratch the next one." The director was totally unfazed, was happy with what we did, we continued with the session, and it all ended up well. Needless to say, I was much, much, much more careful about loading 16mm on scanners after that.
  10. One thing you can do: highlight the clips, then right-click and select "Bypass Color Management." Then, using a Color Space Transform OFX plug-in, you can drop it into the first node and tell it what kind of camera made the ProRes file. I can't guarantee this will work every time, since it's possible that some kind of change or adjustment was baked in when the ProRes file was made, but it can work. Experiment with different settings and see if this gets you to a closer starting point. I should acknowledge Joey D'Anna of MixingLight for coming up with this "Roll Your Own" color management idea, which I think is very clever. I hadn't used CST nodes until his suggestion, but they've proven to be very useful.
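To make concrete what "telling the CST what camera made the file" actually does, here is a sketch of one stage of that transform: decoding ARRI LogC3 (EI 800) code values back to scene-linear light. The constants are from ARRI's published LogC3 formula (to the best of my knowledge); a real CST also converts gamut and applies an output transform, both omitted here.

```python
# First stage of a CST for ARRI footage: invert the LogC3 (EI 800) curve.
# Constants believed to match ARRI's published formula; verify against
# ARRI's documentation before relying on them.

CUT, A, B = 0.010591, 5.555556, 0.052272
C, D, E, F = 0.247190, 0.385537, 5.367655, 0.092809

def logc3_to_linear(t):
    """Decode one LogC3 (EI 800) code value (0-1) to scene-linear."""
    if t > E * CUT + F:                       # logarithmic segment
        return (10.0 ** ((t - D) / C) - B) / A
    return (t - F) / E                        # linear toe segment

# 18% gray encodes to ~0.391 in LogC3, so decoding should recover ~0.18.
print(logc3_to_linear(0.391))
```

The point of the "Roll Your Own" approach is that this math lives in a node you control, so a ProRes file with no embedded metadata can still be brought to a sensible starting point by picking the right input curve.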
  11. We have a tutorial on Fixed Node Trees now being worked on (literally as we speak), and I hope to have it done within the month. No promises, but it will give you about a dozen examples of different node trees you can use, explain the thought process behind each one, and also explain how to build a Fixed Node Tree from scratch. One key is to understand the Image Processing Order of Operations. This is such an important topic, it's given its own chapter in the Resolve manual, Chapter 141, starting on p. 2806. The point is to understand how one node will affect the nodes coming after it, and how it's possible to damage the image if you (say) deliberately crush it in one node, then try to bring everything back in a subsequent node. This is particularly crucial if you use LUTs, which can have a destructive effect on the image. I'm also a stickler for labeling every node, so that you can understand the signal flow and what each node is doing. This can be very important if you come back to a project six months or a year later and need to understand how and why certain shots were changed. Splitting up different kinds of functions can also help you demonstrate to the client how a shot was changed and (hopefully) improved, a node at a time.
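The crush-then-restore point can be shown with a toy example. The 0.2 offset is an arbitrary illustrative value, and the clamp stands in for any node (or LUT) that clips the signal:

```python
# Toy order-of-operations illustration: crushing in one node and lifting in
# the next does NOT round-trip, because the crush clips shadow detail away.

def crush(v):
    """Node 1: pull everything down by 0.2, clipping at black."""
    return max(v - 0.2, 0.0)

def lift(v):
    """Node 2: try to bring it back up by the same amount."""
    return v + 0.2

for v in (0.01, 0.05, 0.5):
    print(f"{v} -> {lift(crush(v)):.2f}")
# 0.01 -> 0.20   (all shadow detail below the crush collapses to one value)
# 0.05 -> 0.20
# 0.5  -> 0.50   (midtones round-trip fine)
```

Once the clip has happened, no later node can recover the lost values — which is exactly why node order, and where a LUT sits in the chain, matters.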
  12. Chapter 8 of the v17 manual, "Data Levels, Color Management, and ACES" (starting on p. 181), does at least mention OOTF. There's also a course here on Lowepost that covers Resolve Color Management in detail.
  13. My suggestion is to try to keep things simple. I don't think working in a wide-gamut world will help you unless you're planning some serious HDR deliveries. Having said that, Alexis Van Hurkman has an excellent 3-hour tutorial on Resolve Color Management, and it specifically covers wide gamut as well as the advantages and disadvantages of ACES vs. RCM. One thing I think is helpful is that he shows how to take a project completely corrected in SDR and then do a trim pass for HDR. I think this will be useful for certain situations where, long after the fact, the client decides to spend the extra money to have the colorist provide an additional HDR version. As far as matching different cameras goes, that's something you can already get with Color Space Transform nodes, which actually work independently of color management or even LUTs, for that matter. As long as you have a calibrated display and a color-managed output, you can work just fine in a display-managed environment and get the whole project done. Knowing your scopes and the peculiarities of specific cameras will help a lot.
  14. Beautifully said, Bruno. I always say, "the beauty of Resolve is that there are often at least 4 or 5 different ways to get good results. The key is to use the one with which you're comfortable, and the one that works the fastest (for you)." I never tell another colorist how to work, because if they get good results, if the client is happy, and if the check clears... then there is no problem. It is possible to NOT work in ACES but still deliver an ACES-compatible archival file at the end of the process if the client wants one. That's covered in the manual.
  15. "Most" movies is in the eye of the beholder. There are lots and lots of different ways to work nowadays. I think even Netflix will allow facilities to use other kinds of color management as long as you deliver ACES in the end. And you can deliver ACES-compatible files with Red Color Management 2. I do a lot of stuff manually, but much of what I do is just for Rec709. We are using RCM2, so I have the ability to change the pipeline if we wind up in HDR/Dolby Vision, but that still requires a trim pass. We've proven it works, so I'm confident it's a good way for us to handle sessions. I often say, "the power of Resolve is that it gives you multiple ways to do the same thing." You have to decide which method is best for you. As long as the final color is right and the files are acceptable, everything is fine.