Marc Wielage

Everything posted by Marc Wielage

  1. I'm a year late at doing the tutorial for Lowepost, but here's a big node tree we came up with for a project recently. Normally, about 90% of these are bypassed, and we only turn a node on when we need it, so be aware this illustration is not a practical shot of how it works in real life. But it'll give you some ideas: Note for beginners: I absolutely do not believe that you need a crazy/ridiculous/large number of nodes to do a project. In truth, 4-5 nodes do all the heavy lifting -- the rest is just icing on the cake. Sometimes, there's just no time (like if you have to do 500 shots in 8-10 hours), so you have to get by with maybe 6-7 nodes, tops, and just fly through as fast as you can. I average maybe 18-22 nodes on each shot per project, and on features it's the exact same node tree on every shot. That way, for one sequence -- say, 20 shots in a kitchen -- we do a global and copy everything over, then go in and make little trims so that it all matches. All nodes are labeled so if somebody else has to take over the project or do work later on, they'll understand the signal path and the decisions made. I hope to have more to say about this for Stig in the future, so hang in there.
  2. Note you can assign a keyboard shortcut to Reference Wipe mode -> Gallery still, or Reference Wipe mode -> Offline video. Very easy to do. There are also dedicated buttons for this (for sure on the Advanced Panels).
  3. My apologies, it is not. I got swamped with work and also got hit with an ongoing illness, so it's a question of when I can get time.
  4. I've used the Baselight Blackboard before, and it's a great panel, but it only works with Baselight.
  5. How are you monitoring audio? Are you coming out of the UltraStudio? That should be precisely in sync with the image, and it should be fairly fast. I don't expect instantaneous anything in post, but I do have to have precise sync, for editing and for timing purposes.
  6. One thing you can try: use a Layer Mixer node to retain some of the fleshtones when you've introduced a lot of cyan (teal) in the picture. In truth, if the characters in a scene were in a teal-colored room, they wouldn't have perfectly natural skintone anyway, so the trick to me is to dial back the fleshtones so they aren't 100% natural, but maybe 50% there. There are a zillion tutorials out there explaining how to use Layer Mixers to qualify colors in a specific area (like fleshtones). Be aware that some film sets don't allow opportunities for an orange/teal (split-tone) look, particularly when there's a background color too close to skintone. In other words, the decision for this kind of look has to happen prior to production so that sets and locations are compatible with the final color contrast the filmmakers want. The Color Warper might also be a tool to help eliminate magenta and green, giving you a better Orange/Teal split. Be aware that it's possible to lean the shadows towards blue and highlights towards warmth without a lot of effort, provided the material is well-shot. The orange/teal look has existed for more than 20 years, long before Resolve and long before layer mixers were even available.
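A toy numeric sketch of that Layer Mixer idea: blend some of the original fleshtone back over the teal-graded image wherever a skin qualifier matched. The mask and RGB values here are made up for illustration; this is just the arithmetic, not Resolve's internals.

```python
# Toy per-pixel sketch of the Layer Mixer blend: keep part of the
# original fleshtone where a (hypothetical) skin qualifier matched,
# use the teal-graded version everywhere else.
def mix(orig, graded, skin_mask, keep=0.5):
    # keep=0.5 ~ "maybe 50% there" rather than fully natural skin
    return [o * skin_mask * keep + g * (1 - skin_mask * keep)
            for o, g in zip(orig, graded)]

orig = [0.8, 0.5, 0.4]   # warm, skintone-ish RGB (made up)
teal = [0.3, 0.6, 0.7]   # the same pixel after a teal grade (made up)

keyed = mix(orig, teal, skin_mask=1.0)   # halfway back toward natural skin
no_key = mix(orig, teal, skin_mask=0.0)  # pure teal grade, no skin retained
```

Where the mask is zero the output is exactly the graded image; where it is one, skin lands halfway between the grade and the original, which is the "not 100% natural" compromise described above.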
  7. Members are warned that this is a BETA release. Here are the steps I recommend for people upgrading Resolve (and this has held for several years): 1) launch Resolve 17 and backup Project Databases using the backup utility 2) export Keyboard Shortcuts 3) export all personal PowerGrades as DPX stills + DRX grades to specific folders (and LUTs if you must) 4) jot down Project Config and User settings (in case those don't make it over) 5) jot down Data Burn-In settings 6) jot down Custom Export settings on Deliver page 7) jot down custom Power Window presets 8) backup 3rd-party/custom LUTs 9) export all custom PowerGrades (with labeled stills) to a folder 10) de-install any 3rd-party OFX plug-ins (BorisFX, Sapphire, Dehancer, Beauty Box, Filmlook, Neat Video, etc.), and have the serial numbers ready when you install the new Resolve. 11) important: backup all current in-progress sessions as DRP files "just in case." I think it never hurts to do a complete backup of your boot drive so you could theoretically do a full restore and go back to Resolve 17 if need be. It is possible to run both versions at the same time if you have separate boot drives, Resolve 17 on one and Resolve 18 on the other, and use separate Project Databases, but the setup is tricky (and critical). You cannot run them at the same time on the same partition, because the new database will overwrite the old version. When you install Resolve 18, be aware that you may need to also update the Desktop Video driver, and you may need to update your GPU video drivers. As with any modern software, there's a chance your current hardware may not be enough to run 18. Check the documentation on Blackmagic's support website and make sure your CPU, available RAM, GPU, and drive speeds all meet their suggested configuration specs.
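Several of the export steps above can be partly scripted once you've done the exports from Resolve itself. A minimal Python sketch, assuming you've already exported shortcuts, PowerGrades, LUTs, and DRPs to folders; every path and name here is hypothetical, so point them at your actual export locations.

```python
import shutil
from pathlib import Path

def backup_exports(exports, backup_root):
    """Copy each exported folder (shortcuts, PowerGrades, LUTs, DRPs)
    into one backup root, skipping anything that doesn't exist.
    Returns the names of the folders actually copied."""
    backup_root = Path(backup_root)
    backup_root.mkdir(parents=True, exist_ok=True)
    copied = []
    for name, src in exports.items():
        src = Path(src)
        if src.exists():
            # dirs_exist_ok lets you re-run the backup safely
            shutil.copytree(src, backup_root / name, dirs_exist_ok=True)
            copied.append(name)
    return copied

# Hypothetical layout -- adjust for your system:
# backup_exports({
#     "keyboard_shortcuts": "~/Desktop/resolve17_shortcuts",
#     "powergrades":        "~/Desktop/resolve17_powergrades",
#     "luts":               "~/Desktop/resolve17_luts",
#     "drp_projects":       "~/Desktop/resolve17_drp",
# }, "~/Backups/resolve17_upgrade")
```

This doesn't replace the database backup from Resolve's own utility in step 1; it just gathers the loose exports into one place you can restore from.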
  8. Blackmagic announced a new "Public Beta" release of Resolve 18 today: Fremont, CA, USA - Monday, April 18, 2022 - Blackmagic Design today announced DaVinci Resolve 18, a major new cloud collaboration update which allows multiple editors, colorists, VFX artists and audio engineers to work simultaneously on the same project, on the same timeline, anywhere in the world. DaVinci Resolve 18 supports the Blackmagic Cloud for hosting and sharing projects, as well as a new DaVinci proxy workflow. This update also includes new Resolve FX AI tools powered by the DaVinci Neural Engine, as well as time saving tools for editors, Fairlight legacy fixed bus to FlexBus conversion, GPU accelerated paint in Fusion, and more! DaVinci Resolve 18 public beta is available for download now from the Blackmagic Design web site. DaVinci Resolve 18 is a major release featuring cloud based workflows for a new way to collaborate remotely. Customers can host project libraries using Blackmagic Cloud and collaborate on the same timeline, in real time, with multiple users globally. The new Blackmagic Proxy generator automatically creates proxies linked to camera originals, for a faster editing workflow. There are new Resolve FX such as ultra beauty and 3D depth map, improved subtitling for editors, GPU accelerated Fusion paint and real time title template playback, Fairlight fixed to FlexBus conversion and more. DaVinci Resolve 18 supports Blackmagic Cloud, so customers can host their project libraries on the DaVinci Resolve Project Server in the cloud. Share projects and work collaboratively with editors, colorists, VFX artists and audio engineers on the same project at the same time, anywhere in the world. The new Blackmagic Proxy Generator App automatically creates and manages proxies from camera originals. Create a watch folder and new media is automatically converted into H.264, H.265 or Apple ProRes proxies to accelerate editing workflows. 
Customers can extract proxies into a separate folder for offline work. Customers can switch between camera original footage and proxies in a single click. With Blackmagic Proxy Generated proxies, DaVinci Resolve knows where in the file tree to find them, instantly linking to the camera originals in the media pool. Edit with proxies, then relink to camera originals to grade. DaVinci Resolve 18 adds intelligent media location management, so that when customers are collaborating, they can quickly link media to their unique file paths. Now customers don’t need to manually relink or search for assets when they work remotely. The collaboration update also provides major performance enhancements if customers are using a secure private network. Get immediate updates of editorial and color changes when collaborating on a remotely hosted project library. Now creative decisions can be made in real time based on the latest changes. Customers can live stream their DaVinci Resolve Studio viewer and display it on a remote computer monitor, or a reference grading monitor, via DeckLink, anywhere in the world. The low latency, high quality 12-bit image is ideal for remote editing or color grading, giving customers instant feedback on changes. DaVinci Resolve 18 features incredible new tools for colorists. Located in the magic mask palette, the new object mask is able to recognize and track the movement of thousands of unique objects. The DaVinci Neural Engine intuitively isolates animals, vehicles, people and food, plus countless other elements for advanced secondary grading and effects application. The new depth map effect lets customers instantly generate a 3D depth matte of a scene to quickly grade the foreground separately from the background, and vice versa. Customers can bring attention to action in the foreground, help interview subjects stand out, or add atmosphere in the background of a scene. 
Apply graphics to surfaces that warp or change perspective in dramatic ways, like t-shirts, flags, or even the side of a face with the surface tracker. Its customizable mesh follows the motion of a textured surface. Apply graphics, composite tattoos, or even cover up logos with this powerful tracking tool. Ultra beauty gives customers advanced control over a subject when performing corrective beauty work. Developed with professional colorists, the ultra beauty tool helps to address general imperfections by smoothing skin and then recovering detail to produce natural and complementary results for the subject. Subtitle support has been expanded to include TTML and XML timed texts and embedded MXF/IMF subtitles. View and import subtitles from media storage, create regions to support multiple simultaneous captions per track, and set individual presets and text positions when indicating different speakers. Transitions in the effects library’s shape, iris and wipe categories now have a checkbox, allowing customers to easily reverse the direction of the transition. This gives customers additional flexibility when using these types of transitions, as well as adding to their creative possibilities. A new 5x5 option in the multicam viewer now allows customers to view up to 25 different angles in a single multicam clip at the same time. Ideal for large multicam projects, this makes viewing, cutting and switching between more angles much easier, rather than moving between pages to see different angles. In DaVinci Resolve 18, GPU acceleration allows paint brush strokes to be generated and displayed in real time, for a more intuitive approach when performing cover up work or graphic design. Instant visual feedback allows customers to assess their work and make corrections in any stroke style or shape. Text, text+ and shape templates have improved speed and playback performance in DaVinci Resolve 18. 
New memory management and data handling means that Fusion templates are up to 200% faster. Customers can see accelerated results in the viewer and put together motion graphic compositions faster than ever. FlexBus is Fairlight’s flexible audio busing and routing system designed for managing high track counts, extensive plug-in processing, perfect synchronization and multiple project deliverables. Now customers can effortlessly convert legacy fixed bus Fairlight projects to FlexBus with a single click. The Dolby Atmos deliverable toolset has been expanded to support rendering of a binaural output from a complex Dolby Atmos mix. Now a Dolby 7.1.4 mix can be rendered to playback in a pair of headphones while maintaining the immersive sound experience from just two audio channels. New options in the decompose menu enhance collaboration allowing editors to compile their work to a single timeline. Nested timelines can now be decomposed with all track data including FX and automation. Assignments will connect using new busses, existing paths or new tracks can be left unpatched. DaVinci Resolve supports the latest industry standard audio formats natively, including immersive audio formats like Dolby Atmos, Auro 3D, MPEG-H, NHK 22.2, and SMPTE. The space view scope displays a real time view of every object and its relationship to the room and other objects in 3D space. "This is a major release that totally revolutionizes remote project collaboration using cloud based workflows, " said Grant Petty, Blackmagic Design CEO. "With Blackmagic Cloud, customers can collaborate on the same timeline anywhere in the world. Imagine editing in Tokyo, while a colorist is grading in LA on the same timeline, at exactly the same time! The new DaVinci proxy workflow makes working with proxy files or camera originals seamless, relinking in just one click. 
I think it will be exciting to try out the new cloud collaboration workflow and I can’t wait to see how our customers collaborate with each other around the world." DaVinci Resolve 18 Features Support for Blackmagic Cloud to host and manage cloud based project libraries. New Blackmagic Proxy Generator App automatically creates and manages proxies. Ability to choose between working with proxies or camera original files. Support for intelligent path mapping to relink files automatically. Improved project library performance for private server. New object mask recognizes and tracks movement of thousands of objects automatically. New depth map generates 3D depth matte of a scene in DaVinci Resolve Studio. New surface tracker for tracking warped surfaces in DaVinci Resolve Studio. Refined ultra beauty tool in Resolve FX beauty for advanced corrective work. Expanded subtitle support for TTML and XML timed texts, and embedded MXF/IMF. Support for reversing shape, iris and wipe transitions in the edit page. New 5x5 multicam enables viewing of up to 25 simultaneous different angles. Faster GPU accelerated paint tool with smoother strokes. Support for live previews when using the Text+ color picker. Ability to convert historical fixed bus projects to FlexBus in project settings. Improved Dolby Atmos immersive mixing, including Binaural monitoring. Decomposition of nested timelines with all track data including FX and automation. Innovative space view scope in Fairlight shows position and relationship in 3D space. Availability and Price DaVinci Resolve 18 public beta is available now for download from the Blackmagic Design web site.
  9. Well, now we have a little more information. Consider what I said: the fact that you can export it without the pillarbox, in a true 4x3 aspect ratio. I'm often bewildered why clients do what they do, and sometimes terrible damage is done before the colorist inherits the project. I concede that sometimes we can't stop them from harming their own projects through ignorance. If it were me, I'd ask them to give me the original 1920x1080 material, color correct that, and then give them back whatever aspect ratio and resolution they want. If you have full control of the material without mattes baked in and so on, it'll help avoid doing more damage to the final result.
  10. This entire message made my head hurt. I have a lot of "why did you do that?" kinds of questions, but we'll be here all day. What software created the 4x3 letterbox? Why not just work with 4x3 resolution (1920x1440) and no pillarbox, and then use SuperScale to double it to 3840x2880? What kind of streaming service or broadcast channel will play 4x3 in 2022? As far as I know, if you import 1920x1080 files into a 3840x2160 project, it's exactly centered to the pixel and nothing is cut out, unless your import settings or PTZR values are bad, or if there's an Edit Page sizing issue.
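The arithmetic behind the "exactly centered" claim and the SuperScale route is easy to verify:

```python
# Centering a 1920x1080 clip on a 3840x2160 (UHD) canvas: both offsets
# are exact integers, so nothing is cropped and nothing lands off-pixel.
src_w, src_h = 1920, 1080
dst_w, dst_h = 3840, 2160
x_off = (dst_w - src_w) // 2   # left/right border width
y_off = (dst_h - src_h) // 2   # top/bottom border height

# And the 4x3 route: a 1920x1440 (4:3) frame doubled with SuperScale 2x
# lands exactly on 3840x2880, no pillarbox needed.
superscaled = (1920 * 2, 1440 * 2)
```

If an imported 1080p clip is *not* centered in a UHD timeline, the offsets above are why the problem has to be elsewhere: import settings, PTZR values, or Edit page sizing.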
  11. There are some good tutorials out there on the net for "Twin-Tone Looks" and things like that, and Lowepost has some good Look Development tutorials that I would recommend. So much depends on the nature of the original material, lighting, and exposure, that there's no one way or a "best" way to do it.
  12. Yeah, I have to say there's probably about 4 or 5 different ways to do it. One issue I've had is when suddenly there's a "flesh-colored" surface behind or near one of the actors, and suddenly a teal-look isn't possible for the background. I've had to explain to directors, "this is an art direction problem: you need to give me something to work with around the actor, and not pick a wall that's too close to the actor." We wound up rotoscoping the actor a bit in order to force more separation into the shot, but it was a bandaid fix at best and I'm not proud of the results. But: the director was pleased that I'd at least made the effort, and we moved on. That's a valuable lesson I've told students before: never tell a client NO. At the worst, you could tell them, "that's a challenge because of this specific situation, but let's see how close we can get." Sometimes, they realize after a few minutes that you've gone down a rabbit hole and it could require lots and lots of time, and they'll take what we have and go on to the next scene. The key note is that Orange & Teal is not automatic, and a LUT alone won't do it. I had one memorable scene a few years ago where prisoners in a jail cell were just naturally orange & teal, and it worked great with almost zero effort. I used to get a little antsy about putting color in the shadows, but sometimes it's warranted and it can work to a point.
  13. Well... you could do that with a Post-Clip Grade. Group all those clips together and then apply that look with the Group Grade. Another approach I use on documentaries is to sort all the clips in C-Mode (that is, by camera and timecode), and generally that brings up all the similar clips together. For example, all the clips of "Mr. Smith" would be together in timecode order, all the clips of "Miss Jones" would be together, and so on. Color correct the first shot and apply it to all the same shots that follow. As long as the camera operator didn't change exposure, it can work. If this can't be done because of timecode conflicts or a flattened file, use Metadata to name the person in the clip. Now create a Smart Filter that looks for "Name = Mr. Smith" (or whatever), and all the matching clips will pop up in the current timeline. Ideally, you'd have an assistant willing to go into the metadata to enter all that information, because it is a laborious task, but once it's done it's fantastic to work that way.
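In code terms, the C-Mode sort and the metadata filter amount to a compound sort key and a field match. The clip records below are made up for illustration; Resolve does this internally from the media pool metadata.

```python
# Hypothetical metadata records standing in for documentary clips.
clips = [
    {"camera": "A", "timecode": "01:02:00:00", "person": "Miss Jones"},
    {"camera": "A", "timecode": "01:00:10:00", "person": "Mr. Smith"},
    {"camera": "B", "timecode": "00:59:00:00", "person": "Mr. Smith"},
]

# C-Mode style ordering: by camera, then source timecode, so clips of
# the same setup land next to each other.
c_mode = sorted(clips, key=lambda c: (c["camera"], c["timecode"]))

# Smart Filter equivalent: match every clip tagged with one subject.
smith = [c for c in clips if c["person"] == "Mr. Smith"]
```

Timecode strings in HH:MM:SS:FF sort correctly as plain text, which is why a simple string key is enough here.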
  14. You may be going at this in a much more complex way than the people at CO3 did it. Bear in mind that a lot depends on Art Direction and the original photography, and it's hard to force a look on a shot that doesn't lend itself to that approach. Veteran LA colorist Stefan Sonnenfeld is generally credited for popularizing the so-called Orange/Teal look, and he was doing it in the mid-to-late 1990s for commercials and home video projects on the old daVinci 2K. The 2K didn't lend itself to extreme secondaries or keys the way Resolve does now, and he actually did a lot of it through just careful Primary manipulation. You can argue now that it's possible to use layer mixers to retain more of the original skintone from the initial shot, then change everything else to cyan and use secondaries (or Color Warper) to minimize magenta. Basically, what you're looking for on the Vectorscope is a straight line from a little to the left of Red and then directly down to the Cyan box on the display... and not much else. Look at Transformers and a number of other Sonnenfeld features, and you can at least appreciate the consistency of what he does with the image, both on scopes and on the monitor. @Stefan Ringelschwandtner has reverse-engineered some of these grades by taking the results, then pushing them back to a "Normal" look, which I think is a very imaginative thought process. His Mononodes are actually very interesting, but I'll let him comment further. BTW, lest anybody criticize Stefan for this look, know up front that the ultimate decision is always made by the director and DP. It's our job as colorists to merely give them options to choose from. If they want orange/teal for maximum color contrast, then so be it -- it's their film. It's not a look I always like, but you can't deny the success of CO3 or Sonnenfeld.
  15. Read these: "Grading for Mixed Delivery: Cinema, Home, and Every Screen in Between" by Cullen Kelly "How to Deal with Levels: Full vs. Video" by Dan Swierenga and "A Deeper Look at Consistent Color with QuickTime Tags From Resolve To YouTube & Vimeo on Wide Gamut Apple Monitors" by Dan Swierenga and I think they cover the issues and the solutions very well. Understanding color management is also helpful: "Color Management for Video Editors" My simple method: always export a second or two of SMPTE color bars at the very head of the project, and then check them on scopes in whatever player you're using to see how it looks. If there's a shift (video level or hue or chroma), you'll see it very quickly in bars. Note that the same image will look different on different browsers, different operating systems, different laptops, and different desktop displays. It's even worse when the displays are not calibrated. Sound has similar issues.
  16. It will be part of the Feature Masterclass now being worked on. I can't do features (or long-form television) without a Fixed Node Tree -- it helps us work much faster to meet the client schedules. Typical episodic TV series in LA take a maximum of 2 days (20 hours if you're lucky) for color timing, and I'd say indie films are often around 50-60 hours tops. I try to push it and get 80 hours when I can, but it's very budget/schedule driven. The Fixed Node Trees help the colorist apply looks across scenes with numerous cuts, and allow transporting looks from one part of the film to another. I'll go into detail in the tutorial (assuming I survive the year).
  17. And here's a big article that appeared today featuring me, talking about why classic films like Star Wars, The Sound of Music, The Matrix, and many others wind up looking different on home video than they did in the theater: Art and Compromise: Who Crafts The Look of Remastered Films? Be warned it's a long article, but it is thorough. (They didn't use the picture I sent in, but we'll try to get that updated.) I will have a forthcoming Master Class on Feature Film Color coming up in a couple of months (or sooner) on Lowepost.
  18. Film scans are tough because the standards are loose in the film scanning business. In Resolve, sometimes starting with a CST node decoding from Cineon to Gamma 2.4 can help. If it's too much, I start with a Printer Light (Offset) node to adjust the overall density, then an S-Curve to tame the exposure and add contrast. After that, I have nodes for Balance, Gain ("Gain1"), and a Gain trim ("Gain2"), and usually by that point I can make a decent picture assuming Rec709 delivery. If I see a bias towards pinkish-reds or excessive yellow, I'll take care of that with secondaries right after that. Note that camera negative (OCN), internegative (IN, which is a copy used to make prints), interpositive (IP), and prints all have different looks and require different tactics for final color. I worked on more than 46 film features in 2021, and each one took anywhere from 30 hours to 70 hours, about 50 hours each on average. The ones from camera negative are the most difficult, since this is effectively the "raw" image shot by the DP and developed by the lab. It takes time and effort to tame the image and give it a reasonable, dramatic look. In many cases, we have an older SD or HD home video release as a reference, and I'll make a judgement call on whether to match it exactly or just get reasonably close to it. Of course, if the old home video releases looked awful, I'll toss that and just go by my best instincts. My feeling is that digital is EASIER to work on, because you have a bit more range... but the client also depends heavily on you to establish a look, which wasn't necessarily the case with film. I always concentrate on telling the story as best I can, but at the same time respecting and preserving the work of the DP and not "stepping on" it too much. When in doubt, we go for less processing... 
but I will do a little bit of relighting when there's an obvious on-set problem (glare in the background, actor who misses their mark, an important plot detail that didn't get lit). I'll have more to say about this in an upcoming Master Class on Feature Films, which is now being worked on.
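For reference, the Cineon decode that a CST node performs is based on the classic Cineon log-to-linear transfer. The sketch below uses the conventional defaults (95/685 black/white code values, 0.6 negative gamma); Resolve handles all of this internally, so this is purely illustrative.

```python
def cineon_to_linear(code, black=95, white=685, neg_gamma=0.6):
    """Classic Cineon 10-bit printing-density code value -> scene linear.
    White point (685) maps to 1.0, black point (95) maps to 0.0."""
    # Offset so the black point lands exactly at zero
    black_term = 10 ** ((black - white) * 0.002 / neg_gamma)
    gain = 1.0 / (1.0 - black_term)
    return gain * (10 ** ((code - white) * 0.002 / neg_gamma) - black_term)
```

Code values above 685 decode to linear values above 1.0, which is why a straight Cineon-to-Gamma-2.4 conversion can look "too much" on a dense scan and a manual Offset/S-Curve approach sometimes works better.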
  19. I see your point, but I don't agree. Think of Printer Lights as a way of adjusting LGG Shadows and Gain simultaneously. It has its uses, and you don't have to necessarily do it at the end of the chain. I agree it's more useful with Log images, but I honestly use it on a lot of stuff. In particular, when I have an operator who changes the lens exposure in the middle of the shot (which is pretty much a logarithmic change), I find a keyframed Offset usually will fix it, or at least minimize it so it's not too noticeable. So there's a lot of "it depends" to this. I'm very much a "if it works, it works" kinda guy: there's a lot of different ways to approach things, and often there are no absolutes beyond making the client happy and not winding up with a picture that looks stressed-out or distorted.
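A toy sketch of why a keyframed Offset tracks a mid-shot exposure change: adding a constant in log space is exactly a gain (exposure) change in linear light. The base-2 log here is a stand-in for any log encoding, not Resolve's actual math.

```python
import math

def log_encode(lin, base=2.0):
    # Stand-in log encoding: one code unit = one stop
    return math.log(lin, base)

def log_decode(code, base=2.0):
    return base ** code

lin = 0.18                    # mid-grey, linear
code = log_encode(lin)        # log-encoded value
shifted = code + 1.0          # +1.0 of "Offset" in log space
relit = log_decode(shifted)   # back to linear: exactly 2x the exposure
```

So keyframing the Offset over the frames where the operator rides the iris applies a smoothly changing exposure correction, which is why it usually fixes (or at least hides) the shift.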
  20. We love the G-Tech "G-Speed" RAIDs, and they've been very reliable for us. The spinning drives get at least 1000MB/s, and the SSD models will go over 2000MB/s.
  21. Technically, it is more time, but we'll just stack up the renders and I'll hit the button on my way out the door at the end of the day. If it takes 6-7 hours to do them all, it doesn't matter: I'll be safely home in my bed. We tend to work in reels, so the next day I'll stitch them all together (assuming it's a feature), but I make sure the "Bypass Re-encode When Possible" option is turned on on the Deliver page. Usually I can get a flattened single file out of the 4-5 files in faster-than-real-time by the time I get back into the office, maybe 45 minutes for a 2-hour film in 4K ProRes 444.
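The speed claim above is simple arithmetic:

```python
# A 120-minute film flattened in ~45 minutes is roughly 2.7x faster
# than real time (thanks to "Bypass Re-encode When Possible" skipping
# a full re-encode of the already-rendered reels).
runtime_min = 120
render_min = 45
speed_factor = runtime_min / render_min   # ~2.67x real time
```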
  22. We use some temp SNR noise reduction while we're color-correcting the show, to give us an idea as to what it'll look like with NR. Then we turn off the temp NR for the render to a mezzanine format like ProRes 444 or 444HQ. Then we take the color-corrected (but no NR) mezzanine version and run it through Resolve again, with only Neat Video activated in a single node. We come up with 7 or 8 different settings for different kinds of scenes -- day interiors, day exteriors, night interiors, night exteriors, super-dark scenes, super-bright scenes, problem scenes -- and manually split the clips and add the NR-only correction from a PowerGrade bin. It won't run at speed, but we do before/after comparisons to make sure it looks good. If the shadows need to be adjusted -- they sometimes wind up a little high after NR -- we lower them. Once that's good, we render out what we call a "cc_NR" version (color-corrected noise-reduced), and that's what gets delivered as the final. We hang on to the "cc" (mezzanine) version in case there's any issues. This method has worked for at least 11-12 projects so far, including one I did last week. It does take more time, so it helps to have a fast computer. I just set up a whole stack of renders on the Mac Pro, kick them off at the end of my shift, and it chugs all night until they're done. I set up the OS to turn off the machine after X number of hours, knowing it'll be done by that time.
  23. Some books you can check out in order to learn more about color science: VES: "Cinematic Color" (free 52-page white paper) "Color & Mastering for Digital Cinema" by Glenn Kennel "Digital Cinematography: Fundamentals, Tools, Techniques, and Workflows" by David Stump "Color Reproduction in Electronic Imaging Systems" by Michael Tooms "Digital Video and HD: Algorithms and Interfaces" by Charles Poynton "The Reproduction of Colour" by Dr. R.W.G. Hunt "Color Mania: The Material of Color in Photography and Film" by Barbara Flückiger "Colour Cinematography" by Adrian Cornwell-Clyne (Chapman & Hall) The best book on Film Lab color-correction I've ever read is this one: "Film Technology in Post Production" by Dominic Case. It explains how film was color graded in the laboratory prior to television and digital. Some of the basic principles still apply today.
  24. Me personally, I try to only do NR after the initial correction, about halfway through the node tree. You can cache there and then still make subsequent trims, keys, windows, OFX plug-ins, and so on, and it won't slow you down. Having said that, when I encounter significant noise, we usually turn to Neat Video and render the whole show twice: once without any NR, and then a second pass with NR added on a scene-to-scene basis.
  25. Working on it! Note there are pros and cons with this approach, and there are occasions where you don't want a Fixed Node Tree. But for longform features & TV, they're ideal for working quickly and keeping a consistent look across many scenes.