Julien Souloumiac

Premium
  • Posts: 19

Course Comments posted by Julien Souloumiac

  1. On 10/23/2019 at 10:16 PM, Lee Lanier said:

    Yes, you can track with the Tracker and then apply the tracking data to a Paint node or a Transform node connected to the Paint node. Sometimes, I just find manual tracking faster, so it's good to have both options.

    Hi Lee, 

     

    First of all, I hope you're doing well in such a complicated period.

     

    I'm digging deeper into both Nuke and Fusion and am now achieving some interesting results thanks to your lessons.

    But I still run into many issues when trying to connect tracking data, as you mentioned before.

    Right now I was trying to connect tracking data to a Paint node used to clone out a blemish, and I ran into the usual buggy shift when using the tracker's Unsteady Position...

    I managed to find a workaround by modifying the Center with an XY Path and then, in the modifier, connecting Center to the tracker's Unsteady Position...

    Although quite convoluted, this workflow at least seems logical, and it works on my example...

    Do you think this can be a relevant option to get rid of this buggy shift when connecting tracking data?

     

    Thanks a lot for your answer, 

     

    Best

     

    Julien

     

    Ps: I was just checking: this seems to work when tracking a Paint node, but when using an ellipse or any other shape, your solution of an intermediate Transform node seems more efficient (rough console sketch below)... An endless issue, maybe...
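
    PPs: For clarity, here is roughly how I would wire the intermediate-Transform variant from the script console (Python). I'm typing this from memory, so the tool names, the published output name ("UnsteadyPosition") and the assign-to-connect idiom are all assumptions on my part rather than something I've checked against the manual:

        # Rough console sketch only -- the names below are assumptions.
        comp = fusion.GetCurrentComp()          # "fusion" is predefined in the console

        tracker = comp.FindTool("Tracker1")
        xform   = comp.FindTool("Transform1")   # the Transform feeding the Paint node

        # In Fusion scripting, assigning one tool's output to another tool's
        # input connects them; "UnsteadyPosition" is my guess at the script
        # name of the tracker's Unsteady Position output.
        xform.Center = tracker.UnsteadyPosition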

     

  2. hi Lee, 

    here I come again with a few more questions, as I'm now working on the tracking tools.

    We already saw that a Transform tool is needed to connect a tracker's Unsteady Position to a mask.
    But I recently ran into a situation where a BSpline connected to a Transform tool (with its Center connected accordingly) didn't work, while connecting the BSpline's Center directly to the tracker path worked fine.
    Is that normal behavior for the BSpline tool as opposed to other masks, or is it a bug?

    Moreover, I just read this from Bryan Ray, who suggests a different approach for tracking a mask:

    "Plug the output of the Ellipse into the Foreground of the Tracker. Leave the Background connected to the footage. Then select the Tracker node and switch to the Operation tab. Set Operation to Match Move; the Merge panel will appear. Set the Merge to FG only. In this mode, the position data from the single tracking point will be applied to the pixels in the Foreground input, and only the Foreground will be sent to the output.

    Put the Tracker in the Viewer. You will see the white circle move about just like the barrel of the gun.

    The next step is to make a Color Corrector (CC) node, attach it to the footage, and put the output of the Tracker into the Mask (blue) input on the CC. Any tool in Fusion can be masked in this fashion, restricting its operation only to the parts of the image where the Mask is white."

    http://www.bryanray.name/wordpress/blackmagic-fusion-tracking/

    Do you have any comments/thoughts about the benefits of these different methods? And could you clarify the difference between the various connections that can be used (Steady Position, Unsteady Position, Steady Axis, etc.), since I assume that might be part of the answer? (I've added a quick sketch of my current mental model of these outputs in a PS below.)

    Thanks a lot again for your help

    Julien
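
    PS: For reference, here is my current mental model of those Tracker outputs, written as plain 2-D maths in Python. This is only how I picture it, not something taken from the manual, and the function names and the exact convention (positions vs. offsets) are my own guesses, so please correct me if I have it backwards:

        # My guess at how the Tracker's published outputs relate to each other.
        # Positions are in Fusion-style normalized 0-1 coordinates; the names
        # below are mine, not actual script identifiers.
        track = {0: (0.42, 0.55), 1: (0.44, 0.56), 2: (0.47, 0.58)}  # tracked point per frame
        ref = track[0]                          # position on the tracker's reference frame

        def motion(frame):
            # how far the feature has moved since the reference frame
            return (track[frame][0] - ref[0], track[frame][1] - ref[1])

        def unsteady_position(frame):
            # follows the feature: what I'd connect a mask/Transform Center to
            # for a match move
            return track[frame]

        def steady_position(frame):
            # cancels the motion so the feature is held still (stabilize);
            # I'm assuming it's the reference position minus the motion
            dx, dy = motion(frame)
            return (ref[0] - dx, ref[1] - dy)

        # Steady Axis, as I understand it, is the pivot used when stabilizing
        # rotation and scale -- again, just my assumption.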

  3. Hi Lee and thanks a lot for your answers. 

    Actually, that's exactly my point of view: I'm also fairly sure it's possible, but not sure it's the most efficient way to deal with an animated source. We have so many VFX and color tools that this technique might turn out to be irrelevant or inefficient.

    Coming from a photo retouching background, I still find it interesting to investigate. My partner (who doesn't have any video background, whereas I do...) dreams of an animated D&B layer, so... As I can afford some time to run tests on this, I find it an interesting way to improve my knowledge of Fusion and take advantage of all your lessons ;-). Even if, in the end, I conclude that it's not an efficient workflow.

    Nevertheless, I don't want to bother you with irrelevant questions about possibly irrelevant techniques. I really appreciate your help and background on this; it's a great help!

    (And if I may suggest it, this could make for a very interesting lesson: your in-depth experience of a professional workflow involving different specialists, from a production point of view.)

    Meanwhile, I checked the various links you shared about the PSP and Fusion maths, and it's quite difficult to draw a conclusion. Different sources suggest different formulas, so... As far as I understand, the Overlay maths seem the same, but Soft Light is unclear. (I've pasted the two Soft Light formulas I keep finding at the end of this post.)

     

    I will take some time to run more tests and send you a clean comp, 

     

    Once again thanks a lot for your help and great work !

     

    Best

     

    Julien
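
    PS: For reference, these are the two Soft Light formulas I keep finding online. I can't tell which one, if either, Fusion actually implements, so the code below is just my own comparison of the published variants (variable names and test values are mine):

        # The two Soft Light variants I keep finding (base a, blend b, both in 0-1).
        # I don't know which of these, if either, Fusion actually implements.
        import math

        def softlight_w3c(a, b):            # W3C/PDF-style, close to Photoshop
            if b <= 0.5:
                return a - (1 - 2 * b) * a * (1 - a)
            d = ((16 * a - 12) * a + 4) * a if a <= 0.25 else math.sqrt(a)
            return a + (2 * b - 1) * (d - a)

        def softlight_pegtop(a, b):         # "Pegtop" variant, no branch
            return (1 - 2 * b) * a * a + 2 * b * a

        for a in (0.2, 0.5, 0.8):
            # both variants return the base unchanged at b = 0.5 (neutral grey)...
            print(a, softlight_w3c(a, 0.5), softlight_pegtop(a, 0.5))
            # ...but they disagree once the blend moves away from mid-grey
            print(a, softlight_w3c(a, 0.7), softlight_pegtop(a, 0.7))

    Both variants leave the base untouched at mid-grey, which is exactly why the behaviour I see in the Merge node puzzles me.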

    • Like 1
  4. hi Lee, 

     

    thanks a lot for your answer and for these documents. I'm going to have a look at them and send you a clean comp.

    By the way, do you think this could be a relevant approach for dodge and burn work?

     

    Thanks a lot 

    Julien

  5. hi Lee, 


    Hope you're well. Thanks again for your work. 

    Following these various courses, I'm now trying to emulate a dodge and burn technique used in Photoshop for beauty enhancements.

    (I work as a photo retoucher, which is why I'm trying these techniques. But you may consider that it's not a relevant choice for video work; don't hesitate to correct me if you think I'm looking in the wrong direction 😉)

    As you may know, in Photoshop, D&B is easily achieved with a neutral grey layer set to the Soft Light blending mode (that's an important point, as you'll see below), on which you paint in white the zones you want to lighten, and in black the zones you want to darken. The same results can be achieved with two Curves layers, one for dodging and one for burning, but we usually find it faster to use the neutral grey layer (you only have to switch from white to black while working...).

    Let's go back to Fusion now. I can easily create a grey layer, paint it as I wish, and planar transform it to fit my source. I can also load a neutral grey layer created in PSP and planar transform it. I can then merge this on top of my background, which should give the expected result.

    My problem there is with the Soft Light blending mode in my Merge node.
    Fusion doesn't seem to use the same maths as PSP for Soft Light: in PSP, a neutral grey layer set to Soft Light doesn't affect the pixels' luminance at all; only the painted zones are affected.
    In Fusion, the same Soft Light neutral grey layer affects the luminance of the whole image. Overlay blending seems more consistent with PSP's behaviour, but it also has a stronger effect and needs to be dialled way down... (I've put a small sketch of the grey-layer idea, with the Soft Light formula I'm assuming, in a second PS at the end of this post.)

    Do you have any hints about this particular issue, or maybe about the maths used in Fusion's blend modes?

    I also have another question about creating animated mattes in Fusion and using them as external mattes on Resolve's color page. In a few tests I successfully created planar-transformed masks with some very interesting results, except that I encountered a tilt/shift on the first 10-15 frames of my external matte that doesn't occur in Fusion... But I need to run some more tests, because it might be an export settings issue...

    Thanks a lot for your help if you can, 

    best

    Julien

    ps: As mentioned previously, I work with Fusion Studio standalone, which is definitely more stable than Resolve... And I can share my various comp attempts if needed.
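
    pps: To make the Photoshop behaviour I'm describing a bit more concrete, here is a tiny numpy sketch of the grey-layer D&B idea. The Soft Light formula below is the Photoshop/W3C-style one as I understand it; that formula, the 0-1 value range and the array shapes are my assumptions, not something I've confirmed either application uses internally.

        # Sketch of the grey-layer dodge & burn idea (all values in 0-1).
        # The Soft Light formula here is the Photoshop/W3C-style one as I
        # understand it -- an assumption, not a confirmed Fusion formula.
        import numpy as np

        def softlight(base, blend):
            low  = 2 * base * blend + base**2 * (1 - 2 * blend)
            high = 2 * base * (1 - blend) + np.sqrt(base) * (2 * blend - 1)
            return np.where(blend <= 0.5, low, high)

        base = np.full((4, 4), 0.4)     # stand-in for the footage
        db   = np.full((4, 4), 0.5)     # neutral grey D&B layer
        db[1, 1] = 0.8                  # a "dodge" stroke (painted lighter)
        db[2, 2] = 0.2                  # a "burn" stroke (painted darker)

        out = softlight(base, db)
        print(out[0, 0])   # 0.4   -> untouched pixels keep their value (grey is neutral)
        print(out[1, 1])   # > 0.4 -> dodged pixel is brighter
        print(out[2, 2])   # < 0.4 -> burned pixel is darker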

  6. hi Lee, 

    thanks a lot for your answer. I'll check that out and run some tests to find the workflow that best fits my needs.

    I have another question that may interest other people, concerning the best workflow between the Fusion and Color pages.

    To keep it simple: how do you deal with ungraded log (or raw) footage when you're asked to do some beauty work?

    Do you first grade it, then export a hi-res graded version, and finally import that graded version to work on in the Fusion page?
    Or do you use some Fusion color transform tool to "normalize" the log footage, work on it, and then do the color work on the Color page (which might not work with raw footage)?
    Or is there any other trick to simplify these kinds of interactions?
    I know, for example, that with a new MediaOut node you can extract an alpha matte from the Fusion page. Does this work with animated mattes? Could this be a way to handle the color correction on the dedicated page after building a matte with the Fusion tools?

    I hope I'm not bothering you with all these questions, and that they may help other users develop an efficient, high-quality workflow.

    Once again thanks a lot for your help and work, 

    Best 

    Julien

     

    Ps: I also noticed that the standalone version of Fusion is far more stable than the Resolve implementation, and I'm looking for the best way to work with the two applications. Do you have any advice about round-tripping between Resolve and standalone Fusion? Thanks a lot 😉

  7. Hi, 

     

    Thanks a lot for your answer, and for all your work. 

    I understand that this course is almost over, and I have a few questions/requests, if possible:

     

    - Is there any chance you could add an extra lesson on how to handle warping over a "real" background (not a clean plate)? I have a few ideas about a node structure to handle that, but your experience would definitely help.

     

    - In the same way, would it be possible to add a lesson on how to emulate the dodge & burn technique from Photoshop in Fusion and/or the Resolve Color page? I'm fairly sure it can be handled with power windows alone, but I'm also looking for the most effective workflow and suspect Fusion could be used for that...

     

    Once again, thanks for your work,

     

    best

     

    Julien

     

     

  8. Hi, 

     

    Thanks a lot for your answer, that's a very interesting point. 

    I have another question: when I follow your instructions concerning node structure and tracking data, my ellipse's center sometimes flips on the Y position. I've already encountered this several times when connecting tracking data. Is this a known issue/bug, or am I doing something wrong?

     

    I can post a screenshot to make things clearer if needed. And I was working in the free beta version; I haven't tried it in my Studio version...

     

    Thanks again for your work,

     

    best

     

    Julien

     

     

  9. Hi, 

     

    Thanks a lot for this. 

    Can you just explain the use of the Transform node in lesson 2? Can't we simply right-click the Center of the Ellipse node and connect it to Tracker1?

     

    Thanks a lot for your answer and for this great job, 

     

    Julien