Alexander Starbuck

Posts posted by Alexander Starbuck

  1. Hi all,

Recently I got the new MacBook Pro M1 Max and am using it for my photography and video editing work. No color-critical work here, just a huge desire to properly learn image mastering workflows, so I was wondering:

    • how accurate, if at all, are the Apple display presets (BT.709, P3, ...);
    • when mastering for web, should I switch my display from Apple XDR to BT.709, dim the room and master like that?

    If the second answer is yes, meaning I should switch to BT.709, how would I handle the fact that most smartphones and tablets increasingly implement some flavor of the P3 color space? I am struggling to make things look decent on my MacBook, iPhone 12 Pro Max and an older TV from the early 2010s.

    Many thanks for any tips! :)

    (P.S. I am somewhat educated on the proper workflows and pipelines - I religiously follow Cullen Kelly's work - but am unsure of how to approach using the Apple hardware. Possibly an external Blackmagic card with a reference monitor would be the way to go, but for now that is unfortunately out of budget.)
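    A side note on the dim-room part of the question above: the reason dim-surround mastering matters for BT.709 delivery can be shown numerically. The BT.709 camera OETF and the BT.1886 reference display EOTF (gamma ~2.4) deliberately do not cancel out; the end-to-end "system gamma" of roughly 1.2 adds contrast to compensate for a dim viewing surround. A minimal sketch, using the published constants from those two specs (and ignoring the BT.1886 black-level term for simplicity):

    ```python
    # Sketch: BT.709 camera encoding followed by BT.1886 display decoding.
    # The two curves intentionally don't cancel: midtones come out darker
    # than in the scene, i.e. contrast is boosted for a dim surround.

    def bt709_oetf(L: float) -> float:
        """Scene-linear light (0..1) -> BT.709 video signal."""
        if L < 0.018:
            return 4.5 * L
        return 1.099 * L ** 0.45 - 0.099

    def bt1886_eotf(V: float, gamma: float = 2.4) -> float:
        """Video signal -> displayed light (simplified: zero black level)."""
        return V ** gamma

    scene_grey = 0.18                       # 18% mid-grey in the scene
    displayed = bt1886_eotf(bt709_oetf(scene_grey))
    # displayed comes out around 0.117 -- noticeably darker than 0.18,
    # which is the ~1.2 system gamma doing its dim-surround job.
    print(f"signal={bt709_oetf(scene_grey):.3f}, displayed={displayed:.3f}")
    ```

    The takeaway for the dim-room question: that deliberate contrast boost only looks right in the dim surround it was designed for, which is one argument for mastering Rec.709 deliverables in controlled, dim lighting.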

  2. Hi all,

    I started diving a bit deeper into color management only recently and have settled on a manual approach, using pre/post-clip groups and various color space transforms. I avoid automation as much as possible (like project-level color management) to learn as much as I can. Another reason for this approach is that I am using a Fuji camera for which there is no support in ACES.

    Today I tested the camera's low-light abilities by recording a few files internally as H.265-compressed 10-bit 4:2:0 files with the F-Log profile applied. Just for the fun of it, I recorded the same number of files with the exact same settings to my Atomos Ninja V recorder, as DNxHR HQ-encoded 8-bit 4:2:2 files with no profile applied.

    Once I got into DaVinci, I realised that I do not know how to properly set up the color space/gamma curve for my DNxHR files. :) Does the external recorder receive the camera signal and simply record it into a different codec/container than the one the camera saves? That would essentially make the input settings for these files the same as for the Fuji files (Rec.2020 and F-Log gamma). The issue is that, set up this way, the image from the Ninja V looks significantly more contrasty, especially in the lower ISO ranges. I don't know whether this is to be attributed to my choosing HQ (which is 8-bit) or whether something else is at play here.

    Many thanks for all the replies!

    Alex

    P.S. If it matters, I handle my CSTs like so:

    pre-clip group: Node 1: Rec.2020 F-Log -> ARRI LogC, Node 2: ARRI LogC -> ACEScct

    post-clip group: ACEScct -> Rec.709
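    For context on the chain above: the ACEScct working space it lands in is just a log encoding of ACES linear values. A minimal sketch of that transfer function, using the constants published in the Academy's S-2016-001 specification (this is only the tone curve a CST converts into/out of, not a full CST, which also involves a gamut matrix):

    ```python
    import math

    # Sketch of the ACEScct transfer function (constants from the Academy's
    # S-2016-001 spec). ACEScct differs from ACEScc by having a linear "toe"
    # near black, which is what makes lift-style controls behave nicely.

    A = 10.5402377416545
    B = 0.0729055341958355
    CUT_LIN = 0.0078125            # 2**-7, breakpoint on the linear side
    CUT_LOG = A * CUT_LIN + B      # the same breakpoint on the log side

    def lin_to_acescct(x: float) -> float:
        """ACES linear -> ACEScct log value."""
        if x <= CUT_LIN:
            return A * x + B       # linear toe for near-black values
        return (math.log2(x) + 9.72) / 17.52

    def acescct_to_lin(y: float) -> float:
        """ACEScct log value -> ACES linear."""
        if y <= CUT_LOG:
            return (y - B) / A
        return 2.0 ** (y * 17.52 - 9.72)

    # 18% mid-grey lands at ~0.4135 in ACEScct, so grading controls sit at
    # a consistent, predictable point in the signal across shots.
    print(f"mid-grey: {lin_to_acescct(0.18):.4f}")
    ```

    The round trip is exact to floating-point precision, which is why stacking the pre-clip and post-clip CSTs is lossless apart from any gamut clipping at the Rec.709 end.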