Bruno Mansi

Premium+
  • Posts

    330
  • Joined

  • Last visited

1 Follower

About Bruno Mansi

  • Birthday 03/17/1954

Personal Information

  • Gender
    Male

Bruno Mansi's Achievements

Rising Star

Rising Star (9/14)

  • Conversation Starter
  • First Post
  • Collaborator (Rare)
  • Week One Done
  • One Month Later

Recent Badges

Reputation: 347

  1. You might want to take a look at the Look Development & Workflow course (Jason Bowdash) on Lowepost. In lesson 13 (Complex Saturation Workflows) he uses LAB and HSL/HSV workflows to reproduce the sort of colour depth that's being discussed in this thread.
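The course's actual workflow isn't reproduced here, but the general idea of adjusting saturation in a hue-preserving space (rather than scaling RGB channels directly) can be sketched with Python's standard-library colorsys module. This is an illustrative example only, not the method from the course:

```python
import colorsys

def boost_saturation(rgb, factor):
    """Scale saturation in HSV space, leaving hue and value untouched.

    rgb: tuple of floats in 0-1. factor: e.g. 1.5 for a 50% boost.
    """
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    s = min(s * factor, 1.0)  # clamp so we never exceed full saturation
    return colorsys.hsv_to_rgb(h, s, v)

# A muted orange becomes more vivid; its hue is unchanged.
muted = (0.8, 0.6, 0.4)
vivid = boost_saturation(muted, 1.5)
```

Working in HSV (or LAB's chroma) avoids the hue shifts you get when pushing RGB gains independently.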
  2. Going back to my still photography days, density was a term used to describe the amount of silver (and hence the amount of dye in colour stock) in a negative. Underexposed negatives were called 'thin', whilst adding one or two stops of light above the optimum exposure would produce a denser negative. Since film had a good exposure latitude, it wasn't uncommon to overexpose by a stop or two. As long as you weren't blowing out highlights, this technique could capture more detail in the blacks and produce richer images due to the increase in silver/dye. However, this wasn't without its own problems, as colour film could produce hue shifts when subjected to overexposure. Fujifilm, for example, was known to lean towards magenta when overexposed. As others have stated, this characteristic of film density is being translated to digital images as a way of mimicking the film look.
  3. I've not seen the course myself, but I've heard good things about the Tom Cross Masterclass. See these links... https://www.mzed.com/educators/tom-cross
  4. Mixing Light has some good material on HDR. Most of it is behind a membership paywall, but they offer a seven-day trial. In fact, I found a free one on their site which I think is the sort of thing you're looking for. Admittedly, it's dealing with Dolby Vision, but it does talk about making SDR trims. https://mixinglight.com/color-grading-tutorials/getting-know-dolby-vision-hdr-part-2/
  5. My understanding is that the LG OLEDs can be decently calibrated for SDR work, but are unsuitable for any sort of accurate HDR grading. It's due to the fact that they're WOLEDs, and at higher light outputs the saturation drops off to an unacceptable level. I believe this topic has come up before in this forum and on the LGG forums. I seem to remember reading that the likes of Amazon and Netflix won't sign off on an HDR grade on these monitors.
  6. You need to think about whether speed or space is your primary concern. The OWC, with its USB 3.1 interface, is quoting around 400MB/s, but I'd be surprised if you get this with two spinning hard drives. It's more likely you might achieve this with SSD drives. The G-Technology is only USB 3.0 and quotes specs around half the speed of the OWC. If you're simply working at HD, then these interfaces should suffice, but if you want to work with 4K material, you're likely to need more speed. Things like Resolve caches/renders really benefit from fast storage (like NVMe storage). You might be able to use your internal SSD for these caches, which should achieve read/write speeds in excess of 2000MB/s. Having Thunderbolt 3 storage with SSDs is going to achieve the best speeds. If you only need a TB of storage, then getting something like the Samsung X5 (with its Thunderbolt 3 interface) would give you speeds of around 2500MB/s. Puget have an article about hardware recommendations for Resolve, which includes a section about storage. Have a look at... https://www.pugetsystems.com/recommended/Recommended-Systems-for-DaVinci-Resolve-187/Hardware-Recommendations
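To put those throughput figures in perspective, a quick back-of-the-envelope calculation (the file sizes below are made-up examples, not from the post):

```python
def transfer_seconds(size_gb, speed_mb_per_s):
    """Time to move size_gb gigabytes at a sustained speed in MB/s.

    Uses 1 GB = 1000 MB, matching how drive vendors quote speeds.
    """
    return (size_gb * 1000) / speed_mb_per_s

# A hypothetical 100 GB batch of 4K media:
#   ~250 s at the OWC's quoted 400 MB/s,
#   but only ~50 s from an internal NVMe SSD at ~2000 MB/s.
seconds_owc = transfer_seconds(100, 400)
seconds_nvme = transfer_seconds(100, 2000)
```

Real-world sustained speeds will be lower than quoted interface maximums, especially with spinning drives.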
  7. It all depends on your budget. Your Thunderbolt port will allow you to connect anything from a simple external SSD drive (1 to 4TB) right up to an external RAID enclosure with 6 or 8 drive slots. The cheaper SSD drives will be SATA, so will give you around 500 - 1000MB/s. A proper Thunderbolt SSD should give you speeds similar to your internal drive. A Thunderbolt RAID system will give much better storage capacity and performance, especially if you populate it with SSD drives, although this starts to become quite expensive. RAID systems can also offer protection against a drive failure if you're happy to trade some performance and storage capacity. Look at manufacturers such as Promise or OWC.
  8. It's the Artist Color that is EOL. The Artist Mix panels and Eucon are still supported. You're asking a lot for software that was never designed to run on an M1. I'm assuming the Eucon software is using Rosetta to function on your machine. This may well be introducing problems. Since Avid are still supporting Artist Mix panels (especially with Pro Tools), it's likely they'll get around to improving the Eucon software. One thing I would do is give your panel a static IP address. It's set to DHCP by default, so could change its IP address if switched off for a period. Although Eucon should still identify the panel after an IP address change, it's one less thing to complicate matters. The whole M1/Avid issue is constantly being discussed on the Avid forums. There's some info that Avid is publishing on the current state of support for M1/Big Sur, which might be useful reading. Link to article... https://avid.secure.force.com/pkb/articles/en_US/Compatibility/macOS-Big-Sur-Support
  9. The way I read it is that an OOTF is essentially a combination of an OETF and an EOTF. I don't see why you would use such a transfer function in normal situations. It seems to suggest that HLG is an encoding that has both scene-referred and display-referred relationships, and therefore seems to require an OOTF. Maybe a forward OOTF is when you're going to HLG and a reverse OOTF is when coming from HLG? I'm really clutching at straws here!
  10. There does seem to be a lack of information about these options. I wasn't even sure what the acronym stood for, but it seems to be 'Optical to Optical Transfer Function'. Still looking, but I found this in a technical article on colour science...

      "Output-referred spaces have only an Electro-Optical Transfer Function (EOTF), which defines the relationship between code values (CV) and display light. This may be defined relative to display peak luminance, or as an absolute encoding in terms of display cd/m2. Scene-referred spaces have only an Opto-Electrical Transfer Function (OETF), which defines the relationship between relative scene light and the encoded value. Some encodings, such as Hybrid Log-Gamma (HLG), define a relationship to both scene and display light, so have both an OETF and an EOTF. An EOTF is not necessarily the inverse of the corresponding OETF. The combination of the two is referred to as an Optical to Optical Transfer Function, or OOTF."

      Also found this...

      "Before we proceed, it must be reiterated that there are two aspects of Hybrid Log Gamma – a Scene Referred aspect, which is encoded directly from a camera, and a Display Referred aspect, which is relevant for displaying on a TV/monitor. The key differentiator is that in a Display Referred context, HLG normally includes an opto-optical transfer function (OOTF) which transforms Scene Referred content into Display Referred content, such as an HLG 1000-nit output from an HLG recording. Scene Referred content, on the other hand, is as encoded directly from a camera. When we import footage recorded from a camera, this is almost always Scene Referred."

      Not sure this makes it much clearer, but is it suggesting it would need to be used if creating HLG deliverables? I know the BBC were experimenting with HLG as a solution for terrestrial broadcasting, but I don't know if anyone on these forums has actually worked with this HDR standard.
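For the curious, the HLG OETF and OOTF are defined in ITU-R BT.2100. The sketch below uses the published BT.2100 constants for the OETF, plus a simplified achromatic form of the OOTF (the full OOTF applies the system gamma to luminance only, across R, G, and B together); treat it as illustrative rather than a production implementation:

```python
import math

# HLG OETF constants from ITU-R BT.2100
A = 0.17883277
B = 1 - 4 * A                    # 0.28466892
C = 0.5 - A * math.log(4 * A)    # 0.55991073

def hlg_oetf(e):
    """Scene-referred linear light (0-1) -> HLG signal value (0-1)."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return A * math.log(12 * e - B) + C

def hlg_ootf_achromatic(e, peak_nits=1000.0, gamma=1.2):
    """Simplified HLG OOTF for a grey (R=G=B) signal:
    scene-referred light -> display light in nits.
    gamma=1.2 is the nominal system gamma for a 1000-nit display."""
    return peak_nits * e ** gamma
```

In these terms, the OOTF is what maps scene light to display light (conceptually, EOTF applied after OETF), which matches the quoted articles: going to an HLG display deliverable applies the forward OOTF, and recovering scene light applies its inverse.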
  11. Sohonet seems popular with high-end post houses here in London, UK. One of the issues with any online storage is decent upload speeds. Many ADSL broadband connections have abysmal upload speeds. Sending 300 GB when your upload speed tops out at a couple of Mbps is going to be painful!
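"Painful" is an understatement; the arithmetic is worth doing (note the bits-vs-bytes conversion, since link speeds are quoted in megabits):

```python
def upload_days(size_gb, mbps):
    """Days to upload size_gb gigabytes over a link of mbps megabits/s."""
    bits = size_gb * 1e9 * 8        # gigabytes -> bits
    return bits / (mbps * 1e6) / 86400

# 300 GB at 2 Mbps is roughly two weeks of continuous uploading.
days = upload_days(300, 2)
```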
  12. I wouldn't have thought Blackmagic would have supplied training footage that only worked with Resolve Studio. It's likely that most people doing the training would be using the free version. I've seen the footage from the beginner's guide and it mostly seems to be ProRes. Is it possible that the footage is Avid DNxHD? I know that you used to have to get those codecs directly from Avid. They're free of charge - just search for LE codecs on the Avid website.
  13. If you're talking about the film emulation LUTs included in Resolve, Darren Mostyn covered this recently on YouTube.
  14. There are lots of tutorials on YouTube with titles like "Five Mistakes Amateur Colourists Make" and "The Pro Colourist Secret to Beautiful Cinematic Images". They're trying to sell the idea that there's a right way and all other ways are inferior/wrong. I appreciate there's a certain amount of click-bait going on here, but looking at the number of 'likes' and general comments, a lot of people are being sucked into this way of thinking. One of these 'secret-sauce' ideas is that by working in ACES, you'll automatically achieve the so-called cinematic look. It seems to me that every pro colourist who takes the time to comment on their workflow will have a different way of achieving their goal. It's about developing your own style and methods to achieve the results that clients want. Unfortunately, many aspiring colourists don't want to hear this, as it involves patience and a few years of 'skill-honing' (is that a word?). They want the industry secrets they think top colourists jealously guard. I'm not suggesting that you can just grade in any haphazard manner. There's much that professional colourists can teach about being methodical and efficient in your approach, but that doesn't have the wow factor of pulling out some fancy LUT that will do all the work for you.