Now that high-end cameras are becoming cheaper and more accessible, raw formats are increasingly utilised. So what exactly are they?
In simple terms: light passes through the lens, hits the sensor, and is digitised into 1s and 0s; that camera data is then encoded and transformed into RGB video clips inside the camera. Higher-end cameras have the option of bypassing that process and recording the sensor data in raw formats.
Raw (not an abbreviation) is as close to what the camera sensor sees as possible. But it is partial information: each photosite records only one colour, so the data has to be interpolated before a full RGB image can be reconstructed. This “debayering” process is done inside your computer instead of in-camera, and it is done every single time you press play.
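To make the idea concrete, here is a minimal sketch of nearest-neighbour debayering in Python, assuming an RGGB Bayer pattern and NumPy. Real debayer algorithms interpolate far more cleverly; this just shows what "reconstructing RGB from partial information" means:

```python
import numpy as np

def debayer_nearest(mosaic):
    """Rebuild an RGB image from an RGGB Bayer mosaic (even dims assumed).

    Each 2x2 sensor block holds one red, two green and one blue sample;
    here every pixel in the block simply reuses those samples
    (nearest-neighbour). Real debayering interpolates much more cleverly.
    """
    h, w = mosaic.shape
    r = mosaic[0::2, 0::2]                       # red: even rows, even cols
    g = (mosaic[0::2, 1::2].astype(np.uint32)
         + mosaic[1::2, 0::2]) // 2              # average the two greens
    b = mosaic[1::2, 1::2]                       # blue: odd rows, odd cols
    rgb = np.empty((h, w, 3), dtype=mosaic.dtype)
    for i, plane in enumerate((r, g, b)):        # replicate each 2x2 block
        rgb[..., i] = np.repeat(np.repeat(plane, 2, axis=0), 2, axis=1)
    return rgb
```

Note that this work happens for every frame on playback, which is why the process adds up so quickly.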
Debayering can be a very intensive process, which is why you often need a powerful computer to handle it. Modern GPUs can do it, with performance depending on how much information has to be translated into an RGB image (higher resolution and bit depth mean more work), and the amount of data can sometimes vary from clip to clip. Footage whose data rate varies like this is known as having a variable bitrate.
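As a rough illustration of how resolution and bit depth drive the workload, here is a back-of-the-envelope calculation (the frame sizes are assumptions for illustration, not figures from any particular camera):

```python
def raw_frame_megabytes(width, height, bit_depth):
    """One uncompressed raw frame: one sample per photosite."""
    return width * height * bit_depth / 8 / 1e6

print(raw_frame_megabytes(1920, 1080, 10))   # ~2.6 MB per HD frame
print(raw_frame_megabytes(3840, 2160, 12))   # ~12.4 MB per UHD frame
```

Going from 10-bit HD to 12-bit UHD roughly quintuples the data the computer has to turn into RGB, every frame.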
Software like DaVinci Resolve lets you lower the quality of the debayer process, leaving less information for your computer to translate. This results in a poorer-quality but more manageable image.
In addition to that, raw footage can sometimes be compressed, typically (but not always) using a lossless approach that won’t degrade the information. This is often done with Red cameras (as used on Guardians of the Galaxy and others) and increasingly with BMD cameras.
If this is the case, your computer has to decompress and THEN debayer (translate into RGB) the footage. That extra step can tax your computer even more.
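As a toy illustration of those two steps, here is a sketch in Python that uses zlib as a stand-in for a camera's proprietary lossless codec; the function name and frame layout are hypothetical, and the "debayer" is just a grey preview:

```python
import zlib
import numpy as np

def play_compressed_frame(compressed_bytes, width, height):
    """Two-step playback for compressed raw (hypothetical layout):
    step 1 decompresses the stored frame, step 2 debayers it.
    zlib stands in for the camera's real codec, and the 'debayer'
    here is just a stand-in grey image built from the mosaic."""
    raw = zlib.decompress(compressed_bytes)                   # step 1
    mosaic = np.frombuffer(raw, dtype=np.uint16).reshape(height, width)
    return np.stack([mosaic] * 3, axis=-1)                    # step 2 (stand-in)
```

The point is simply that decompression has to finish before debayering can even start, so both stages must keep up with real time.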
Some GPUs are terrible at decompressing footage, which is why it can be very hard for a single GPU to do both steps, as Red footage requires. In fact, CPUs can sometimes be better at decompression (though worse at debayering), so a CPU and GPU working together can outperform a solitary, higher-spec GPU on its own.
One exception is the Red Rocket-X, a card made by Red specifically to help process Red footage. This bit of hardware can decompress and debayer footage amazingly well. BUT…
It struggles with the latest footage from Red’s own Helium cameras. Given that, I suspect they’ll release yet another card in the near future to handle the next generation of raw footage.
IF COMPRESSION IS HARDER, THEN WHY USE IT?
The higher the resolution and the denser the information per frame, the larger the files become. That leaves us needing storage that is both large AND very fast.
I won’t go into specifics, but all varieties of uncompressed raw take up a significant amount of space. The storage also needs to be fast enough to read and write all of that data in real time, which is why camera manufacturers recommend specific recording media, and some (like Arri) offer their own.
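To put rough numbers on that (these are assumed figures for illustration, not any specific camera's spec), uncompressed UHD raw at 12 bits and 24 fps works out to:

```python
def raw_data_rate_mb_s(width, height, bit_depth, fps):
    """Uncompressed raw data rate in megabytes per second."""
    return width * height * bit_depth * fps / 8 / 1e6

rate = raw_data_rate_mb_s(3840, 2160, 12, 24)   # ~299 MB/s sustained
per_hour_gb = rate * 3600 / 1000                # ~1075 GB shot per hour
print(rate, per_hour_gb, per_hour_gb / 4)       # 4:1 compression ≈ 270 GB/hour
```

Around a terabyte per hour of footage, at a sustained write speed no single ordinary drive can keep up with.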
This is why, as a data wrangler and DIT, I use enormous RAID arrays to store a project.
Now imagine you could get away with using a quarter of that storage. You save money. On the downside, you have to invest more in post-production hardware to handle the additional processing requirements.
I’m personally considering building a Linux workstation running DaVinci Resolve to handle the new super-high-res raw footage at full quality; I chose that software specifically because it can support up to 8 GPUs simultaneously (Linux version only).
Network that together with high-speed storage and a MacBook Pro and you’ll have quite a nice set-up.